id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2309.09100 | Btor2MLIR: A Format and Toolchain for Hardware Verification | Formats for representing and manipulating verification problems are extremely
important for supporting the ecosystem of tools, developers, and practitioners.
A good format allows representing many different types of problems, has a
strong toolchain for manipulating and translating problems, and can grow with
the community. In the world of hardware verification, and, specifically, the
Hardware Model Checking Competition (HWMCC), the Btor2 format has emerged as
the dominating format. It is supported by Btor2Tools, verification tools, and
Verilog design tools like Yosys. In this paper, we present an alternative
format and toolchain, called Btor2MLIR, based on the recent MLIR framework. The
advantage of Btor2MLIR is in reusing existing components from a mature compiler
infrastructure, including parsers, text and binary formats, converters to a
variety of intermediate representations, and executable semantics of LLVM. We
hope that the format and our tooling will lead to rapid prototyping of
verification and related tools for hardware verification. | Joseph Tafese, Isabel Garcia-Contreras, Arie Gurfinkel | 2023-09-16T21:49:24Z | http://arxiv.org/abs/2309.09100v1 | # Btor2MLIR: A Format and Toolchain for Hardware Verification
###### Abstract
Formats for representing and manipulating verification problems are extremely important for supporting the ecosystem of tools, developers, and practitioners. A good format allows representing many different types of problems, has a strong toolchain for manipulating and translating problems, and can grow with the community. In the world of hardware verification, and, specifically, the Hardware Model Checking Competition (HWMCC), the Btor2 format has emerged as the dominating format. It is supported by Btor2Tools, verification tools, and Verilog design tools like Yosys. In this paper, we present an alternative format and toolchain, called Btor2MLIR, based on the recent MLIR framework. The advantage of Btor2MLIR is in reusing existing components from a mature compiler infrastructure, including parsers, text and binary formats, converters to a variety of intermediate representations, and executable semantics of LLVM. We hope that the format and our tooling will lead to rapid prototyping of verification and related tools for hardware verification.
## I Introduction
Hardware Verification has been one of the biggest drivers of formal verification research [1], with a history that spans many breakthroughs. The developments in this field have thrived through organized events such as the Hardware Model Checking Competition (HWMCC) [2] which has run since \(2011\). Btor2[3] has emerged as the dominating format in this competition. Btor2 has been translated into several languages, for example, Constrained Horn Clauses (CHCs)12 and LLVM-IR3 to make use of existing verification techniques. Universality, however, was not an objective of these projects, and thus, for these translations, be it to CHCs or to LLVM-IR, similar tasks had to be replicated.
Footnote 1: [https://github.com/zhanghongce/HWMCC19-in-CHC](https://github.com/zhanghongce/HWMCC19-in-CHC)
Footnote 2: [https://github.com/stepwise-alan/btor2chc](https://github.com/stepwise-alan/btor2chc)
Footnote 3: [https://github.com/stepwise-alan/btor2llvm](https://github.com/stepwise-alan/btor2llvm)
During the past decade, the LLVM project [4] has dedicated significant effort to universality. One such effort is MLIR [5], a project that proposes a generic intermediate representation with operations and types common to many programming languages. MLIR was designed to be easily extensible, by providing tools to build new intermediate representations (IR) as dialects of the base MLIR. This eases the creation of new compilers, circumventing the need to re-implement core technologies and optimizations. Extensibility and scalability are what MLIR strives for, making it a great candidate for the creation of new tools and formats that represent many types of problems and have strong tool support for manipulating and translating problems.
During the same time, with the rise of LLVM as a compiler infrastructure, many software verification tools have been built for LLVM-IR programs. Existing tools tackle this hard problem in many ways. For example, dynamic verification is implemented in LibFuzzer[6], a fuzzer, and Klee[7], a symbolic execution engine; SMT-based static verification is implemented in SeaHorn[8] both as Bounded and Unbounded Model Checking; and Clam[9], a static analyzer, analyzes LLVM-IR using abstract interpretation.
This paper contributes Btor2MLIR, a format and toolchain for hardware verification. It is built on MLIR to incorporate advances and best practices in compiler infrastructure, compiler design, and the maturity of LLVM. At its core, Btor2MLIR provides an intermediate representation for Btor2 as an MLIR dialect. This dialect has an encoding very close to Btor2 and preserves Btor2's semantics. This design not only facilitates the creation of a new format for hardware verification but also simplifies the extension of this format to support future targets by using MLIR for all intermediate representations. For example, Btor2MLIR can be used to generate LLVM-IR from our custom MLIR dialect. The value of this approach is quite evident in CIRCT [10], an open-source project that applies this design to tackle the inconsistency and usability concerns that plague tools in the Electronic Design Automation industry. Although it has a different goal than Btor2MLIR, both projects draw great benefit from applying an MLIR-based design to their respective fields.
As an added bonus, using Btor2MLIR to generate LLVM-IR enables the reuse of established tools to apply software verification techniques to verify hardware circuits. To illustrate the usability of the toolchain, a new model checker is developed using SeaHorn. The results are compared to BtorMC [3], a hardware model checker provided by the creators of Btor2.
The rest of the paper is organized as follows. Section II lays some background. Our format and toolchain, Btor2MLIR, is described in Section III. We discuss its correctness in Section IV and evaluate the tool in Section V. We close with a note on related work in Section VI and conclude in Section VII.
## II Background
Btor2: Btor2[3] is a format for quantifier-free formulas and circuits over bitvectors and arrays, with Smt-Lib[11] semantics, that is used for hardware verification. Btor2 files are often generated using tools like Yosys[12], from the original design in a language like Verilog[13]. A simple 4-bit counter is shown in Fig. 1. Its corresponding description, in Verilog, is shown in Fig. 1b. The circuit updates its output at each step starting from \(0\) to its maximum value, \(15\). It also has the safety property that the output should not be equal to \(15\), shown by the assertion in Fig. 1b. The circuit together with the desired safety property are captured in Btor2 in Fig. 1c. First, a bitvector sort of width \(4\) is defined as '1' in line \(1\). Sorts are used later when declaring registers and operations. For example, lines \(2\), \(3\), \(5\), and \(8\) refer to sort '1', respectively, by declaring '2' to be a zero bitvector (\(0000\)) (line 2), state out to be a register of sort '1' (line 3), '5' to be a one bitvector (\(0001\)) (line 5) and '8' to be a bitvector of ones (\(1111\)) (line 8). On line 4, out is initialized with value '2'. On line \(7\), the transition function is defined (activated at each clock edge), by assigning the next state of out to the value out incremented by one (the result of line \(6\)). Finally, a safety property is defined in line \(11\) with the keyword bad, requiring that the equality of line \(10\) does not hold. That is, the value of out is never \(1111\). Note that no clock is specified in Fig. 1c. In Btor2 it is always assumed that there is one single clock, and the keyword next is used to declare how registers are updated after a clock cycle. For a register that has not been assigned a next value, it will get a new non-deterministic value or keep its initial value (if one was given).
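For reference, the Btor2 listing described line by line above can be reconstructed as follows (a sketch of the contents of Fig. 1c; line 9 declares the 1-bit sort needed by the eq comparison, and node numbering in the actual figure may differ):

```
1 sort bitvec 4
2 zero 1
3 state 1 out
4 init 1 3 2
5 one 1
6 add 1 3 5
7 next 1 3 6
8 ones 1
9 sort bitvec 1
10 eq 9 3 8
11 bad 10
```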
BtorMC: BtorMC[3] is a bounded model checker (BMC) for Btor2. BtorMC generates verification conditions as Smt formulas and uses Boolector[3] as an Smt solver. Based on the satisfiability result of the formula, BtorMC on our example tells us that the safety property is violated, as expected, since out does reach a state with value \(1111\).
MLIR: Multi-Level Intermediate Representation (MLIR) [5] is a project that was developed for TensorFlow [14] to address challenges faced by the compiler industry at large: modern languages end up creating their own high-level intermediate representation (IR) and the corresponding technologies. Furthermore, these domain-specific compilers have to be recreated for different compilation and optimization targets and do not easily share a common infrastructure or intermediate representations. To remedy this, MLIR facilitates the design and implementation of code generators, translators, and optimizers at different levels of abstraction and also across application domains, hardware targets, and execution environments.
Modern languages vary in the set of operations and types that they use, hence the need to create domain-specific high-level IRs. MLIR addresses this problem by making it easy for a user to define their own dialects. An MLIR dialect captures the operations and types of a target language. It is created using TableGen, a domain-specific language for defining MLIR dialects, which is used to automatically generate code to manipulate the newly defined dialect, including its Abstract Syntax Tree (AST) and parsing. MLIR tools and optimizations such as static single assignment, constant propagation, and dead-code elimination can be applied off the shelf to custom MLIR dialects. These capabilities make MLIR a reusable and extensible compiler infrastructure. One of its strengths is the set of builtin dialects it provides, such as the Builtin, Standard, and LLVM dialects4, among others. These dialects make it possible to have a rich infrastructure for dialect conversion that enables a user to define pattern-based rewrites of operations from one dialect to another. For example, a dialect conversion pass is provided to convert operations in the Standard dialect to operations in the LLVM dialect. MLIR also provides an infrastructure for user-defined language translation passes. One such pass that is provided out of the box is a translation from LLVM dialect to LLVM-IR.
Footnote 4: [https://github.com/llvm/llvm-project/tree/release/14.x/mlir/include/mlir/Dialect](https://github.com/llvm/llvm-project/tree/release/14.x/mlir/include/mlir/Dialect)
## III Btor2MLIR
We present our tool, Btor2MLIR, which contributes the Btor Dialect and three modules built on the existing MLIR infrastructure: a Btor2 to Btor Dialect translation pass, a Btor Dialect to Btor2 translation pass, and a dialect conversion pass from Btor Dialect to LLVM dialect. Our tool has approximately \(3\,900\) lines of C++ code and \(1\,200\) lines of TableGen. Fig. 2 shows the architecture of our tools with our contributions highlighted in green. Btor2MLIR uses the original Btor2 parser provided in Btor2Tools[3], marked in blue, and MLIR builtin passes, marked in brown. Btor2MLIR is open-sourced and publicly available on GitHub5.
Footnote 5: [https://github.com/jetafese/btor2mlir/tree/llvm-14](https://github.com/jetafese/btor2mlir/tree/llvm-14)
We illustrate how each of the components of Btor2MLIR works by translating a factorial circuit, shown in Fig. 3a, that is described in Btor2. There are two safety properties, one per bad statement. Line \(14\) states that the loop counter, i, reaches \(15\). Line \(19\) states that the value of factorial is always even.
Btor Dialect: Our first contribution is the Btor Dialect, an MLIR dialect to represent Btor2 circuits. Fig. 3b shows the Btor Dialect code corresponding to Fig. 3a. It represents the execution of the circuit using an MLIR function main. The control flow is explicit, using a standard MLIR representation of basic blocks with arguments and branches. The example has two basic blocks: an unnamed initial block (bb0) and a block bb1. Circuit initialization is modeled by instructions in bb0, and each cycle by instructions in bb1. Note that bb1 has two predecessors: bb0 for initialization and bb1 for each cycle. Bitvector types are mapped to integer types (provided by MLIR), for example, bitvec 4 becomes i4. Each operation in the Btor Dialect, prefixed with btor, models a specific Btor2 operation. For example, btor.mul
is mul, and btor.slice is slice. Safety properties such as bad are represented by btor.assert_not. Special operators such as one, ones and constd are represented by the btor.constant operation with the expected integer value. Boolean operators are represented by btor.cmp. For example, eq becomes btor.cmp eq.
Translating Btor2 to Btor Dialect: Btor2MLIR takes Btor2 circuits as input, using Btor2Tools to create a data structure for each Btor2 line. Our pass then generates a program in Btor Dialect by constructing the appropriate MLIR AST. Each Btor2 operator is mapped to a unique operation in Btor Dialect, a capability that is greatly simplified and enabled by the MLIR infrastructure.
A program in MLIR can be written using multiple dialects since the MLIR framework enables the interaction of multiple IRs. To enable this capability, MLIR provides dialects that are designed to serve as building blocks for more domain-specific dialects. We utilized the framework by building the Btor Dialect using the Standard and Builtin dialects. For example, we use the _module_, _func_ and _bb_ operations from Builtin. We utilize the _br_ operation in the Standard dialect to enable interaction between the two basic blocks in Fig. 3(b). This approach is consistent with the intended use of the Standard and Builtin dialects. It saves time and effort since we do not need to recreate operations that already exist in other dialects. Furthermore, MLIR provides a conversion pass from Standard dialect to LLVM dialect, making it worthwhile to build Btor Dialect on top of the Builtin and Standard dialects.
Dialect conversion: The Btor2MLIR conversion pass from Btor Dialect to LLVM dialect utilizes the MLIR infrastructure for pattern-based rewrites. It rewrites Btor Dialect operations into LLVM dialect operations. For most operations in Btor Dialect there exists a semantically equivalent operation in LLVM dialect. For example, _btor.constant_ in Fig. 3(b) is converted to _llvm.mlir.constant_ in LLVM dialect. For some operations, an equivalent in LLVM dialect does not exist; in these cases it is required to rewrite them into several LLVM operations (e.g., for _btor.slice_) and/or to modify the module structure (e.g., for _btor.assert_not_). In LLVM dialect, _btor.slice_ is replaced by a logical shift right, _llvm.lshr_, and a truncation operation, _llvm.trunc_. _btor.assert_not_ is mapped to a new basic block in the LLVM dialect that has the _llvm.unreachable_ operation. We split the basic block _bb1_, in Fig. 3(b), by adding a conditional branch, _llvm.cond_br_, to direct control flow to the new block when the assertion is satisfied.
Translate LLVM Dialect to LLVM-IR: Btor2MLIR uses a translation pass from LLVM dialect to LLVM-IR, provided by MLIR. Note the optimizations in the resulting LLVM-IR, shown in Fig. 3(c), such as constant propagation and phi nodes.
## IV Correctness
When introducing a new tool or framework to the community, there is always a question of how polished it is. Btor2MLIR builds on two mature frameworks: Btor2Tools and MLIR. This is done not only because of the frameworks' functionalities, but because they have been extensively reviewed, used, and tested. Btor2Tools has been widely used in the hardware model-checking community since its introduction in 2018. MLIR builds on LLVM, a compiler framework that has been used and improved over numerous projects in the last two decades and is actively supported by industry.

Fig. 1: 4-bit counter.

Fig. 2: Btor2MLIR Architecture.
Specifically, Btor2MLIR uses the parser from Btor2Tools to generate corresponding operations and functions in the Btor Dialect of MLIR. The Btor Dialect is written in TableGen, an MLIR domain-specific language for dialect creation. We show how our dialect and the class of binary operations are defined in Fig. 4(a). For example, the BtorBinaryOp class defines a class of operations that have two arguments lhs, rhs and a result res. It also has a trait SameOperandsAndResultType to enforce that lhs, rhs and res have the same type. Finally, the class specifies how the default MLIR parsers and printers should handle such operations. We create our arithmetic operations as shown in Fig. 4(b). We mark relevant operations as Commutative. Operation descriptions are not shown for simplicity. We ensure that each Btor2 operator has a one-to-one mapping with an operation in the Btor Dialect so that the translation from Btor2 to Btor Dialect is lossless and preserves Btor2 semantics.
Btor2MLIR relies on the optimization, folding, and canonicalization passes that MLIR provides in its translation from the LLVM Dialect in MLIR to LLVM-IR. MLIR also provides the mechanism for pattern-based rewrites, which has helped us avoid the introduction of undefined behavior into the resulting LLVM-IR. We show an example of this in Fig. 5. MLIR allows us to identify which operations in the Btor Dialect we want to replace at the end of our conversion pass. A subset of such operations is shown in Fig. 5(a). For each operation that has been identified, we provide a lowering that maps it to a legal operation in the LLVM dialect. We are able to use lowering patterns like VectorConvertToLLVMPattern from MLIR for common arithmetic and logical operations, as shown in Fig. 5(b).
We performed extensive testing using the HWMCC20 benchmark set to verify the correctness of Btor2MLIR. This is the same benchmark set used to test [15]. The tests are run on a Linux machine with x86_64 architecture, using BtorMC
with an unroll bound of \(20\), a timeout of \(300\) seconds and a memory limit of \(65\) GB. We present the results in Table I, where bitvector benchmark categories are in the top half and bitvector + array benchmark categories are in the bottom half. All times in this table reflect solved instances and do not include timeouts. We do not show the time it takes to run Btor2MLIR since the time is negligible. The results are grouped by competition contributor such that each row shows the time, instances solved (safe/unsafe) and timeouts (TO) for both the original and round-tripped circuits. For example, for the wolf/18D category, we can see that the original Btor2 circuit solves 34 safe instances and 0 unsafe instances in 157 seconds, with two timeouts. The round-tripped circuit solves 34 safe instances and 0 unsafe instances in 168 seconds, with two timeouts.
We can see that the safety properties in Btor2 circuits are neither changed nor violated after being round-tripped by Btor2MLIR. In two categories with only bitvectors, 19/beem and 19/mann, one more instance in each category is found safe after the round trip, while the original circuit leads to a memout and a timeout, respectively. This gives us confidence that the translation to Btor Dialect, using the Btor2Tools parser, is indeed correct. Then, we tested whether the same holds after translation to LLVM-IR. Through this method, we were able to ensure that Btor2MLIR does not have errors when handling operations that are represented in the benchmark set. This approach is not complete, however, since it would not identify errors that might be in our implementation but are not exercised by the benchmarks we use. For example, Btor2 expects that a division by zero would result in \(-1\), but there are no benchmarks that exercise this kind of division. We mitigate this by generating benchmarks for division, remainder, and modulus operators to ensure that the expected behavior of Btor2 operators is represented in our test suite.
In the future, it is interesting to explore other translation validation and verification approaches. For example, it would be useful for Btor2MLIR to produce a proof trail that justifies all of the transformations that are performed by the tool. This, for example, might be possible to achieve by building on the work of [16, 17].
Limitations: Btor2MLIR is able to round-trip Btor2 operators and their sorts. In LLVM-IR, all Btor2 operators and their sorts are supported as well, but not fairness and justice constraints.
## V Evaluation
To evaluate Btor2MLIR, we have built a prototype hardware model checker by connecting our tool with SeaHorn[8], a well-known model checker for C/C++ programs that works at the LLVM-IR level. It has recently been extended with a bit-precise Bounded Model Checking engine [18]. This BMC engine was evaluated in a recent case study [19] and we use the same configuration of SeaHorn in our evaluation.
The goal of our evaluation is to show that Btor2MLIR makes it easy to connect hardware designs with LLVM-based verification engines. We did not expect the existing software engines to outperform dedicated hardware model checkers. However, we hope that this will enable further avenues of research. In the future, we plan to extend the framework to support other LLVM-based analysis tools, such as the symbolic execution engine KLEE [7] and the fuzzing framework LibFuzzer [6].
For the evaluation, we have chosen the bitvector category of btor benchmarks from the most recent Hardware Model Checking Competition (HWMCC) [2]. We have excluded benchmarks with arrays since the export to LLVM-IR is not supported by SeaHorn in our experimental setup. All our experiments are run on a Linux machine with x86_64 architecture, with unroll bound of \(20\), a timeout of \(300\) seconds and memory limit of \(65\) GB. The results are presented in Table II, grouped by competition contributor. All times in this table reflect solved instances and do not include timeouts. We do not show the time it takes to run Btor2MLIR since the time is negligible. In the rest of this section, we highlight some of the interesting findings.
We have run BtorMC on the same machine and exact same experimental setup (unroll bound and CPU and memory limits). We chose BtorMC because it is well integrated with the HWMCC environment and is specifically designed for Btor2. The results of running BtorMC are shown in the first columns of BtorMC in Table II. For each category, we show the total time for all instances that are solved in that category, and the number of instances that are solved as safe, unsafe, and timed-out (TO), respectively. For example, the 20/mann category is solved in 257 seconds, 10 instances are safe, 5 are unsafe, and no instance has timed out. The performance of BtorMC is quite good across the board.

Fig. 4: Using TableGen for Dialect Creation.
We evaluate the problems generated by Btor2MLIR by plugging them into SeaHorn. SeaHorn pre-processes programs before attempting to verify them. This includes standard LLVM optimizations (i.e., -O3), loop unrolling, and loop cutting. We found that SeaHorn was able to, in some instances, remove the assertions in the LLVM-IR, meaning that the program was found to be safe statically, before invoking the BMC. The BMC also runs simplifications on the formulas that it sends to Z3, its default underlying Smt solver. The results for this run are shown in the Z3 columns of Table II. For example, the 20/mann category is solved in 94 seconds, 8 instances are safe, 5 are unsafe and 2 have timed out. The reported time does not include the instances that have timed out.

Fig. 5: Using Pattern-Based Rewriters in MLIR.

Fig. 6: Verification Strategies.
The aggregate time of SeaHorn on most of the categories is higher than that of BtorMC, often by a significant amount. We looked into this and found that SeaHorn treats the given bound as a lower bound, rather than an upper bound. That is, it ensures that it unrolls the programs to a depth of at least \(20\), but it may continue past that point. Taking this into account, we ran BtorMC with a bound of \(25\). The results are in the second columns of BtorMC in Table II. As expected, its aggregate times are higher than the run of BtorMC with bound \(20\). We notice, however, that it is slower than SeaHorn in the 19/mann category.
Boolector and Z3 are the Smt solvers used by BtorMC and SeaHorn respectively. Given that Boolector is optimized for Btor2 circuits, we evaluated whether the Smt formulas generated by SeaHorn would be solved faster by Boolector. The results for generating Smt-Lib formulas using SeaHorn are presented in the VCGen column of Table II. The times are low for most categories except wolf/18D, wolf/19C, 19/goel and 19/mann. For example, it takes SeaHorn 175 seconds to generate the verification conditions for instances in the 19/mann category, with two timeouts. This includes the time it takes SeaHorn to print the Smt formulas to disk. We plug the resulting Smt formulas into Boolector and present the results in the btor columns of Table II. The results show that using SeaHorn to generate verification conditions and Boolector to solve these instances is often better than using BtorMC. For example, for category 19/mann, it takes 3015s for Boolector to solve \(44\) instances with \(9\) timeouts. Therefore, the total time for SeaHorn and Boolector (3 190) represents the time it takes to translate, generate Smt formula and verify the 19/mann category. Note that two of the 9 timeouts in this category are attributed to the fact that SeaHorn has a timeout when generating verification conditions.
To get the big picture of how the different infrastructures performed, we collected the results over all categories in Table III. From this table, we can see that our hybrid pipeline combining Btor2MLIR, SeaHorn, and Boolector solves \(240\) instances with \(84\) timeouts in \(6\,796\)s (sum of VCGen and btor total times), which is very encouraging. We also present plots that compare the different pipelines that have been explored in Fig. 7. We set the time for all timeout instances to 350 seconds so that they are distinguished from instances that were solved close to the timeout threshold. First, we look at the performance of the hybrid pipeline that combines Btor2MLIR, SeaHorn and its default Smt solver Z3 against BtorMC in Fig. 7(a). Z3 does as well as BtorMC for most instances that are easy; however, it struggles when the problems are harder. This is not as clear from Table II since it focuses on the number of timeouts and benchmarks solved. Second, we present the performance of BtorMC against the hybrid pipeline that combines Btor2MLIR, SeaHorn and Boolector in Fig. 7(b). We can see that there are more benchmarks that this pipeline solves faster than BtorMC. It is also clear that it solves more benchmarks than the Z3 configuration in Fig. 7(a), as we would expect from Table III. Third, we compare the two hybrid pipelines in Fig. 7(c). We can see that the configuration that uses SeaHorn to generate verification conditions and Boolector for solving easily outperforms the Z3 configuration.

TABLE II: HWMCC20 Results.

TABLE III: Total results for each tool.
## VI Related Work
Translating Btor2 circuits into other formats enables the application of different verification methods and techniques. The gains that can be made from applying one method of encoding over another could enable solving a class of benchmarks that are not solved with existing approaches.
Btor2LLVM6 and Btor2CHC7 are tools that convert Btor2 circuits to programs in LLVM-IR and CHCs, respectively. These tools are developed in Python in order to be lightweight, but end up repeating shared functionality and tools since they lack a common infrastructure. Translated Btor2 benchmarks8 have also been collected to facilitate research, but information on which tools were used to produce the CHC format is not publicly available. While a collection of translated benchmarks is valuable, it is important that there are tools to do the translation on demand. This enables rapid prototyping in a way that saved benchmarks do not.
Footnote 6: [https://github.com/stepwise-alan/btor2llvm](https://github.com/stepwise-alan/btor2llvm)
Footnote 7: [https://github.com/stepwise-alan/btor2chc](https://github.com/stepwise-alan/btor2chc)
Footnote 8: [https://github.com/zhanghongce/HWMCC19-in-CHC](https://github.com/zhanghongce/HWMCC19-in-CHC)
Btor2C [15] is a recent tool that converts Btor2 circuits to C programs. It has been used to facilitate the utilization of software analyzers by serving as a pre-processing step that bridges the gap between the world of software verification and hardware verification. There are limitations that arise, however, from differences in the semantics of Btor2 and C. An important limitation that C imposes on this project is the inability to represent arbitrary-width bitvectors. This means that Btor2 circuits which operate on bitvectors of width greater than 128 are not supported. These limitations, as well as Btor2C's lack of support for Btor2 operators that have overflow detection, are resolved by using LLVM-IR as the target language.
A common theme across these efforts is that they are not built on an architecture that can be easily extended. Each project aims to make it easier to utilize advances in formal verification, but they fail to offer a solution that does not require recreating components that already exist.
## VII Conclusion
In this paper, we present Btor2MLIR, a new format and toolchain for hardware verification, based on the MLIR intermediate representation framework of the LLVM compiler infrastructure. Our goal is to open new doors for the research and applications of hardware verification by taking advantage of recent innovations in compiler construction technology. We believe that this project opens new avenues for exploring the application of existing verification and testing techniques developed for software to the hardware domain. As a proof of concept, we have connected Btor2MLIR with the SeaHorn verification engine. While this gives acceptable performance out of the box, combining it with Boolector yields a pipeline that is competitive against BtorMC. In the future, we plan to continue this line of research and explore applying testing and simulation technologies such as Klee[7] and LibFuzzer[6]. We also plan to generate formats for other verification techniques such as Aiger[20], Constrained Horn Clauses, and Smt-Lib.
|
2305.19978 | On the fundamentality of the radial acceleration relation for late-type
galaxy dynamics | Galaxies have been observed to exhibit a level of simplicity unexpected in
the complex galaxy formation scenario posited by standard cosmology. This is
particularly apparent in their dynamics, where scaling relations display much
regularity and little intrinsic scatter. However, the parameters responsible
for this simplicity have not been identified. Using the Spitzer Photometry &
Accurate Rotation Curves galaxy catalogue, we argue that the radial
acceleration relation (RAR) between galaxies' baryonic and total dynamical
accelerations is the fundamental $1$-dimensional correlation governing the
radial (in-disk) dynamics of late-type galaxies. In particular, we show that
the RAR cannot be tightened by the inclusion of any other available galaxy
property, that it is the strongest projection of galaxies' radial dynamical
parameter space, and that all other statistical radial dynamical correlations
stem from the RAR plus the non-dynamical correlations present in our sample. We
further provide evidence that the RAR's fundamentality is unique in that the
second most significant dynamical relation does not possess any of these
features. Our analysis reveals the root cause of the correlations present in
galaxies' radial dynamics: they are nothing but facets of the RAR. These
results have important ramifications for galaxy formation theory because they
imply that to explain statistically late-type galaxy dynamics within the disk
it is necessary and sufficient to explain the RAR and lack of any significant,
partially independent correlation. While simple in some modified dynamics
models, this poses a challenge to standard cosmology. | Richard Stiskalek, Harry Desmond | 2023-05-31T16:03:46Z | http://arxiv.org/abs/2305.19978v2 | # On the fundamentality of the radial acceleration relation for late-type galaxy dynamics
###### Abstract
Galaxies have been observed to exhibit a level of simplicity unexpected in the complex galaxy formation scenario posited by standard cosmology. This is particularly apparent in their dynamics, where scaling relations display much regularity and little intrinsic scatter. However, the parameters responsible for this simplicity have not been identified. Using the Spitzer Photometry & Accurate Rotation Curves galaxy catalogue, we argue that the radial acceleration relation (RAR) between galaxies' baryonic and total dynamical accelerations is the fundamental correlation governing the radial (in-disk) dynamics of late-type galaxies. In particular, we show that the RAR cannot be tightened by the inclusion of any other available galaxy property, that it is the strongest projection of galaxies' radial dynamical parameter space, and that all other statistical radial dynamical correlations stem from the RAR plus the non-dynamical correlations present in our sample. We further provide evidence that the RAR's fundamentality is unique in that the second most significant dynamical relation does not possess any of these features. Our analysis reveals the root cause of the correlations present in galaxies' radial dynamics: they are nothing but facets of the RAR. These results have important ramifications for galaxy formation theory because they imply that to explain statistically late-type galaxy dynamics within the disk it is necessary and sufficient to explain the RAR and lack of any significant, partially independent correlation. While simple in some modified dynamics models, this poses a challenge to standard cosmology.
keywords: galaxies: kinematics and dynamics - dark matter - methods: data analysis
## 1 Introduction
A key task of astrophysics is to test the galaxy formation scenario posited by the concordance \(\Lambda\) Cold Dark Matter model of cosmology (\(\Lambda\)CDM; Rees & Ostriker, 1977; White & Rees, 1978; White & Frenk, 1991). This may be done by studying unusual or otherwise highly constraining individual or small sets of objects, for example satellites in coherent planes (Pawlowski, 2018), tidal dwarf galaxies (Bournaud, 2010), dark matter deficient galaxies (van Dokkum et al., 2018), massive high-redshift galaxies (Lovell et al., 2023) and large clusters (Asencio et al., 2021). Alternatively one may ask whether the galaxy population as a whole is as expected in \(\Lambda\)CDM. A key feature of galaxy formation in \(\Lambda\)CDM is that it is highly stochastic: not only do the properties of galaxies depend on the complex merger and mass accretion histories of the dark matter haloes in which they live, but chance baryonic phenomena such as supernovae and feedback from active galactic nuclei can heat and redistribute mass, significantly altering galaxies' photometry and kinematics. One way to test the model is, therefore, to ask whether this complexity is manifest in galaxy phenomenology.
Surprisingly, galaxies are observed to be rather simple systems. This is most readily shown by applying principal component analysis (PCA; Pearson, 1901; Hotelling, 1936) to galaxy photometry, which calculates the fraction of the covariance in the data that may be accounted for by selected linear combinations of fluxes in various bands. On multiple data sets and a wide range of variations of the basic technique (e.g. including derived quantities such as star formation rate, size, gas mass and morphology) it is found that only 1-3 principal components are required to explain the great majority (\(\gtrsim 90\) per cent) of the data covariance (Disney et al., 2008; Conselice, 2006; Garcia-Appadoo et al., 2009; Whitmore, 1984; Brosche, 1973; Bujarrabal et al., 1981; Chang et al., 2010; Cooray et al., 2022). Nonlinear dimensionality reduction techniques such as variational auto-encoders and self-organizing maps have also been applied, and the results extended to galaxies' spectra (Portillo et al., 2020; Vanderplas & Connolly, 2009; in der Au et al., 2012; Davidzon et al., 2022; Hemmati et al., 2019; Rahmani et al., 2018). These studies again indicate that fewer pieces of information are required to reproduce galaxy properties at high fidelity than would
be expected in a model where the entire past history of the galaxy and its halo is important.
These findings have analogues in galaxy kinematics, which may be viewed as the set of correlations between the internal motions of galaxies, tracing the underlying potential, and their baryonic mass distributions. To first order the radial kinematics can be summarized by a characteristic velocity and the mass distribution by a total mass and size, leading to the Tully-Fisher relation (TFR; Tully and Fisher 1977) for late-type galaxies and the fundamental plane (FP; Djorgovski and Davis 1987; Dressler et al. 1987) for early-types. The baryonic TFR has very small intrinsic scatter and is a near-perfect power-law over five decades of mass (McGaugh et al., 2000; McGaugh, 2012), with few clear outliers (Mancera Pina et al., 2019, 2020). This essentially indicates that two independent degrees of freedom suffice to explain the data (the normalization and power-law index), and perhaps more significantly that, at fixed mass, surface brightness is asymptotically irrelevant for the rotation velocity (Lelli et al., 2016; Desmond et al., 2019). The FP is a planar manifold in {mass, velocity, size} space, also with very small intrinsic scatter and a tilt relative to the expectation from the Newtonian virial theorem (Pizagno et al., 2007; Cappellari et al., 2013; Desmond and Wechsler, 2017).
In late-type galaxies, the global radial (i.e. in-disk) kinematic behaviour is subsumed by a local relation between the baryonic (\(g_{\rm bar}\)) and dynamical (\(g_{\rm obs}\)) centripetal acceleration across rotation curves (RCs) known as the mass discrepancy-acceleration or radial acceleration relation (RAR; Milgrom, 1983; Sanders, 1990; McGaugh, 2004; Lelli et al., 2017). This is highly regular and possesses little intrinsic scatter, and may be fit near-perfectly by functions with only 1-2 free parameters (Lelli et al., 2017; Desmond et al., 2023). It has recently been shown that the intrinsic or underlying RAR--obtained by marginalizing over all relevant galaxy variables-- has a scatter of \(0.034\pm 0.002\) dex around a 1-parameter function in the Spitzer Photometry & Accurate Rotation Curves (SPARC; Lelli et al. 2016) sample, making it the tightest known dynamical galaxy scaling relation (Desmond, 2023). Evidence is growing that early-type galaxies in fact follow the same RAR as late-types (Lelli et al., 2017; Shelest and Lelli, 2020; Chae et al., 2019, 2020; Tian and Ko, 2017).
Without a clear prediction or explanation of galaxies' simplicity within concordance cosmology it is important to study it empirically in an attempt to learn the features that govern it. These are the features a theory must account for to explain galaxy phenomenology, and may also provide new empirical tools for measuring galaxies' properties (as the TFR and FP afford redshift-independent distance probes). Motivated by the striking properties of the RAR, we test here the hypothesis that it is the "fundamental" relation in (late-type) radial galaxy dynamics, meaning that it alone accounts statistically for _all_ correlations involving dynamics within the disk. If this is so, then the full set of such correlations amount to nothing more than the RAR in conjunction with the non-dynamical correlations present in the sample, and explaining them is tantamount to explaining the RAR (and lack of any competing correlation). We break this question into two which we tackle in turn:
1. Do the residuals of the RAR have a statistically significant correlation with any other galaxy or environmental property? (If not, the RAR is not fundamental because it can be augmented by inclusion of those properties.)
2. Is the RAR the tightest projection of galaxies' radial dynamical parameter space, and can all other projections be explained by the RAR in conjunction with non-dynamical correlations present among the galaxy properties? (If not, there is at least one other relation as important as the RAR.)
We consider an affirmative answer to both questions to be necessary and sufficient to establish the fundamentality of the RAR. We then further assess whether fundamentality is a _unique_ property of the RAR, or whether there are any other (partially) independent relations also satisfying these fundamentality conditions. These aims are achieved by means of non-linear machine learning algorithms.
The structure of the paper is as follows. In Section 2 we describe the SPARC galaxy sample on which we base our analysis. Section 3 lays out our methods, including the statistics and methods we employ and the generation of mock data to validate our procedure. Section 4 contains our results--separately for each question above--while Section 5 discusses the broader ramifications of our results, caveats and systematic uncertainties, and useful further work. Section 6 concludes. The reader without particular interest in the technicalities of the methods is advised to skip Section 3 and refer back to it as and when it is mentioned in Section 4. For brevity, we will henceforth refer to statistically averaged, equilibrium radial motions within the plane of the disk simply as "dynamics"; these caveats are discussed further in Section 5.2.1. All logarithms are base-10.
## 2 Observational data
We utilize the SPARC database1 containing 175 galaxy RCs with _Spitzer_ 3.6 \(\mu\)m photometry (Lelli et al., 2016). Following Lelli et al. (2017), we exclude galaxies with a quality flag of 3 (large asymmetry, non-circular motion, large offset between stellar and HI distribution), inclination less than 30 degrees, or fractional uncertainty on the observed rotational velocity exceeding 10 per cent. We are left with \(2,696\) RC samples from 147 late-type galaxies. We define the total disk and bulge surface brightness as \(\Sigma_{\rm tot}=\Sigma_{\rm disk}+\Sigma_{\rm bulge}\). Unresolved disk and bulge surface brightness, \(\Sigma_{\rm disk}\) and \(\Sigma_{\rm bulge}\), are replaced with a single value below the minimum resolved value of 0.01 \(L_{\odot}\)/pc\({}^{2}\) (which has no effect on our results).
Footnote 1: [http://astroweb.cwru.edu/SPARC/](http://astroweb.cwru.edu/SPARC/)
Table 1 lists the parameters used for predicting galactic dynamical variables and searching for residual correlations. Local parameters (one value per RC point per galaxy) are listed above the line, while galaxy-wide parameters (one value per galaxy) are listed below it. We take the logarithm of properties with a dynamical range spanning several orders of magnitude, as indicated in the final column.
Each of these parameters is contained in the SPARC database except for the gravitational field at the SPARC galaxies' position, sourced by surrounding baryonic mass. This is calculated in Chae et al. (2021) and will allow us to assess whether \(g_{\rm obs}\) depends on the external field. At fixed internal mass distribution this would constitute a violation of the strong equivalence principle, as has been argued for
by Chae et al. (2020, 2021, 2022). We adopt the results assuming maximal clustering of unseen baryons with visible structures, which maximizes data agreement using the fitting function of Chae & Milgrom (2022), as shown in Chae et al. (2021, 2022). The results assuming no clustering of unseen baryons are however very similar. The values of \(e_{\rm N}\) are available for only 90 galaxies in the Sloan Digital Sky Survey (SDSS; Kollmeier et al. 2017) footprint; when including \(e_{\rm N}\) as a feature, we fill the missing values with the median of the measured sample.
We calculate the uncertainty of \(g_{\rm bar}\) and \(g_{\rm obs}\) as
\[\delta g_{\rm bar} =\frac{1}{10}\frac{\Upsilon_{\rm gas}V_{\rm gas}^{2}}{r}+\frac{1}{4}\frac{\left(\Upsilon_{\rm disk}V_{\rm disk}^{2}+\Upsilon_{\rm bulge}V_{\rm bulge}^{2}\right)}{r}, \tag{1a}\] \[\delta g_{\rm obs} =g_{\rm obs}\sqrt{4\frac{\delta V_{\rm obs}^{2}}{V_{\rm obs}^{2}}+4\frac{\delta i^{2}}{\tan^{2}i}+\frac{\delta D^{2}}{D^{2}}}, \tag{1b}\]
where the numerical factors in \(\delta g_{\rm bar}\) correspond to the mass-to-light ratio uncertainties (Lelli et al. 2016). We assume no statistical uncertainty on the velocity components; however, the calculation in Lelli et al. (2016) requires a disk thickness which is expected to introduce unmodeled uncertainty to the baryonic velocities at the \(\sim 10-15\) per cent level (F. Lelli, priv. comm.). We propagate the uncertainty as \(\delta\log g_{\rm bar}=\delta g_{\rm bar}/\left(g_{\rm bar}\ln 10\right)\) and similarly for \(\delta\log g_{\rm obs}\).
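For concreteness, a minimal sketch of this propagation (Eqs. 1a and 1b followed by the conversion to logarithmic errors) is given below; the mass-to-light ratios are placeholder fiducial values rather than the fitted ones, and inclinations are assumed to be in radians.

```python
import numpy as np

def log_acceleration_errors(r, g_bar, g_obs, V_gas, V_disk, V_bulge,
                            V_obs, dV_obs, i, di, D, dD,
                            ups_gas=1.0, ups_disk=0.5, ups_bulge=0.7):
    """Propagate SPARC uncertainties following Eqs. (1a) and (1b).

    ups_* are placeholder mass-to-light ratios; returns (dlog g_bar, dlog g_obs).
    """
    dg_bar = (ups_gas * V_gas**2 / r) / 10 \
        + (ups_disk * V_disk**2 + ups_bulge * V_bulge**2) / (4 * r)
    dg_obs = g_obs * np.sqrt(4 * dV_obs**2 / V_obs**2
                             + 4 * di**2 / np.tan(i)**2
                             + dD**2 / D**2)
    # delta(log10 x) = delta(x) / (x ln 10)
    return dg_bar / (g_bar * np.log(10)), dg_obs / (g_obs * np.log(10))
```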
## 3 Methodology
In this section we detail the statistical tools used in our analysis. We discuss the analytic functions used to approximate the RAR, the partial correlation (PC) analysis, machine learning regression techniques, the loss function, and the generation of SPARC-like mock data with controlled dynamics. The main thrust of our analysis, including its results, can be found in Section 4, which refers back to this section for details.
### RAR analytic approximations
We consider two functions known to describe the RAR well. The first is the RAR interpolating function (RAR IF) introduced by Lelli et al. (2017):
\[g_{\rm obs}=\frac{g_{\rm bar}}{1-e^{-\sqrt{g_{\rm bar}/a_{0}}}}, \tag{2}\]
where \(a_{0}\sim 10^{-10}\) m/s\({}^{2}\) is a global characteristic acceleration scale. The second is the simple interpolating function + external field effect (Simple IF + EFE), which includes an additional parameter \(e_{\rm N}\) causing a downturn in the RAR at low \(g_{\rm bar}\)(Milgrom 1983; Chae & Milgrom 2022):
\[g_{\rm obs}=g_{\rm bar}\left(\frac{1}{2}+\left(\frac{1}{4}+\left(\left(\frac{g_{\rm bar}}{a_{0}}\right)^{2}+(1.1e_{\rm N})^{2}\right)^{-\frac{1}{2}}\right)^{\frac{1}{2}}\right)\times\] \[\left(1+\tanh\left(\frac{1.1e_{\rm N}}{g_{\rm bar}/a_{0}}\right)^{1.2}\times\left(-\frac{1}{3}\right)\times\right. \tag{3}\] \[\left.\frac{\left(\left(\frac{g_{\rm bar}}{a_{0}}\right)^{2}+(1.1e_{\rm N})^{2}\right)^{-\frac{1}{2}}}{1+\left(\left(\frac{g_{\rm bar}}{a_{0}}\right)^{2}+(1.1e_{\rm N})^{2}\right)^{-\frac{1}{2}}}\right).\]
\(e_{\rm N}\) may be considered as either a global constant or a galaxy-specific parameter, leading to a slightly distinct RAR for each galaxy. Studies of the SPARC data have provided evidence for the effect of \(e_{\rm N}\) (Chae et al. 2020, 2021, 2022; Chae & Milgrom 2022).
These empirical fitting functions hold particular significance in the theory of Modified Newtonian Dynamics (MOND; Milgrom 1983a,c,b), an alternative to the Newtonian gravity on which General Relativity and hence \(\Lambda\)CDM is based. Eq. (2) is a simple instantiation of the fundamental MOND postulate that \(g_{\rm obs}=g_{\rm bar}\) for \(g_{\rm bar}\gg a_{0}\) and \(g_{\rm obs}=\sqrt{g_{\rm bar}a_{0}}\) for \(g_{\rm bar}\ll a_{0}\). Eq. (3) models the external field effect (EFE) in the Aquadratic Lagrangian (AQUAL; Bekenstein & Milgrom 1984) theory: due to the nonlinear acceleration-based modification, galaxies with identical mass distributions but differing environments will have different internal dynamics. A stronger external field pushes a galaxy closer to the Newtonian limit and hence produces a fall-off in the RAR at low \(g_{\rm bar}\). In this interpretation, \(e_{\rm N}\equiv g_{\rm ext}/a_{0}\), where \(g_{\rm ext}\) represents the strength of the Newtonian gravitational field sourced by the surrounding baryonic mass.
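As an illustration, Eq. (2) amounts to a one-line function. A minimal sketch (with \(a_{0}\) set to its order-of-magnitude value rather than a fitted one) is:

```python
import numpy as np

A0 = 1e-10  # characteristic acceleration scale [m/s^2]; order of magnitude only

def rar_if(g_bar, a0=A0):
    """RAR interpolating function of Eq. (2)."""
    g_bar = np.asarray(g_bar, dtype=float)
    return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / a0)))

# Limits: g_obs -> g_bar for g_bar >> a0 (Newtonian regime) and
# g_obs -> sqrt(g_bar * a0) for g_bar << a0 (deep-MOND regime).
```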
### Partial correlation
We employ a one-sided PC analysis (Baba et al. 2004) to investigate the correlation between \(g_{\rm obs}\) and other galaxy properties \(\{x_{k}\}\) at fixed \(g_{\rm bar}\). We consider the RAR in logarithmic space and define the residuals \(\Delta\log g_{\rm obs}\equiv\log g_{\rm obs}-\mathcal{F}(\log g_{\rm bar})\), where \(\mathcal{F}\) is a RAR-like function that predicts \(\log g_{\rm obs}\) based on \(\log g_{\rm bar}\). We then compute the \(g_{\rm bar}\)-controlled PC between \(g_{\rm obs}\) and \(x_{k}\) as the Kendall rank correlation \(\tau\in[-1,1]\) of \(\Delta\log g_{\rm obs}\) and \(x_{k}\), which we denote \(\tau(g_{\rm obs},x_{k}|g_{\rm bar})\)(Kendall 1938). This choice of correlation coefficient is motivated by the lack of prior expectation for linear residual correlations and by the greater robustness of the Kendall than Spearman coefficient (Colwell & Gillett 1982; Puth et al. 2015). The Spearman coefficient yields very similar results.
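A minimal sketch of this computation, using scipy's Kendall \(\tau\) and an arbitrary fitted function \(\mathcal{F}\) (here `rar_func`), is:

```python
import numpy as np
from scipy.stats import kendalltau

def partial_correlation(log_gobs, log_gbar, x_k, rar_func):
    """Kendall tau between RAR residuals and a galaxy property x_k,
    i.e. tau(g_obs, x_k | g_bar); rar_func predicts log g_obs from log g_bar."""
    residuals = np.asarray(log_gobs) - rar_func(np.asarray(log_gbar))
    tau, _ = kendalltau(residuals, x_k)
    return tau
```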
The PC coefficient does not account for uncertainties, so a
non-zero \(\tau(g_{\rm obs},x_{k}|g_{\rm bar})\) does not necessarily indicate a significant relationship between \(\Delta\log g_{\rm obs}\) and \(x_{k}\). While the correlation coefficient is sensitive to the mean trend, the predictive relationship between specific data realizations might be dominated by noise and the standard \(p\)-values therefore unreliable. Instead, we assess the impact of uncertainties on the PC by generating mock data using the Simple IF + EFE with the SPARC error model (Section 3.5) and calculate the corresponding PC coefficients to compare to those of the real data. This yields the distribution of PC coefficients corresponding to a SPARC-like data set that explicitly contains no dynamical correlations besides the RAR. If the data PC coefficient is a typical draw from the mock distribution, SPARC is a typical realization of a galaxy population in which dynamics is set entirely by the RAR.
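One simple way to quantify how typical the observed coefficient is, assuming the `partial_correlation` helper above and a set of mock realizations of \(\log g_{\rm obs}\) generated as described in Section 3.5, is sketched below.

```python
import numpy as np

def pc_mock_comparison(tau_data, mock_log_gobs_sets, log_gbar, x_k, rar_func):
    """Empirical fraction of RAR-only mocks whose partial correlation is at
    least as strong as the observed one (a sketch; the mocks encode no
    dynamical correlation beyond the RAR plus the SPARC error model)."""
    taus_mock = np.array([partial_correlation(mock, log_gbar, x_k, rar_func)
                          for mock in mock_log_gobs_sets])
    return np.mean(np.abs(taus_mock) >= np.abs(tau_data)), taus_mock
```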
### Design of machine learning regressors
We investigate the relationship between baryonic and dynamical properties using a fully connected, feed-forward neural network (NN) and decision tree-based regression techniques. The baryonic features considered as potential predictors of a target dynamical quantity are presented in Table 1. For galaxy-wide properties, each RC sample is assigned its galaxy's value. All properties are scaled to be zero-centred with a variance of unity.
#### 3.3.1 Neural network architecture
A NN represents a continuous, differentiable mapping: \(f:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) with \(m\) input and \(n\) output nodes. The simplest fully connected, feed-forward NN without hidden layers relates an \(m\)-dimensional input feature vector to an \(n\)-dimensional prediction by passing the sum of the product of the feature vector with a weight matrix and a constant bias factor through a non-linear activation function (e.g., sigmoid, hyperbolic tangent; Bishop & Nasrabadi, 2007; Roberts et al., 2021). The weights and biases are trainable parameters optimized by minimizing a loss function on training samples through back-propagation (Rumelhart et al., 1986; see Section 3.4). In NNs with hidden layers, the output of a previous layer serves as input for the subsequent layer.
Our chosen NN architecture is as follows. The network's input layer is normalizing: it adjusts the samples' mean to zero and variance to unity. The network comprises two parallel components with outputs that are combined: a fully-connected NN with a wide hidden layer, and a single-node hidden layer with no activation, functioning as a linear operation. For the wide hidden layer, we employ the leaky rectified linear unit (LReLU) activation function.
\[\phi(x)=\begin{cases}x&\text{if }x\geqslant 0\\ \alpha x&\text{if }x<0,\end{cases} \tag{4}\]
where we choose \(\alpha=-0.2\). In contrast to the conventional ReLU (\(\alpha=0\)), the LReLU has a non-vanishing gradient if \(x<0\). We add a dropout layer following the wide hidden layer, which randomly sets a fraction \(f\) of nodes' inputs to \(0\) and scales the remaining inputs by \(1/(1-f)\)(Gal & Ghahramani, 2015). Additionally, we apply an \(l^{2}\) norm to regularize the weights with \(\lambda=0.01\). The loss function in Eq. (9), which depends on the derivative of the NN prediction with respect to its input, is optimized using the Adam stochastic optimizer (Kingma & Ba, 2014) with default hyperparameters2 and a cosine decay learning schedule (Loshchilov & Hutter, 2016). We implement the NN architecture in TensorFlow3(Abadi et al., 2015) and optimize the width of the hidden layer and the dropout rate (Section 3.3.3).
Footnote 2: [https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam)
Footnote 3: [https://www.tensorflow.org](https://www.tensorflow.org)
Training is performed over the training samples containing 60 per cent of galaxies. This fraction is chosen by fitting the RAR as a function of it and optimizing the resulting goodness-of-fit to maximize the models' generalization to unseen observations. We verify robustness of our conclusions to this choice. The train-test split is performed at the galaxy level, rather than on separate RC samples, to prevent information leakage between the sets otherwise introduced by the galaxy-wide properties. We choose a batch fractional size of \(1/3\), with one epoch consisting of a single pass through all three batches. The training process continues until the validation loss has not decreased for at least \(2,000\) epochs, after which the optimal set of weights is restored. Due to the limited sample size, the validation loss is evaluated on the entire training data set.
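A minimal TensorFlow/Keras sketch of this architecture is given below. The hidden-layer width, dropout rate and learning-rate schedule are placeholders (in practice they are set by the hyperparameter optimization of Section 3.3.3), the two parallel branches are combined here by addition, and the compiled loss is a stand-in for the loss of Section 3.4.

```python
import tensorflow as tf
from tensorflow import keras

def leaky_relu(x, alpha=-0.2):
    # LReLU of Eq. (4), with the alpha value quoted in the text.
    return tf.where(x >= 0, x, alpha * x)

def build_network(n_features, width=64, dropout_rate=0.2):
    inputs = keras.Input(shape=(n_features,))
    norm = keras.layers.Normalization()  # call norm.adapt(X_train) before fitting
    x = norm(inputs)

    # Wide fully connected branch with LReLU activation, dropout and l2 weights.
    wide = keras.layers.Dense(width, activation=leaky_relu,
                              kernel_regularizer=keras.regularizers.l2(0.01))(x)
    wide = keras.layers.Dropout(dropout_rate)(wide)
    wide = keras.layers.Dense(1)(wide)

    # Parallel single-node branch with no activation, acting as a linear term.
    linear = keras.layers.Dense(1, activation=None)(x)

    outputs = keras.layers.Add()([wide, linear])
    model = keras.Model(inputs, outputs)

    schedule = keras.optimizers.schedules.CosineDecay(
        initial_learning_rate=1e-3, decay_steps=10_000)
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=schedule),
                  loss="mse")  # placeholder for the loss of Section 3.4
    return model
```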
#### 3.3.2 Decision tree models
Decision tree regressors are grown by recursively partitioning samples into increasingly homogeneous subsamples minimizing a specific loss function, such as the weighted variance of the descendant nodes' samples. Predictions are acquired by traversing a path within the tree and procuring the mean value of a terminal leaf. The random forest (RF; Breiman, 2001) is a widely employed model. In it, an ensemble of decision trees is constructed, with each tree trained on bootstrap aggregated resamples of the training data and a randomly selected number of features for optimal split determination at each node. This approach mitigates the over-fitting issue commonly encountered in individual decision trees. Another ensemble model is the extra-trees (ET; Geurts et al., 2006). This neither performs bootstrap resampling nor seeks an optimal split, but instead randomly selects a subset of features at each node, assigns them a random split, and divides the samples based on the feature whose split minimizes the loss function. An alternative to constructing large ensembles of independent decision trees is to consider sequential, gradient-boosted trees where each tree is designed to predict the residuals of its predecessors (Friedman, 2001, 2002).
At first glance, tree-based regressors possess an appealing property: the ability to provide feature importance scores. The feature importance is the average summed impurity decrease (homogeneity increase) across all nodes split along a particular feature, averaged over all trees in the ensemble. An alternative metric not limited to tree-based regressors, permutation importance, is computed by permuting a feature, thereby randomizing its correlation with both the remaining features and the target, and subsequently observing the model's relative predictive loss. Notably, the model is not retrained during this process, and the procedure is
repeated multiple times to obtain a reliable estimate. High permutation importance indicates that the model's predictive power diminishes upon permuting the feature, while low permutation importance suggests that the feature is either irrelevant or strongly correlated with another feature. However, attributing either feature or permutation importance to individual features can be challenging when features exhibit strong mutual correlations. Explainable machine learning methods, such as Shapley additive explanations (SHAP; Lundberg and Lee, 2017; Lundberg et al., 2020), attempt to address this issue, although their interpretation is not always straightforward (Krishna et al., 2022). Because of this, we do not put strong emphasis on the explainability aspect of our methods and will only briefly rely on the permutation importance score in Section 4.2.
We employ the ET model ExtraTreesRegressor from scikit-learn4(Pedregosa et al., 2011), and the extreme gradient boosting decision tree model XGBRegressor from the XGBoost5 library (Chen and Guestrin, 2016, denoted XGB hereafter). We do not use the RF as we find its behaviour near-identical to that of the ET. We again allocate 60 per cent of galaxies to the training set and the remainder to the test set.
Footnote 4: [https://scikit-learn.org/](https://scikit-learn.org/)
Footnote 5: [https://xgboost.readthedocs.io/](https://xgboost.readthedocs.io/)
#### 3.3.3 Hyperparameter selection
The efficacy and accuracy of any machine learning (ML) regressor hinge upon the appropriate selection of its hyperparameters, which describe the architecture of the algorithm. Striking an optimal balance in complexity is crucial, as overly simplistic models may fail to capture relevant features of the data, while excessively complex ones risk overfitting and hence poor extrapolation or generalization. We employ Optuna6 for hyperparameter optimization, which facilitates efficient exploration of high-dimensional hyperparameter spaces (Akiba et al., 2019). The optimal set of hyperparameters is determined via cross-validation, in each case optimizing a loss function described in Section 3.4. Because training the NN is more computationally demanding than training the ET and XGB, for the NN we automatically optimize only the hidden layer width and dropout rate.
Footnote 6: [https://optuna.org](https://optuna.org)
We conduct 25 trials using Optuna, with each trial involving 50 re-trainings of the NN on the same data to account for stochasticity. For the ET and XGB we execute \(10,000\) trials without any retraining in each trial. The hyperparameters that we optimize for each model are presented in Table 2, which also includes the best parameters for the prediction of \(g_{\mathrm{obs}}\) from \(g_{\mathrm{bar}}\) (i.e. the RAR).
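A schematic version of this search for the ET model is sketched below (not the authors' exact setup): the trial ranges follow Table 2, `X`, `y` and `sample_weight` are assumed to be prepared as in Sections 3.3-3.4, and a plain 5-fold split stands in for the full cross-validation procedure.

```python
import numpy as np
import optuna
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import KFold

def objective(trial):
    params = dict(
        n_estimators=trial.suggest_int("n_estimators", 64, 128),
        max_depth=trial.suggest_int("max_depth", 2, 16),
        min_samples_split=trial.suggest_int("min_samples_split", 2, 32),
        max_features=trial.suggest_categorical("max_features", ["sqrt", "log2", None]),
        min_impurity_decrease=trial.suggest_float("min_impurity_decrease", 1e-14, 0.5, log=True),
        ccp_alpha=trial.suggest_float("ccp_alpha", 1e-14, 0.5, log=True),
        max_samples=trial.suggest_float("max_samples", 0.1, 0.99),
        bootstrap=True,  # required for max_samples to have an effect
    )
    losses = []
    for train, valid in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        model = ExtraTreesRegressor(random_state=0, **params)
        model.fit(X[train], y[train], sample_weight=sample_weight[train])
        resid = y[valid] - model.predict(X[valid])
        losses.append(np.mean(0.5 * sample_weight[valid] * resid ** 2))  # cf. Eqs. (9)-(10)
    return float(np.mean(losses))

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100)  # the paper runs 10,000 trials for ET and XGB
```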
### Scoring regressors
Fitting a model \(f:\mathbb{R}\to\mathbb{R}\), be it an analytic function or a machine learning regressor, requires a loss function to minimize. In machine learning this is typically taken to be the mean square error between the data and the prediction (or, equivalently, the coefficient of determination \(R^{2}\) is maximized). This however ignores the measurement uncertainties, which are crucial for reliable inference. We therefore consider a Gaussian likelihood function allowing for uncertainties in both the independent (\(x\)) and dependent (\(y\)) variable. This necessitates introducing latent variables for the true location of the independent variable, \(x_{\mathrm{t}}\), on which the full likelihood function depends (e.g. Berger et al., 1999). To eliminate these nuisance parameters one can either marginalize over them with a suitable prior or fix them to their maximum likelihood values as a function of the inferred parameters to make a profile likelihood. The profile likelihood is unbiased in parameters describing the shape of the function because its maximum is by construction at the same point in parameter space as the original likelihood's, and the underestimation of uncertainties is unimportant when one considers, as we do, only best-fit values.
The likelihood of observed values \(x,\,y\) given \(x_{\mathrm{t}}\) and the functional fit \(f\) is
\[p(x,y|x_{\mathrm{t}},I)\propto\exp\left(-\frac{(y-f(x_{\mathrm{t}}))^{2}}{2 \sigma_{y}^{2}}-\frac{(x-x_{\mathrm{t}})^{2}}{2\sigma_{x}^{2}}\right), \tag{5}\]
assuming uncorrelated Gaussian distributions for \(x\) and \(y\) with uncertainties \(\sigma_{x}\) and \(\sigma_{y}\), respectively, and omitting normalization terms. Here, \(I\) represents the free parameters of \(f(x)\). The maximum likelihood estimate of the true independent variable is given by
\[\hat{x}_{\mathrm{t}}=\operatorname*{arg\,max}_{x_{\mathrm{t}}}p(x,y|x_{ \mathrm{t}},I). \tag{6}\]
We expand the function \(f(x)\) to first order around the observed value \(x\) as
\[f(x_{\mathrm{t}})=f(x)+(x_{\mathrm{t}}-x)\left.\frac{\mathrm{d}f}{\mathrm{d}x }\right|_{x}+\ldots. \tag{7}\]
As the RAR is approximately composed of two linear segments in log-space (and other dynamical correlations are similarly near power-laws), the error induced by the neglected higher order terms is negligible. Substituting the above two equations into the joint probability, we obtain the profile likelihood:
\[p_{\mathrm{prof}}(x,y|I)\propto\exp\left(-\frac{1}{2}\frac{(y-f(x))^{2}}{ \sigma_{y}^{2}+\sigma_{x}^{2}\left.\mathrm{d}f/\mathrm{d}x\right|_{x}^{2}} \right), \tag{8}\]
where \(\left.\mathrm{d}f/\mathrm{d}x\right|_{x}^{2}\) projects the uncertainty of \(x\) along \(y\). For a set of statistically independent observations \(\{x,y,\sigma_{x},\sigma_{y}\}\), the overall likelihood is given by their product. A derivation of the alternative \(x_{\mathrm{t}}\)-marginalized likelihood, which performs better in the presence of intrinsic scatter, can be found in Desmond et al. (2023).
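For clarity we spell out the intermediate step (our own short derivation): maximizing Eq. (5) over \(x_{\mathrm{t}}\) under the linearization of Eq. (7) gives
\[\hat{x}_{\mathrm{t}}-x=\frac{(y-f(x))\,\sigma_{x}^{2}\left.\mathrm{d}f/\mathrm{d}x\right|_{x}}{\sigma_{y}^{2}+\sigma_{x}^{2}\left.\mathrm{d}f/\mathrm{d}x\right|_{x}^{2}},\qquad y-f(\hat{x}_{\mathrm{t}})=\frac{(y-f(x))\,\sigma_{y}^{2}}{\sigma_{y}^{2}+\sigma_{x}^{2}\left.\mathrm{d}f/\mathrm{d}x\right|_{x}^{2}},\]
and substituting these back into Eq. (5) yields Eq. (8).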
In accordance with ML terminology, we introduce a loss function \(\mathcal{L}\) as the negative profile log-likelihood
\[\mathcal{L}(x,y|I)\equiv\frac{1}{2}\frac{(y-f(x))^{2}}{\sigma_{y}^{2}+\sigma_ {x}^{2}\left.\mathrm{d}f/\mathrm{d}x\right|_{x}^{2}}, \tag{9}\]
which we denote henceforth simply as \(\mathcal{L}\). This is equivalent to the common least-squares (or \(\chi^{2}\)) loss with a sample weight of \(1/\sigma^{2}\) where
\[\sigma^{2}\equiv\sigma_{y}^{2}+\sigma_{x}^{2}\left.\frac{\mathrm{d}f(x)}{ \mathrm{d}x}\right|_{x}^{2}. \tag{10}\]
To propagate the uncertainty of the independent variable, it is essential that the regressor \(f(x)\) be differentiable. This condition is satisfied for our analytic functions as well as our
NN thanks to automatic differentiation (e.g. Gunes Baydin et al., 2015). Our ET and XGB models are however non-differentiable and so we cannot directly minimize Eq. (9). We therefore approximate the gradient with an analytic expression prior to model fitting and set the inverse of Eq. (10) to be the sample weight. For the RAR, we use the gradient of the RAR IF, assuming a best-fit value of \(a_{0}=1.118\times 10^{-10}\) m/s\({}^{2}\) obtained by fitting this function to the full SPARC data set using Eq. (9). For multi-dimensional cases where \(f:\mathbb{R}^{m}\to\mathbb{R}\) with \(m>1\), the loss function in Eq. (9) is extended by summing over the square of the product of gradient and uncertainty in each independent variable. In certain instances we will ignore the uncertainty in the independent variable (i.e. \(\sigma_{x}=0\)), denoting this loss as
\[\mathcal{L}_{0}=\frac{1}{2}\frac{(y-f(x))^{2}}{\sigma_{y}^{2}}. \tag{11}\]
We verify that these choices have little effect on the minimum loss or the parameter values at which it occurs.
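To make the weighting concrete, the sketch below (ours) computes the per-point weights \(1/\sigma^{2}\) of Eq. (10) for the ET/XGB case, assuming the regression is performed in \(\log_{10}g_{\mathrm{bar}}-\log_{10}g_{\mathrm{obs}}\) space and that the RAR IF takes the commonly used form \(g_{\mathrm{obs}}=g_{\mathrm{bar}}/[1-\exp(-\sqrt{g_{\mathrm{bar}}/a_{0}})]\); the gradient is evaluated numerically for simplicity.

```python
import numpy as np

A0 = 1.118e-10  # m s^-2, the best-fit value quoted above

def rar_if(gbar, a0=A0):
    """Assumed form of the RAR interpolating function."""
    return gbar / (1.0 - np.exp(-np.sqrt(gbar / a0)))

def log_slope(log_gbar, a0=A0, eps=1e-4):
    """Numerical d log10(g_obs) / d log10(g_bar) of the RAR IF."""
    hi = np.log10(rar_if(10 ** (log_gbar + eps), a0))
    lo = np.log10(rar_if(10 ** (log_gbar - eps), a0))
    return (hi - lo) / (2 * eps)

def sample_weights(log_gbar, err_log_gbar, err_log_gobs):
    """Inverse of Eq. (10): 1 / (sigma_y^2 + sigma_x^2 (df/dx)^2)."""
    slope = log_slope(log_gbar)
    return 1.0 / (err_log_gobs ** 2 + (err_log_gbar * slope) ** 2)
```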
### Mock data
To quantify sample variance and aid in interpreting our results, we generate mock data that by construction exhibit a specific dynamical correlation only. These differ only in the values of \(V_{\mathrm{obs}}\) and their correlations with other variables, and are otherwise generated according to the SPARC error model. The first set of mocks calculates \(V_{\mathrm{obs}}\) through the \(g_{\mathrm{bar}}-g_{\mathrm{obs}}\) correlation of the data (i.e. the RAR) and the second through the \(\Sigma_{\mathrm{tot}}-J_{\mathrm{obs}}\) relation, where \(J_{\mathrm{obs}}\) is the jerk \(V_{\mathrm{obs}}^{3}/r^{2}\). The reason for the latter choice is that \(\Sigma_{\mathrm{tot}}-J_{\mathrm{obs}}\) will be found in Section 4.2 to be the second strongest dynamical correlation in SPARC; we wish to see whether either of these relations is sufficient to account for the other correlations. We do not model intrinsic scatter in either relation, which (at the level at which it is present in SPARC) would simply weaken the correlations slightly.
Both mock data sets account for the covariance of \(V_{\mathrm{bar}}\) and \(V_{\mathrm{obs}}\) across a single galaxy's RC induced by uncertainties in distance (\(D\)), inclination (\(i\)), mass-to-light ratios (gas \(\Upsilon_{\mathrm{gas}}\), disk \(\Upsilon_{\mathrm{disk}}\), bulge \(\Upsilon_{\mathrm{bulge}}\)), and luminosity (\(L_{3.6}\)) by directly sampling them for each galaxy from their prior distributions. We assume Gaussian distributions for all galaxies and denote with an overbar the prior mean as taken from SPARC:
\[D \leftrightarrow\mathcal{G}\left(\bar{D},\delta D\right), \tag{12a}\] \[i \leftrightarrow\mathcal{G}\left(\bar{i},\delta i\right),\] (12b) \[\log\Upsilon_{\mathrm{gas}} \leftrightarrow\mathcal{G}\left(\log 1.0,0.04\right),\] (12c) \[\log\Upsilon_{\mathrm{disk}} \leftrightarrow\mathcal{G}\left(\log 0.5,0.10\right),\] (12d) \[\log\Upsilon_{\mathrm{bulge}} \leftrightarrow\mathcal{G}\left(\log 0.7,0.10\right),\] (12e) \[L_{3.6} \leftrightarrow\mathcal{G}\left(\bar{L}_{3.6},\delta L_{3.6} \right), \tag{12f}\]
where the standard deviation of \(\log\Upsilon_{\mathrm{gas}}\) corresponds to a 10 per cent uncertainty on \(\Upsilon_{\mathrm{gas}}\).
We calculate the mock baryonic acceleration and surface brightness as
\[g_{\mathrm{bar}} =\frac{L_{3.6}}{\bar{L}_{3.6}}\left(\sum_{\mathrm{X}}\frac{ \Upsilon_{\mathrm{X}}V_{\mathrm{X}}|V_{\mathrm{X}}|}{\bar{r}}\right)+\frac{ \Upsilon_{\mathrm{gas}}V_{\mathrm{gas}}|V_{\mathrm{gas}}|}{\bar{r}}, \tag{13a}\] \[\Sigma_{\mathrm{tot}} =\frac{L_{3.6}}{\bar{L}_{3.6}}\,\bar{\Sigma}_{\mathrm{tot}}, \tag{13b}\]
where \(\mathrm{X}\in\{\mathrm{disk},\ \mathrm{bulge}\}\). We scale the disk and bulge contributions, along with the surface brightness, by the sampled luminosity relative to its catalogue value, whose scatter (the observational error on luminosity) is however very small. \(g_{\mathrm{bar}}\) and \(\Sigma_{\mathrm{tot}}\) are distance independent: in the case of the former, the velocities are proportional to \(\sqrt{D}\) and the galactocentric separation to \(D\). We generate 100 mock realizations of the SPARC data set and calculate the mean and standard deviation among them.
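A schematic of one such realization (our sketch, not the release code) is below; `gal` is an assumed dictionary holding the SPARC columns for a single galaxy (component velocities, radii, catalogue luminosity, distance and inclination with their errors, and surface brightness), and unit conversions are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_galaxy(gal):
    """Draw one mock realization of a galaxy's nuisance parameters, Eqs. (12)-(13)."""
    D = rng.normal(gal["D"], gal["dD"])
    inc = rng.normal(gal["inc"], gal["dinc"])
    ups_gas = 10 ** rng.normal(np.log10(1.0), 0.04)
    ups_disk = 10 ** rng.normal(np.log10(0.5), 0.10)
    ups_bul = 10 ** rng.normal(np.log10(0.7), 0.10)
    L36 = rng.normal(gal["L36"], gal["dL36"])

    lum_scale = L36 / gal["L36"]        # sampled over catalogue luminosity
    # Eq. (13a): mock baryonic acceleration at the fiducial radii
    gbar = (lum_scale * (ups_disk * gal["Vdisk"] * np.abs(gal["Vdisk"])
                         + ups_bul * gal["Vbul"] * np.abs(gal["Vbul"]))
            + ups_gas * gal["Vgas"] * np.abs(gal["Vgas"])) / gal["r"]
    # Eq. (13b): mock surface brightness
    sigma_tot = lum_scale * gal["Sigma_tot"]
    return dict(D=D, inc=inc, gbar=gbar, sigma_tot=sigma_tot)
```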
#### 3.5.1 Imposing the RAR
For this mock data set we employ a fitting function \(\mathcal{F}\), specifically the RAR IF of Eq. (2) or Simple IF + EFE of Eq. (3), to connect the sampled baryonic acceleration to the "true" total acceleration \(\langle g_{\mathrm{obs}}\rangle=\mathcal{F}(g_{\mathrm{bar}})\). Considering that
\begin{table}
\begin{tabular}{l l l l} & Hyperparameter & Options & Best for RAR \\ \hline \hline TensorFlow NN & width & \(\mathbb{N}(8,128)\) & 16 \\ & dropout\_rate & \(\mathbb{R}(0.001,0.10)\) & 0.05 \\ \hline ExtraTreesRegressor & n\_estimators & \(\mathbb{N}(64,128)\) & 92 \\ & max\_depth & \(\mathbb{N}(2,16)\) & 8 \\ & min\_samples\_split & \(\mathbb{N}(2,32)\) & 2 \\ & max\_features & \(\{\)sqrt, log2, None\(\}\) & sqrt \\ & min\_impurity\_decrease & \(\mathbb{R}(10^{-14},0.5)\) & \(4.2\times 10^{-5}\) \\ & ccp\_alpha & \(\mathbb{R}(10^{-14},0.5)\) & \(4.4\times 10^{-12}\) \\ & max\_samples & \(\mathbb{R}(0.1,0.99)\) & 0.94 \\ \hline XGBRegressor & n\_estimators & \(\mathbb{N}(16,128)\) & 125 \\ & max\_depth & \(\mathbb{N}(2,8)\) & 4 \\ & booster & \(\{\)gbtree, dart\(\}\) & dart \\ & learning\_rate & \(\mathbb{R}(0.01,0.99)\) & 0.94 \\ & gamma & \(\mathbb{R}(0,10)\) & 1.6 \\ & min\_child\_weight & \(\mathbb{R}(0.5,2.5)\) & 2.14 \\ & subsample & \(\mathbb{R}(0.5,1)\) & 0.56 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Hyperparameter ranges of the NN, ET and XGB regressors that are optimized with Optuna (Section 3.3), with \(\mathbb{N}\) and \(\mathbb{R}\) representing integer and real ranges, respectively. The rightmost column shows the best hyperparameters for predicting \(g_{\mathrm{obs}}\) from \(g_{\mathrm{bar}}\), i.e. the RAR relation.
\(V_{\rm obs}\propto 1/\sin i\) and \(r\propto D\), we adjust the mock total acceleration for the sampled inclination and distance:
\[g_{\rm obs}\leftrightarrow\mathcal{G}\left(\langle g_{\rm obs}\rangle\left(\frac{ \sin\bar{i}}{\sin i}\right)^{2}\left(\frac{\bar{D}}{D}\right),\frac{2\delta V_{\rm obs}}{V _{\rm obs}}\right), \tag{14}\]
where the standard deviation accounts for the statistical uncertainty of \(V_{\rm obs}\) propagated to \(g_{\rm obs}\). The sampled galactocentric distance is \(r=r_{\rm t}D/\bar{D}\). This yields a set of mock "observed" values \(\{r,g_{\rm bar},g_{\rm obs}\}\) for the galaxy's RC generated with \(\mathcal{F}\). When using the Simple IF + EFE to generate the mock data, we have \(g_{\rm obs}=\mathcal{F}(g_{\rm bar},e_{\rm N})\), where \(e_{\rm N}\) is sampled according to \(\log e_{\rm N}\leftrightarrow\mathcal{G}(\log\bar{e}_{\rm N},\delta\log e_{\rm N})\) given the results of Chae et al. (2021). We set the free parameter \(a_{0}\) in the mock data by minimizing the loss function Eq. (9) over the entire SPARC data set. For the RAR IF, we obtain \(a_{0}=1.118\times 10^{-10}\) ms\({}^{-2}\), while for the Simple IF + EFE we find \(a_{0}=1.170\times 10^{-10}\) ms\({}^{-2}\). In the latter case we simultaneously fit a global parameter \(e_{\rm N}\) (for which we find \(\log e_{\rm N}=-2.088\)), although this is not used in the construction of the mock data. The precise values of these parameters make no difference to the results.
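Continuing the sketch above, the mock "observed" acceleration of Eq. (14) might be drawn as follows; bars denote catalogue values (stored in `gal`), the sampled values live in the dictionary returned by `sample_galaxy`, and the quoted fractional scatter is treated as relative to the mean.

```python
import numpy as np

def rar_if(gbar, a0=1.118e-10):
    """Assumed form of the RAR interpolating function."""
    return gbar / (1.0 - np.exp(-np.sqrt(gbar / a0)))

def impose_rar(gal, sampled, rng):
    g_true = rar_if(sampled["gbar"])                                  # <g_obs> from the RAR IF
    geom = ((np.sin(np.radians(gal["inc"])) / np.sin(np.radians(sampled["inc"]))) ** 2
            * gal["D"] / sampled["D"])                                 # (sin i_bar / sin i)^2 (D_bar / D)
    mean = g_true * geom
    gobs = rng.normal(mean, 2 * gal["dVobs"] / gal["Vobs"] * mean)     # Eq. (14)
    r = gal["r"] * sampled["D"] / gal["D"]                             # r = r_t D / D_bar
    return r, sampled["gbar"], gobs
```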
#### 3.5.2 Imposing the \(\Sigma_{\rm tot}-J_{\rm obs}\) relation
To model the \(\log\Sigma_{\rm tot}-\log J_{\rm obs}\) relation we fit the relation in SPARC, shown in Fig. 1, with a cubic polynomial by minimizing Eq. (9). This fits the data well and yields coefficients for the cubic, quadratic, linear and constant terms of 4.07, 0.27, 0.08, 0.02. Our results are not significantly affected by the form of the \(\Sigma_{\rm tot}-J_{\rm obs}\) function provided it captures the mean trend of the data. We use this to calculate the true mock jerk \(\langle J_{\rm obs}\rangle\) from \(\Sigma_{\rm tot}\). We then determine the "observed" jerk given the inclination and distance for this mock data set, and the statistical uncertainty of \(V_{\rm obs}\), according to:
\[J_{\rm obs}\leftrightarrow\mathcal{G}\left(\langle J_{\rm obs}\rangle\left( \frac{\sin\bar{i}}{\sin i}\right)^{3}\left(\frac{\bar{D}}{D}\right)^{2},\frac{3\delta V_ {\rm obs}}{V_{\rm obs}}\right). \tag{15}\]
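As an illustration of the mock construction (ours, not the release code), a weighted polynomial fit can stand in for the full minimization of Eq. (9); `log_sigma`, `log_jobs` and `weights` (the inverse variances) are assumed to be precomputed from SPARC.

```python
import numpy as np

# np.polyfit expects w = 1/sigma, i.e. the square root of an inverse-variance weight
coeffs = np.polyfit(log_sigma, log_jobs, deg=3, w=np.sqrt(weights))
log_jobs_true = np.polyval(coeffs, log_sigma)   # <J_obs> used to build the mock data
```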
## 4 Results
### Residual correlations of the RAR
Our first objective is to investigate the residual correlations in the RAR to determine whether \(g_{\rm obs}\) exhibits correlations with any other variables at fixed \(g_{\rm bar}\). The existence of such correlations would disqualify the RAR as fundamental because it would imply that a tighter manifold could be constructed in a higher-dimensional space by incorporating secondary variables. This is analogous to the correlation of velocity dispersion with galaxy size at fixed luminosity in early-type galaxies, which refines the Faber-Jackson relation to the FP (Djorgovski and Davis, 1987; Dressler et al., 1987; Bender et al., 1992; Cappellari et al., 2013). To address this, we employ two methods:
1. compute partial correlation coefficients between \(g_{\rm obs}\) and all other accessible variables at fixed \(g_{\rm bar}\), and
2. employ flexible, non-linear machine learning algorithms to determine if the prediction of \(g_{\rm obs}\) can be improved by considering additional variables besides \(g_{\rm bar}\).
#### 4.1.1 Partial correlation
We calculate the Kendall correlation coefficient of galaxy properties with the RAR residuals around the RAR IF (Section 3.1): for the details see Section 3.2. In Fig. 2 we present the SPARC PC coefficients as red crosses, and compare them to the expectation from mock data using \(g_{\rm obs}\) generated from the Simple IF + EFE (Section 3.5), depicted as blue violins. Although several data PC coefficients are non-zero, this is consistent with the null hypothesis that the real data is drawn from the mocks. The width of the mock PC coefficient distribution is determined by the sensitivity of the respective features to the RAR residuals. All violins in Fig. 2 use the same set of resampled residuals. Thus, given the size of the data set and the uncertainties, the PC analysis reveals no significant secondary correlations in the RAR. We have also tested replacing the RAR IF fit with our machine learning regressors, finding very similar PC values.
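For concreteness, the statistic itself can be computed as below (a bare-bones sketch; the resampling machinery of Section 3.2 is omitted): the Kendall \(\tau\) between the RAR residuals about the RAR IF fit and a candidate feature evaluated at each RC point.

```python
import numpy as np
from scipy.stats import kendalltau

def partial_correlation(log_gobs, log_gobs_pred, feature):
    """Kendall tau between RAR residuals and a per-point feature (e.g. e_N)."""
    residuals = log_gobs - log_gobs_pred      # log10 g_obs minus the RAR IF prediction
    tau, p_value = kendalltau(residuals, feature)
    return tau, p_value
```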
We also show mock PC coefficients around the RAR generated from the \(\Sigma_{\rm tot}-J_{\rm obs}\) relation as vertical bars extending to \(\pm 2\sigma\) with dots at the median. We find significantly non-zero mock PC coefficients in case of \(T\), \(R_{\rm eff}\), \(r/R_{\rm eff}\) and \(\Sigma_{\rm bul}\), indicating that a different relation between baryonic and dynamical properties would lead to strong residual correlations in the RAR. We further calculate the PC coefficients of the \(\Sigma_{\rm tot}-J_{\rm obs}\) relation of the SPARC data by correlating its residuals around the cubic fit of Section 3.5.2 with baryonic properties (not shown). Comparing these with the expectations from the \(\Sigma_{\rm tot}-J_{\rm obs}\) mocks, we observe the SPARC PC coefficients of \(T\), \(g_{\rm bar}\), \(\Sigma_{\rm bul}\) to be significant at \(\geq 3\sigma\). These coefficients in the data are \(\tau(J_{\rm obs},T|\Sigma_{\rm tot})=-0.277\), \(\tau(J_{\rm obs},g_{\rm bar}|\Sigma_{\rm tot})=0.256\) and \(\tau(J_{\rm obs},\Sigma_{\rm bul}|\Sigma_{\rm tot})=0.223\). This shows that the RAR is special in not having any residual correlations: it is not simply that such correlations are washed out by measurement uncertainties or the mixing of different galaxy types in SPARC.
For the RAR, the most significant partial correlation in
Figure 1: Relation between surface brightness \(\Sigma_{\rm tot}\) and jerk \(J_{\rm obs}=V_{\rm obs}^{3}/r^{2}\) for both SPARC and an example mock realization. The mock data is generated by assuming a cubic \(\Sigma_{\rm tot}-J_{\rm obs}\) relation (red line) in conjunction with the SPARC error model, as described in the text. This is used as a foil for the RAR mock data to establish which dynamical correlations are primary and which derivable from them.
SPARC, at roughly the \(2\sigma\) level given the RAR mock data, is \(\tau(g_{\rm obs},e_{\rm N}|g_{\rm bar})\approx-0.2\), suggesting an anti-correlation between \(g_{\rm obs}\) at fixed \(g_{\rm bar}\) and \(e_{\rm N}\). This is as expected from the EFE. Surprisingly, however, the mock PC of \(e_{\rm N}\) is approximately centred at 0 even though it is generated using the Simple IF + EFE function with the best-fit (max clustering) values of \(e_{\rm N}\) from Chae et al. (2021). One would expect a negative PC coefficient \(\tau(g_{\rm obs},e_{\rm N}|g_{\rm bar})\) when the mock data includes a non-zero EFE and the residuals are calculated using the RAR IF fit, as increasing \(e_{\rm N}\) lowers \(g_{\rm obs}\) at low \(g_{\rm bar}\). However, we show in Fig. 3 that this effect is washed away by the systematic uncertainties in distance and inclination, as well as the statistical uncertainty in \(V_{\rm obs}\). We show this by sequentially increasing each uncertainty from 0 to its SPARC value, while keeping the other uncertainties fixed at 0. Thus, one would not expect to see \(\tau(g_{\rm obs},e_{\rm N}|g_{\rm bar})\) inconsistent with 0 in the real data even if the EFE were present at the level suggested by Chae et al. (2021).
#### 4.1.2 Regression with \(g_{\rm bar}\)
We begin our regression analysis by training our machine learning regressors to predict \(g_{\rm obs}\) based solely on \(g_{\rm bar}\) (see Sections 3.3 and 3.4). This will afford a comparison point when incorporating additional features. We generate \(10,000\) test-train splits and follow the procedure outlined in Section 3.3.3 to determine the optimal hyperparameters for each feature and target.
In Fig. 4, we present the RAR fits using the RAR IF, Simple IF + EFE, NN, ET, and XGB. Note that when fitting the Simple IF + EFE we use a global value of \(\log e_{\rm N}=-2.088\), whereas for the mock data generation using the Simple IF + EFE we use the galaxy-by-galaxy best-fit values of Chae et al. (2021). Due to the EFE, the RAR IF predicts higher values of \(g_{\rm obs}\) at low \(g_{\rm bar}\) than the Simple IF + EFE, whereas elsewhere it is the Simple IF + EFE that marginally exceeds the RAR IF. The regressors successfully recover the mean trend of the RAR and are approximately consistent with the analytic functions. The NN provides a smooth approximation of the RAR, closely aligned with the Simple IF + EFE. The ET and XGB methods struggle to produce a smooth fit due to their discontinuous nature. They also systematically under-predict \(g_{\rm obs}\) at the highest \(g_{\rm bar}\) limit and over-predict it at the lowest \(g_{\rm bar}\) as a result of interpolating between previously seen samples, driving the edge samples closer to the mean.
In Fig. 5 we show the loss \(\mathcal{L}\) from Eq. (9) evaluated on the unseen test samples. Due to the train-test splitting being performed over galaxies rather than the RC samples directly, \(\mathcal{L}\) is assessed over a varying number of observations. We instead display the mean test set loss per observation. In the absence of train-test splitting (i.e. simply fitting to the entire data set), the Simple IF + EFE is favoured over the RAR IF, as indicated by the vertical lines in Fig. 5. This is practically guaranteed since the Simple IF + EFE contains an additional free parameter \(e_{\rm N}\), and reduces to the Simple interpolating function, which closely resembles the RAR IF, when \(e_{\rm N}=0\). However, the train-test splitting diminishes this preference, resulting in near-identical distributions of \(\mathcal{L}\) for the Simple IF + EFE and RAR IF, with the latter being weakly preferred on average. This suggests that the Simple IF + EFE may be slightly over-fitted.
In examining the goodness-of-fit of the ML models presented in Fig. 5, we find that the NN performs comparably to the Simple IF + EFE. The distributions of \(\mathcal{L}\) for the ET
Figure 3: Effect of \(i\), \(D\), \(V_{\rm obs}\), and \(L_{3.6}\) uncertainties on the \(g_{\rm obs}\) and \(e_{\rm N}\) PC, \(\tau\,(g_{\rm obs},e_{\rm N}|g_{\rm bar})\). For each parameter, we scale its uncertainty by \(f_{\rm err}\) while setting the uncertainty in the remaining parameters to 0. \(f_{\rm err}=1\) represents the fiducial SPARC uncertainty. While in the noiseless case we would expect to see \(\tau<-0.2\) due to the presence of the EFE in these mocks, the uncertainties in \(i\), \(D\), \(V_{\rm obs}\) effectively wash it away. The bands show \(1\sigma\) Monte Carlo uncertainty from \(1,000\) mock data realizations while the solid lines show the mean.
Figure 2: Partial correlation coefficients of the residuals of \(g_{\rm obs}\) at fixed \(g_{\rm bar}\) (as described by the RAR IF) with various galaxy and environmental features. The SPARC results are shown by red crosses while those of many mock data sets generated by the Simple IF + EFE are shown by blue violins, with the blue ticks indicating the median and 1 and 2\(\sigma\) levels. Vertical green bars show \(\pm 2\sigma\) results for mock data with dynamics sourced by the \(\Sigma_{\rm tot}-J_{\rm obs}\) relation, with dots at the median. No significant deviations of the data from the Simple IF + EFE mocks are observed, indicating that SPARC is a typical realization of a population with dynamics specified fully by the RAR. The only near-significant secondary correlation of \(g_{\rm obs}\) is with \(e_{\rm N}\), hinting at the presence of the EFE. The \(\Sigma_{\rm tot}-J_{\rm obs}\) mock exhibits non-zero PC coefficients incompatible with the data.
and XGB models are systematically higher (i.e. worse) than those of the analytic functions and the NN. We therefore find that none of the ML models can produce a better description of the data than the RAR IF; we discuss this point further in Section 5.2.
#### 4.1.3 Regression with other features
We now extend our analysis by incorporating additional features alongside \(g_{\rm bar}\) to predict \(g_{\rm obs}\). The dashed lines in Fig. 5 represent the goodness-of-fit for the ET, XGB, and NN regressors using all available features. Although the peak \(\mathcal{L}\) remains relatively unchanged, a more pronounced high-loss tail is observed. Notably, the NN regressor exhibits a significantly degraded performance, with a mean loss several times larger, likely due to convergence issues; hence, it is not displayed in Fig. 5. We also explore dimensionality reduction via PCA for predicting \(g_{\rm obs}\) from all features, allowing Optuna to optimize the number of dimensions. However, this approach yields no improvement.
The regressors may be confused when presented with all features, especially if several of them are uninformative. We therefore investigate separately the predictivity of \(g_{\rm obs}\) using all individual and pairs of features. In Fig. 6, the diagonal cells display the mean loss \(\mathcal{L}_{0}\) per observation for the ET regressor when predicting \(g_{\rm obs}\) using single features. (We verify that other regressors yield similar results.) For simplicity, we present the loss \(\mathcal{L}_{0}\) without propagating the uncertainty in the independent variable, as an approximate value is sufficient to establish the relative merits of the features. We find that \(g_{\rm bar}\) is by far the best individual predictor of \(g_{\rm obs}\). Surface brightness emerges as the next-best predictor, although with a loss nearly an order of magnitude larger. The predictivity of surface brightness can be attributed to its strong correlation with \(g_{\rm bar}\), which is calculated from it. We then assess the predictability of \(g_{\rm obs}\) from pairs of features, shown by the off-diagonal elements of Fig. 6. The inclusion of any secondary feature does not enhance the performance of the \(g_{\rm bar}\) predictions, but in fact typically degrades it. All pairs of features that do not include \(g_{\rm bar}\) exhibit poor goodness-of-fit, and it is rarely the case that a pair of features provides more information on \(g_{\rm obs}\) than either one alone. Altogether, similar to the PC analysis, these results indicate that no additional information on \(g_{\rm obs}\) can be gleaned from features besides \(g_{\rm bar}\).
### Projections of the dynamical parameter space
We have established the RAR as the fundamental correlation when \(g_{\rm obs}\) is the target variable, with \(g_{\rm bar}\) fully accounting for \(g_{\rm obs}\) up to apparently random noise. Nevertheless, it remains to be determined whether \(g_{\rm obs}\) is the optimal target variable. There may exist other dynamical variables more strongly correlated with, or predictable from, baryonic properties, in which case the RAR would merely be an approximation to a more fundamental relation. To explore this possibility, we generalize our analysis from Section 4.1 to predict an arbitrary dynamical variable, \(\mathcal{D}\left(\alpha,\beta\right)\), defined as
\[\mathcal{D}\left(\alpha,\beta\right)\equiv\frac{V_{\rm obs}^{\alpha}}{r^{ \beta}}. \tag{16}\]
This encompasses as special cases \(V_{\rm obs}\) (\(\alpha=1,\beta=0\)), \(g_{\rm obs}\) (\(\alpha=2,\beta=1\)), jerk \(J_{\rm obs}\) (\(\alpha=3,\beta=2\)) and higher derivatives, without privileging any combination of \(V_{\rm obs}\) and \(r\). The full 2D parameter space of \(\alpha,\beta\) need not be considered because \(\mathcal{D}(\alpha,\beta)^{\gamma}\) is equally predictable as \(\mathcal{D}(\alpha,\beta)\). Without loss of generality we therefore parametrize \(\mathcal{D}\) by a new variable
Figure 4: _Upper:_ The SPARC RAR (grey points) with various fits overlaid. The lines show predicted values from analytic functions or ML regressors trained on \(g_{\rm bar}\) only, averaged over train-test splits. _Lower_: Ratios of predicted \(g_{\rm obs}\) with respect to the RAR IF. The bands indicate \(1\sigma\) variation between the different test-train splits.
Figure 5: Comparison of RAR average test set loss per data point for \(10,000\) train-test splits, predicting \(g_{\rm obs}\) from \(g_{\rm bar}\) only (solid lines) and all features (dashed lines). The vertical lines show the RAR IF and Simple IF + EFE results without train-test splitting. The NN performs comparably to the analytic functions, while ET and XGB perform worse. No ML regressor improves when given features beyond \(g_{\rm bar}\).
\(\theta\in[0,\pi)\), defined by
\[\tan\theta\equiv\frac{\beta}{\alpha}. \tag{17}\]
We consider individual baryonic properties, pairs of properties, and the full set of properties as predictors. The uncertainty in \(V_{\rm obs}\), \(D\) and \(i\) is propagated to \(\mathcal{D}(\alpha,\beta)\) via
\[\frac{\delta\mathcal{D}(\alpha,\beta)}{\mathcal{D}(\alpha,\beta)}=\sqrt{\alpha ^{2}\frac{\delta V_{\rm obs}^{2}}{V_{\rm obs}^{2}}+\alpha^{2}\frac{\delta i^{2} }{\tan^{2}i}+\beta^{2}\frac{\delta D^{2}}{D^{2}}}. \tag{18}\]
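A short sketch (ours) of forming \(\mathcal{D}\) for a given \(\theta\) and propagating the uncertainties via Eq. (18) is given below; since only \(\tan\theta=\beta/\alpha\) matters up to an overall power, \(\alpha\) is fixed to unity (the limit \(\theta=\pi/2\), a pure power of \(r\), would need separate handling), and the inclination is assumed to be in radians.

```python
import numpy as np

def dynamical_variable(theta, Vobs, r):
    """D(alpha, beta) = Vobs^alpha / r^beta of Eq. (16), with alpha = 1, beta = tan(theta)."""
    beta = np.tan(theta)
    return Vobs / r ** beta

def fractional_error(theta, Vobs, dVobs, inc, dinc, D, dD):
    """Fractional uncertainty on D(alpha, beta) following Eq. (18), with alpha = 1."""
    beta = np.tan(theta)
    return np.sqrt((dVobs / Vobs) ** 2
                   + (dinc / np.tan(inc)) ** 2
                   + beta ** 2 * (dD / D) ** 2)
```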
We generate the mock RAR-like data for an arbitrary \(\theta\) following the approach of Section 3.5 as \(\mathcal{D}\left(\alpha,\beta\right)=g_{\rm obs}^{\alpha/2}r^{\alpha/2-\beta}\), with \(g_{\rm obs}\) computed using the RAR IF with \(a_{0}=1.118\times 10^{-10}\) ms\({}^{-2}\). This introduces an explicit correlation only between \(g_{\rm bar}\) and \(g_{\rm obs}\). We similarly calculate \(\mathcal{D}\) for our mock data by calculating \(V_{\rm obs}\) from either \(g_{\rm obs}\) or \(J_{\rm obs}\) and applying Eq. (16). This will let us determine the extent to which SPARC-like data with dynamics set solely by the \(g_{\rm bar}-g_{\rm obs}\) or \(\Sigma_{\rm tot}-J_{\rm obs}\) relation is able to match the predictivity of \(\mathcal{D}\) from baryonic variables in the real data.
We use the ET and NN regressors. For ET, we simplify the process by neglecting the propagation of uncertainty in the independent variables and only generate sample weights using Eq. (18), i.e. we use the loss \(\mathcal{L}_{0}\). The ET hyperparameters are fixed to those used in the \(g_{\rm obs}\) regression. For the NN, we fully incorporate the uncertainty in the independent variables in the loss \(\mathcal{L}\) given by Eq. (9) and adopt the NN architecture and training described in Section 3.3.1.
We present the goodness-of-prediction of \(\mathcal{D}(\alpha,\beta)\) by various features according to the ET model in Fig. 7. We compute \(\mathcal{L}_{0}\) as a function of \(\theta\) for individual features listed in Table 1 and for all features combined, showing the results for the real data in solid lines. We focus on the individual dependence of \(g_{\rm bar}\), \(\Sigma_{\rm tot}\), \(L_{3.6}\), as the remaining features are either non-informative or exhibit no interesting trends. We find that \(g_{\rm bar}\) is the best predictor of \(g_{\rm obs}\) (\(\theta=\arctan 1/2\)), in agreement with the results of Section 4.1. Interestingly, we find that \(L_{3.6}\) is the best predictor of \(V_{\rm obs}\). This is reminiscent of the TFR but surprising because \(L_{3.6}\) is a galaxy-wide feature, which cannot therefore capture variations in \(V_{\rm obs}\) within a single galaxy. \(\Sigma_{\rm tot}\) is as good a predictor of \(J_{\rm obs}=V_{\rm obs}^{3}/r^{2}\) as \(g_{\rm bar}\), and \(J_{\rm obs}\) is approximately the dynamical quantity most predictable from \(\Sigma_{\rm tot}\). That the \(J_{\rm obs}-\Sigma_{\rm tot}\) relation is the second strongest correlation in the data is what motivates our decision to investigate mock data created from it, as a foil for the RAR results. This is discussed further in Section 5.1.
The inclusion of additional features does not enhance the prediction of \(g_{\rm obs}\), corroborating the result of Section 4.1. However, when considering all features simultaneously, the prediction of \(\mathcal{D}\) exhibits a general improvement (black line). Notably, for \(\theta\gtrsim\arctan(2/3)\) the predictability of \(\mathcal{D}\) improves substantially when all features are considered. To investigate this further, we calculate the loss of \(\mathcal{D}(\theta)\) using \(g_{\rm bar}\), \(\Sigma_{\rm tot}\) and subsequently \(g_{\rm bar}\), \(\Sigma_{\rm tot}\), \(M_{\rm HI}\) (shown in Fig. 7). We find that the inclusion of \(M_{\rm HI}\) significantly reduces the loss, despite it being an uninformative feature on its own. However, this does not affect the status of the RAR as the global minimum.
The dashed and dotted lines in Fig. 7 show analogous results obtained from the RAR and \(\Sigma_{\rm tot}-J_{\rm obs}\) mock data, respectively (see Section 3.5). The mock RAR lines are remarkably consistent with the real data, showing that the RAR alone is approximately able to account for all dynamical correlations present in SPARC. Minor discrepancies arise in the case of predicting \(V_{\rm obs}\) from \(L_{3.6}\) at \(\theta\sim 0\) and \(g_{\rm obs}\) from \(g_{\rm bar}\). In the former, a stronger relation is found in the real data compared to the mock data, while in the latter, the relation is stronger in the mock data. These deviations are significant at less than the \(\sim\)2\(\sigma\) level given the formal uncertainties derived from the test-train splitting. However, we do not expect the mock data generation to capture all the nuances of the real galaxies: the improved prediction at \(g_{\rm obs}\) can be attributed to the fact that the mock RAR data contains no intrinsic scatter (while that in the real data is \(\sim 0.1\) dex; Lelli et al., 2017), and all discrepancies may be explained by minor systematics in the data such as variations in disk thickness, inclination or mass-to-light ratios across the galaxies. In contrast, the mock \(\Sigma_{\rm tot}-J_{\rm obs}\) data does not accurately reproduce the dynamical SPARC correlations. Most notably, the RAR shows a much larger loss when predicting \(\mathcal{D}\) from \(g_{\rm bar}\), compared to the mock RAR data set which captures the \(\Sigma_{\rm tot}-J_{\rm obs}\) correlation with a loss comparable to that of SPARC. It can also be seen that the \(\Sigma_{\rm tot}-J_{\rm obs}\) mock has behaviour discrepant with SPARC when predicting from \(L_{3.6}\). We verify that the agreement between SPARC and the RAR mock, and disagreement with the \(\Sigma_{\rm tot}-J_{\rm obs}\) mock, extends also to features not shown in Fig. 7.
We present the corresponding ET relative permutation importances of \(g_{\rm bar}\), \(\Sigma_{\rm tot}\), \(L_{3.6}\), \(M_{\rm HI}\), and \(T\) in Fig. 8. This quantifies the average increase of \(\mathcal{L}_{0}\) per observation when a feature is permuted, normalized such that the sum of all importances is unity at each \(\theta\). These five features account for the majority of the ET-inferred importance. In the real data we observe a weak importance of galaxy type around \(V_{\rm obs}\), while at \(g_{\rm obs}\) the ET recognizes \(g_{\rm bar}\) as clearly the most important feature despite its strong correlation with \(\Sigma_{\rm tot}\). In
Figure 6: Mean test set loss over \(10,000\) test-train splits when predicting \(g_{\rm obs}\) using the ET regressor with a single feature (diagonal cells) or pairs of features (off-diagonal cells). The diagonal cells are ordered by increasing loss. \(g_{\rm bar}\) alone is the optimal predictor: its performance is diminished by combining it with any other feature. No other feature, or combination thereof, yields accurate prediction of \(g_{\rm obs}\). The mean and standard deviation of the four lowest loss (yellow) cells are \(g_{\rm bar}:0.894\pm 0.152\); \(g_{\rm bar}+i:0.910\pm 0.149\); \(g_{\rm bar}+e_{\rm N}:0.912\pm 0.152\) and \(g_{\rm bar}+r:0.920\pm 0.163\).
case of \(J_{\rm obs}\), the permutation score is significant for both \(g_{\rm bar}\) and \(\Sigma_{\rm tot}\); however, from Fig. 7 we know that the dependence of \(J_{\rm obs}\) on \(\Sigma_{\rm tot}\) is a consequence of the RAR and, therefore, this is again just a case of \(g_{\rm bar}\) and \(\Sigma_{\rm tot}\) being strongly correlated. The dashed and dotted lines again show the permutation importances for the mock RAR and \(\Sigma_{\rm tot}-J_{\rm obs}\) data respectively. While the former are again in agreement with the real data, the importance of \(g_{\rm bar}\) and \(\Sigma_{\rm tot}\) in the \(\Sigma_{\rm tot}-J_{\rm obs}\) mocks disagrees strongly. This shows that the \(\Sigma_{\rm tot}-J_{\rm obs}\) relation is a reflection of the RAR, not the other way around. Qualitatively similar conclusions hold for the decision tree importance scores calculated as the average impurity decrease due to a feature.
We repeat this analysis with the NN regressor, optimizing the loss \(\mathcal{L}\). We focus only on the most important features--\(g_{\rm bar}\), \(\Sigma_{\rm tot}\), and \(L_{3.6}\)--due to the increased computational cost associated with training the NN. This plot is not shown because, despite the use of a different loss function, we observe almost identical trends to the ET analysis (Fig. 7). This shows that the results are not sensitive to the regressor used. Finally, to cross-check the ML models' findings we calculate the Kendall correlation coefficient of \(\mathcal{D}(\alpha,\beta)\) with \(g_{\rm bar}\), \(\Sigma_{\rm tot}\), \(L_{3.6}\), \(T\), \(M_{\rm HI}\) and \(R_{\rm eff}\) (Fig. 9). We again find that the strongest dynamical-to-baryonic correlation is between \(g_{\rm bar}\) and \(g_{\rm obs}\), with strong consistency between the real and RAR mock data. Again as in the ET analysis, the mock \(\Sigma_{\rm tot}-J_{\rm obs}\) data fails to explain the dynamical correlations of SPARC: it predicts too strong a \(\Sigma_{\rm tot}-J_{\rm obs}\) relation, and too weak a \(g_{\rm obs}-g_{\rm bar}\), \(L_{3.6}-V_{\rm obs}\) and \(T-V_{\rm obs}\) relation. Scatter in the mock data or additional noise or systematics will only weaken these trends further in the mock data. By contrast, the mock RAR data recovers the \(\Sigma_{\rm tot}-J_{\rm obs}\) relation (indeed the full \(\Sigma_{\rm tot}-\mathcal{D}\) relation) near-perfectly.
## 5 Discussion
### Implications of results
We demonstrate that in the SPARC sample the RAR is both necessary and sufficient to account for the full set of correlations among galaxy properties involving dynamical variables, so that it may be considered the fundamental relation of late
Figure 7: Averaged test set loss \(\mathcal{L}_{0}\) for predicting \(\mathcal{D}(\alpha,\beta)=V_{\rm obs}^{\alpha}/r^{\beta}\) from individual or combinations of features using the ET regressor. The generic dynamical variable \(\mathcal{D}\) is parametrized by \(\tan\theta\equiv\beta/\alpha\). Solid lines show the results in the SPARC data set, while dashed and dotted correspond to the mock RAR and \(\Sigma_{\rm tot}-J_{\rm obs}\) data respectively. _Left panels_: \(g_{\rm bar}\), \(\Sigma_{\rm tot}\) and \(L_{3.6}\) used individually as predictors, with shaded regions indicating \(1\sigma\) SPARC uncertainty. For clarity, we omit uncertainty bands for the mock data which are similar to those of the real data. _Right panels_: predictivity of other features individually and combinations of features (uncertainty bands omitted). The _lower_ panels show the ratios of the SPARC and mock-evaluated losses. The RAR is the single strongest 1D relation between galaxy baryonic and dynamical properties, and the mock data using it accounts for the remaining SPARC correlations which therefore appear as nothing but aspects of the RAR. On the other hand, the \(\Sigma_{\rm tot}-J_{\rm obs}\) mock data fails to explain the RAR as well as the other observed dynamical correlations.
type galaxy dynamics. The fact that our ML regressors are unable to produce better fits to the \(g_{\rm bar}-g_{\rm obs}\) relation than the simple analytic expressions further testifies to the relative simplicity of the SPARC RAR, the information in which is almost fully captured by simple 1-2 parameter functions. Consequently, the primary task in comprehending these galaxies' dynamics lies in understanding the emergence of the RAR and the non-emergence of any alternative or additional correlation.
While the RAR is an immediate consequence of MOND (Milgrom, 1983a), its status in a \(\Lambda\)CDM cosmology is debated. Several authors argue that aspects of the RAR are highly challenging to reproduce in a \(\Lambda\)CDM-based galaxy formation scenario (Wu and Kroupa, 2015; Desmond, 2017; Tenneti et al., 2018; McGaugh, 2015), while others argue that it emerges more or less naturally in hydrodynamical simulations or semi-analytic models (van den Bosch and Dalcanton, 2000; Keller and Wadsley, 2017; Ludlow et al., 2017). Even if the latter were so, the discovery that the RAR is apparently _the only_ fundamental correlation of late-type galaxy dynamics provides an additional hurdle for such models. This adds to the existing challenges faced by \(\Lambda\)CDM models attempting to explain the RAR (Di Cintio and Lelli, 2016; Desmond, 2017; Keller and Wadsley, 2017; Ludlow et al., 2017; Paranjape and Sheth, 2021; Desmond, 2023).
In Section 4.2, we showed that \(L_{3.6}\) is individually the best predictor of \(V_{\rm obs}\), as expected from the TFR (Tully and Fisher, 1977; McGaugh et al., 2000). This result was replicated using mock RAR data, indicating that the TFR-like \(V_{\rm obs}-L_{3.6}\) relation can be derived from the RAR. It is important to note however that \(V_{\rm obs}\) is a local quantity, varying across a single galaxy's RC, while the TFR is defined in terms of an overall characteristic velocity. The correlation is strongest when using the flat rotation velocity (Ponomareva et al., 2017, 2018; Lelli et al., 2019), which is only captured in the final \(V_{\rm obs}\) values of some SPARC galaxies. It is therefore unsurprising that the \(V_{\rm obs}-L_{3.6}\) relation we find is significantly weaker than the true TFR. Additionally, we found that \(J_{\rm obs}\) is well predictable individually by both \(g_{\rm bar}\) and \(\Sigma_{\rm tot}\) (which are strongly correlated as the former is calculated from the latter), with the \(\Sigma_{\rm tot}-J_{\rm obs}\) correlation being a reflection of the RAR. We demonstrated that the \(\Sigma_{\rm tot}-J_{\rm obs}\) correlation alone is insufficient to replicate the dynamical correlations in SPARC: it fails to accurately reproduce the observed RAR, while the RAR completely explains the observed \(\Sigma_{\rm tot}-J_{\rm obs}\) relation. Additionally, its residuals are correlated with baryonic properties in the data, and if it were the correlation driving the dynamics it would induce residual correlations in the RAR, which are not observed.
We hypothesize that the \(\Sigma_{\rm tot}-J_{\rm obs}\) correlation is caused by the specific disk matter distribution of SPARC galaxies, in conjunction with the RAR determining the dynamics. To test this, we calculate \(g_{\rm bar}\) for model galaxies using (1) a
Figure 8: Relative permutation importances of selected features for predicting \(\mathcal{D}(\alpha,\beta)\) using the ET regressor. Solid lines denote SPARC data (with \(1\sigma\) uncertainty bands), while the dashed and dotted lines indicate the mock RAR and mock \(\Sigma_{\rm tot}-J_{\rm obs}\) data, respectively. The lower panel shows the difference between the mock and SPARC importances. The features shown account for practically all of the model's predictivity at all \(\theta\). As in Fig. 7, the consistency between the real and mock data shows that the relative importances of all features derive from the RAR in conjunction with the non-dynamical correlations present in SPARC.
Figure 9: Kendall’s \(\tau\) correlation coefficient between \(\mathcal{D}(\alpha,\beta)\) and \(g_{\rm bar}\), \(\Sigma_{\rm tot}\), \(L_{3.6}\), \(T\), \(M_{\rm HI}\). Solid lines represent the SPARC data, dashed lines the mock RAR data, dotted lines the mock \(\Sigma_{\rm tot}-J_{\rm obs}\) data and bands the \(1\sigma\) SPARC uncertainties. The lower panel shows the difference between the mock and SPARC correlation coefficients, illustrating the superiority of the RAR mock data. The results corroborate and reinforce those of Figs. 7 and 8.
thin exponential density profile (Mo et al., 2010, Eq. 11.30), and (2) the Kuzmin disk (Kuzmin, 1956; Binney & Tremaine, 2008, Eq. 2.68a) for the baryonic mass distribution. We then calculate \(g_{\rm obs}\) and hence \({\cal D}(\theta)\) from the RAR IF. In both cases, we observe a maximum correlation between \({\cal D}(\theta)\) and surface brightness around \(\theta=\arctan(2/3)\), i.e. \(J_{\rm obs}\), as in the real data, although the correlation of the disk solution peaks closer to \(\theta=\arctan(1/2)\), i.e. \(g_{\rm obs}\). This shows that the fact that the \(\Sigma_{\rm tot}-{\cal D}\) relation peaks almost exactly at \(J_{\rm obs}\) in SPARC is somewhat coincidental because it depends on the baryonic mass profile; for a different sample the extrema of \({\cal L}_{0}\) and \(\tau\) may occur at a \(\theta\) different by some tenths of a radian.
We assumed that both the RAR and \(\Sigma_{\rm tot}-J_{\rm obs}\) relations have no intrinsic scatter when generating the mock data. While this assumption is approximately true for the RAR (Desmond, 2023 measured it to be \(0.034\pm 0.002\) dex), it is unlikely to hold for \(\Sigma_{\rm tot}-J_{\rm obs}\). However, adding uncorrelated intrinsic scatter to the latter relation can only weaken the correlations, deteriorating the prediction of the RAR still further. While it is possible to introduce correlations of the residuals with other variables and fine-tune them to reproduce the correlations seen in the data, this will simply approximate the success of the mock data generated solely according to the RAR. Thus the RAR satisfies the fundamentality criterion of Section 1 that it can explain the other dynamical correlations, while the \(\Sigma_{\rm tot}-J_{\rm obs}\) relation does not. Since that relation is the second strongest in the data (measured by \({\cal L}_{0}\), \({\cal L}\) or \(\tau\)), it follows that no dynamical correlation in the SPARC data set besides the RAR can be fundamental: the rest will make even worse predictions for the RAR and \({\cal D}(\theta)\) generally.
Within the MOND paradigm our analysis is capable of shedding light on the presence or absence of the EFE. We confirm previous results (Haghi et al., 2016; Chae et al., 2020, 2021, 2022; Desmond, 2023) that the inclusion of the EFE improves the fit to galaxy kinematics (i.e. the Simple IF + EFE relative to the RAR IF), and show that the largest partial correlation of \(g_{\rm obs}\) at fixed \(g_{\rm bar}\) is with the external field strength \(e_{\rm N}\) (Fig. 2). This suggests the external field to be the most important quantity besides \(g_{\rm bar}\) in the determination of \(g_{\rm obs}\). However, when performing test-train splits the Simple IF + EFE no longer outperforms the RAR IF (Fig. 5), and none of our regressors can utilize \(e_{\rm N}\) to improve the prediction of \(g_{\rm obs}\) (shown for the ET regressor in Fig. 6). We therefore do not find significant evidence for the EFE, and conclude that better data is required to establish the validity or breakdown of the strong equivalence principle in galaxy RCs.
### Caveats and systematic uncertainties
#### 5.2.1 Scope of the conclusions
As mentioned at the end of Section 1, it is important to stress that our results on the fundamentality of the RAR do not extend to late-type galaxy dynamics _in their totality_, but only to _equilibrium radial_ dynamics. The RAR is limited to motion in the plane of the disk and does not cover the perpendicular, vertical motion which also possesses much interesting dynamics. Examples include vertical velocity dispersion in disks (e.g. van der Kruit & Freeman, 1984; Bershady et al., 2010), vertical "breathing modes" (e.g. Kumar et al., 2022) and the more complicated anisotropic motions of bars and bulges. There also exist dynamical scaling relations in late-type galaxies that involve the vertical gravitational field such as the central baryonic-dynamical surface density relation (Lelli et al., 2016). The perpendicular structure of the disk is important for phenomena such as the "Freeman law," an observed upper limit to disk surface densities (Freeman, 1970; Bosma & Freeman, 1993; McGaugh et al., 1995), which may arise from stability requirements involving the full 3D structure (Banik et al., 2018). Out-of-equilibrium and non-axisymmetric dynamics cause deviations from the regular radial motions encapsulated by the RAR and are therefore also not covered by our analysis.
The other important restriction to the scope of our conclusion, even within the radial dynamics of late-types, is that the RAR is not a substitute for the galaxy-by-galaxy study of RCs. Our conclusions therefore only hold for equilibrium radial dynamics when stacked across many galaxies and viewed statistically, i.e. _on average_. Although the RAR contains the correlation of the RCs with the baryonic mass distribution, particular points may be of special interest without standing out in the RAR plane. An example is "Renzo's rule," a point-by-point correlation of features in the baryonic mass profile with features of the RCs (Sancisi, 2004; Famaey & McGaugh, 2012). While obvious in case the baryons dominate the mass budget, this is difficult to understand in \(\Lambda\)CDM if the RC is dominated by the dark matter as this is supposed to be featureless and uncorrelated with the baryons. Velocity measurements for which this is pronounced are rare, so a model may predict the statistics of the RAR successfully on average while completely missing this behaviour: it would introduce only a few, relatively small residuals. Thus it cannot be said that explaining the RAR alone is sufficient to account in full generality for late-type galaxy dynamics, but only for the average equilibrium radial dynamics, parametrized by \({\cal D}\), that we have studied here.
#### 5.2.2 Goodness-of-fit & model complexity
Fig. 5 shows that while our NN performs comparably to the RAR IF and Simple IF + EFE, the losses achieved by the ET and XGB regressors are systematically higher. This is intriguing, given that the ML models are nearly arbitrarily complex and hence should be able to fit the data better. The suboptimal performance of the ML models could potentially be attributed to three factors: (1) the sampling of the underlying RAR distribution in SPARC, (2) overfitting and imperfect hyperparameter optimization, and/or (3) edge effects. We discuss each of these in turn.
First, the samples on the \(g_{\rm bar}-g_{\rm obs}\) plane are not independent and identically distributed, but instead are strongly correlated within each galaxy. Coupled with the relatively small sample size, this may result in inadequate representation of the test set by the training set galaxies, leading to suboptimal performance of the ET and XGB models. These models can only interpolate discontinuously between past observations, whereas the analytic functions, constrained by their functional form, and NN, which smoothly interpolates between past observations, are more robust in this regard. When training and evaluating the ML regressors on all data (i.e., without a test-train split), they "outperform" the analytic functions due to overfitting.
Second, as depicted in the lower panel of Fig. 4, the XGB and ET models generate a markedly jagged fit, suggesting potential overfitting on training samples. Enhancing the generalization capability of ML models often involves hyperparameter optimization, a crucial aspect of our analysis. We employed an automatic hyperparameter optimization using Optuna, conducting the optimization on the full training data until convergence and selecting the hyperparameters that minimize the loss on validation samples selected via cross-validation. The choice of the number of folds is crucial as it determines the balance between training and validation; both extremes may lead to a non-representative validation loss. We opted for 5-fold cross-validation to optimize the hyperparameters, although the goodness-of-fit was similar when using either 3- or 4-fold cross-validation instead. The prime method of reducing overfitting is to reduce the complexity of the regressors, for example by restricting the maximum depth of decision trees or the minimum number of samples required to split a node. Although these are already optimized by Optuna (see Table 2), it is possible that a further small improvement could be achieved by hand-tuning them. Indeed the complexity threshold before overfitting occurs appears to be very low on the RAR data: as shown in Fig. 5, the Simple IF + EFE fits the entire SPARC data set better than the RAR IF but worse when train-test splitting is implemented. This function with only two free parameters is therefore likely overfitted.
Third, in Fig. 4 the performance of the ET and XGB models deteriorates near the boundaries of the data set, where the average loss per point is higher than average. This limitation arises from the decision trees' inability to extrapolate beyond past observations, resulting in predictions biased towards the mean. The impact of this on the loss distribution is, however, on the order of a few per cent, and is therefore not the main cause of the poor performance of the ET and XGB regressors.
#### 5.2.3 Neural network convergence
Although our NN allows derivatives to be propagated fully into the loss function and achieves as high accuracy as the RAR IF, it is not without problems. Despite extensive training, convergence to a global minimum of the loss is in practice not guaranteed due to the high dimensionality of the parameter space of the NN weights and biases. Indeed, whether an NN has converged to the global minimum is generally unclear. One way to address this is to use an ensemble of NNs, where the predictions are stacked to quantify the predictive disagreement arising purely from training (Stiskalek et al., 2022). An alternative approach to enhance the accuracy and generalizability is to incorporate additional information ("physics") during training to create a "physics-informed neural network" (PINN; Karniadakis et al., 2021; Cuomo et al., 2022). This may involve modification to the network architecture, loss function and/or activation functions. This however comes at the cost of reducing the agnosticism and purely empirical nature of the NN, and the simple exercise of replacing the linear connection of the input and output layer in our NN with the RAR IF (optimizing \(a_{0}\) alongside the other NN parameters) produced no improvement relative to the RAR IF alone. This again supports the conclusion that residuals of the RAR around the RAR IF are devoid of physical information.
#### 5.2.4 Raw data reduction & SPARC selection
The SPARC data contains various systematics unaccounted for in our mock data or likelihood model. These include inclination variations across the disks in the tilted-ring fits, potential variations in mass-to-light ratios and disk thickness across the RC, and other assumptions made about the structure of the baryonic mass when solving for the RC, such as the vertical distribution of the gas density (Di Teodoro and Fraternali, 2015). The data is also selected inhomogeneously, based on the high-quality RCs that happen to have been derived in the literature. Nevertheless, SPARC provides a thorough sampling of the physical parameter space occupied by rotationally supported galaxies, and there is no obvious reason to expect systematics or selection to bias our results. Some suggestions for the more sophisticated modelling required to corroborate our conclusions with greater precision are given below.
### Suggestions for further work
#### 5.3.1 Increasing the robustness of the inference
Our likelihood functions Eqs. (9) and (11) assume that all of the points on the RAR are independent. This is not true in detail because the methods used to derive the RC of a given galaxy necessarily couple the points, and several of the parameters relevant to \(V_{\mathrm{obs}}\) and \(V_{\mathrm{bar}}\)--distance, inclination and mass-to-light ratios--are properties of the galaxies as a whole. It is possible to infer these parameters simultaneously with a fit to the RAR (Desmond, 2023), but this assumes a functional form for the fit and hence is not useful for our more agnostic analysis here. It would however be possible to construct a covariance matrix for \(g_{\mathrm{obs}}\) and \(g_{\mathrm{bar}}\) within each galaxy by Monte Carlo-sampling \(D\), \(i\) and \(\Upsilon_{x}\) and use that in the Gaussian likelihood, thus assuming that galaxies are independent while RC points for a given galaxy are not.
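As an illustration of the Monte Carlo construction suggested above, the sketch below perturbs the galaxy-wide nuisance parameters (distance, inclination and mass-to-light ratio) of a single mock galaxy and estimates the joint covariance of its \(g_{\mathrm{obs}}\) and \(g_{\mathrm{bar}}\) points. The numbers and the scalings with \(D\), \(i\) and \(\Upsilon\) are simplified placeholders; a real implementation would follow the SPARC conventions.

```python
# Sketch: Monte Carlo covariance of (g_obs, g_bar) within one galaxy.
# All numbers and parameter scalings are simplified, illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)

# Nominal rotation-curve data for one hypothetical galaxy
r = np.array([1.0, 2.0, 4.0, 8.0])                     # kpc at fiducial distance
v_obs = np.array([60.0, 85.0, 100.0, 105.0])           # km/s, inclination-corrected
v_disk2 = np.array([900.0, 2500.0, 3600.0, 3600.0])    # (km/s)^2 at Upsilon = 1
v_gas2 = np.array([100.0, 400.0, 900.0, 1600.0])
D0, i0, ups0 = 10.0, np.deg2rad(60.0), 0.5             # nominal D, i, M/L

def accelerations(D, i, ups):
    rr = r * D / D0                          # radii rescale with distance
    vv = v_obs * np.sin(i0) / np.sin(i)      # re-correct for inclination
    g_obs = vv**2 / rr
    g_bar = (ups * v_disk2 + v_gas2) * (D / D0) / rr    # simplified baryonic term
    return np.concatenate([g_obs, g_bar])

# Monte Carlo over the galaxy-wide nuisance parameters
samples = np.array([
    accelerations(rng.normal(D0, 1.0),
                  rng.normal(i0, np.deg2rad(3.0)),
                  rng.lognormal(np.log(ups0), 0.1))
    for _ in range(5000)
])
cov = np.cov(samples, rowvar=False)          # joint covariance of all RC points
print(cov.shape)                             # (2 * N_points, 2 * N_points)
```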
A truly irreproachable demonstration of the fundamentality of the RAR would require showing that its residuals correlate with _no_ other galaxy or environment property. Here we are restricted to features present in the SPARC data set plus \(e_{\mathrm{N}}\) as calculated in Chae et al. (2021). Other interesting features could include colour, star formation rate and environmental density (or functions thereof) on a range of scales, which could be acquired by cross-correlating SPARC with other surveys and repeating the calculations of Chae et al. (2021) for other environmental measures. As an example, \(\sim 2/3\) of the SPARC galaxies were observed in the SDSS and hence possess all properties derivable from that survey.
Our analysis is not tied to a specific functional form for the RAR: to benchmark the ML regressor results we have employed the RAR IF and Simple IF + EFE simply as example \(g_{\mathrm{bar}}-g_{\mathrm{obs}}\) relations that are known to fit the data reasonably well. This is fortunate, as Desmond et al. (2023) demonstrated using symbolic regression that there exist more accurate and simpler functions than these. Utilizing the optimal functions from Desmond et al. would reduce the loss of the \(g_{\mathrm{bar}}-g_{\mathrm{obs}}\) relation, but would not alter the conclusion that incorporating additional features does not improve accuracy.
However, given the greater success of symbolic regression than numerical ML methods it would be interesting to try to construct multi-variable symbolic fits for \(g_{\rm obs}\) using all available features. This would likely be less prone to overfitting when using an information-theoretic model selection criterion such as the minimum description length principle (Bartlett et al., 2022), and to confusion when irrelevant features are considered. Feature importance may be assessed by the frequency of occurrence of variables in highly-ranked functions, or by the difference in accuracy or description length between similar functions including or not including a variable. While currently beyond the complexity limit of Exhaustive Symbolic Regression (Bartlett et al., 2022), this could be achieved by genetic algorithms such as Operon(Burlacu et al., 2020).
#### 5.3.2 Additional data
We have selected the SPARC sample as representative of the late-type galaxy population, given its extensive range in morphology (S0 to Irr), luminosity and surface brightness, and unrestrictive selection criteria (Lelli et al., 2016). To assess the impact of sample variance, we employed test-train splitting (a form of jackknifing) and Monte Carlo mock data generation, showing that our findings would be applicable to all SPARC-like data realizations. However, to establish the RAR as the fundamental dynamical correlation for late-type galaxies in general, further analysis on alternative data sets, such as LITTLE THINGS (Iorio et al., 2017) or PROBES (Stone et al., 2022), is necessary. Although these larger data sets may yield increased statistical uncertainties due to lower quality requirements, the same conclusions should be reached if the error model is reliable.
Ideally, the results presented here would be generalized further to the dynamics of galaxies of all types. The RAR has been investigated in diverse contexts, such as ultra-diffuse galaxies (e.g. Freundlich et al., 2022), local dwarf spheroidals (e.g. McGaugh and Wolf, 2010; McGaugh and Milgrom, 2013), low-acceleration regions including the outer Milky Way (Oman et al., 2020), early-type galaxies (Lelli et al., 2017; Shelest and Lelli, 2020; Chae et al., 2019, 2020; Tian and Ko, 2017) and galaxy groups and clusters (Chan and Del Popolo, 2020; Tian et al., 2020; Pradyuma and Desai, 2021; Gopika and Desai, 2021). Although these data sets contain sufficient information for our analysis, the properties employed as features may differ, and additional systematic uncertainties may arise. However, given that the SPARC galaxies are not qualitatively different to others--and are related to them evolutionarily--one would expect conclusions concerning the fundamentality of the RAR to generalize. Exploring the RAR using qualitatively different observables, such as stacked weak lensing (Brouwer et al., 2021), could provide valuable consistency checks and extend our results to new regimes.
#### 5.3.3 Testing \(\Lambda\)CDM
It would be interesting to perform our analysis on mock data generated by \(\Lambda\)CDM (through a semi-analytic model and/or hydrodynamical simulation), to see if, as in the data, \(g_{\rm obs}\) is uniquely predictable from \(g_{\rm bar}\), \(\{\alpha=2,\beta=1\}\) locates the \(V_{\rm obs}^{\alpha}/r^{\beta}\) most strongly correlated with baryonic properties and the \(g_{\rm bar}-g_{\rm obs}\) relation accounts for all dynamical correlations. It seems these would be unlikely coincidences, as neither \(g_{\rm bar}\) nor \(g_{\rm obs}\) has any particular significance in that theory.
## 6 Conclusion
We explore the origin of galaxies' surprising simplicity by studying the radial acceleration relation (RAR), which generalizes and subsumes the statistical regularities of late-type galaxies' radial (in-disk) dynamics. In particular, we question whether the RAR is _the fundamental_ relation of late-type radial dynamics, by which we mean that
1. its residuals do not correlate significantly with any other quantity,
2. it is the strongest dynamical-to-baryonic correlation,
3. it is sufficient to explain all other correlations involving radial dynamics (in conjunction with the non-dynamical correlations of the data set we use), and
4. no other relation possesses these properties.
To do so we develop machine learning regressors for predicting dynamic variables from baryonic ones, establishing feature importance directly and by determining which variables contribute usefully to the prediction. We supplement this by a partial correlation analysis of RAR residuals and validate our results on mock data that matches the non-dynamical properties of the Spitzer Photometry & Accurate Rotation Curves (SPARC) data while allowing control of its kinematic correlations.
We find the RAR to satisfy all of these criteria for unique fundamentality: features besides \(g_{\rm bar}\) only degrade the prediction of \(g_{\rm obs}\) (which in fact cannot significantly exceed simple fitting functions in accuracy), predicting a general dynamical variable \(V_{\rm obs}^{\alpha}/r^{\beta}\) picks out \(\{\alpha=2,\beta=1\}\), i.e. \(g_{\rm obs}\), as the most predictable from baryonic features, and mock data including the RAR but no other dynamical correlation has all \(\{\alpha,\beta\}\) correlations approximately consistent with the real data. Subsidiary correlations such as between luminosity and velocity or surface brightness and jerk are explicable as projections of the RAR, but not vice versa. Indeed, the second strongest dynamical correlation in the data possesses _none_ of the properties (i) - (iii) above. The RAR therefore appears to be the fundamental correlation of late-type radial galaxy dynamics; consequently, to explain such dynamics it is necessary to explain the RAR, and sufficient to explain the RAR plus the lack of any other significant, partially independent correlation. This poses an extremely stringent test for theories of galaxy formation, which those based on modified dynamics readily pass while those based on concordance cosmology may not.
## Data availability
The SPARC data is available at [http://astroweb.cwru.edu/SPARC/](http://astroweb.cwru.edu/SPARC/). The code underlying this article is available at [https://github.com/Richard-Sti/RARinterpret](https://github.com/Richard-Sti/RARinterpret) and other data will be made available on reasonable request to the authors.
## Acknowledgements
We thank Julien Devriendt, Harley Katz, Federico Lelli, Stacy McGaugh, Mordehai Milgrom, Adrianne Slyz and Tariq Yasin for useful comments and discussions. We also thank Jonathan Patterson for smoothly running the Glamdring Cluster hosted by the University of Oxford, where the data processing was performed. RS acknowledges financial support from STFC Grant No. ST/X508664/1. HD is supported by a Royal Society University Research Fellowship (grant no. 211046). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 693024). For the purpose of open access, we have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
|
2302.14223 | Bayesian Nagaoka-Hayashi Bound for Multiparameter Quantum-State
Estimation Problem | In this work we propose a Bayesian version of the Nagaoka-Hayashi bound when
estimating a parametric family of quantum states. This lower bound is a
generalization of a recently proposed bound for point estimation to Bayesian
estimation. We then show that the proposed lower bound can be efficiently
computed as a semidefinite programming problem. As a lower bound, we also
derive a Bayesian version of the Holevo-type bound from the Bayesian
Nagaoka-Hayashi bound. Lastly, we prove that the new lower bound is tighter
than the Bayesian quantum Cramer-Rao bounds. | Jun Suzuki | 2023-02-28T01:01:25Z | http://arxiv.org/abs/2302.14223v2 | # Bayesian Nagaoka-Hayashi Bound for Multiparameter Quantum-State Estimation Problem
###### Abstract
In this work we propose a Bayesian version of the Nagaoka-Hayashi bound when estimating a parametric family of quantum states. This lower bound is a generalization of a recently proposed bound for point estimation to Bayesian estimation. We then show that the proposed lower bound can be efficiently computed as a semidefinite programming problem. As a lower bound, we also derive a Bayesian version of the Holevo bound from the Bayesian Nagaoka-Hayashi bound. Lastly, we prove that the new lower bound is tighter than the generalized Personick bound, studied by Rubio and Dunningham.
## I Introduction
Bayesian parameter estimation with prior knowledge of unknown parameters naturally enters when estimating signals in communication processes [1]. Quantum communication is a promising near-term communication technology, which can transmit information more securely and efficiently than classical protocols. There have been many investigations of how classical and/or quantum information can be transmitted faithfully over a given noisy quantum channel; see, for example, [2; 3; 4]. Quantum Bayesian estimation is a key ingredient for efficiently decoding classical information encoded in quantum states. Quantum Bayesian estimation has also received great attention in the field of quantum sensing and quantum metrology [5; 6; 7; 8].
Quantum Bayesian estimation was initiated about forty years ago by Personick [9; 10]. Due to recent advances in quantum estimation theory, the quantum Bayesian estimation problem has received renewed interest from the community. Several quantum Bayesian bounds have been proposed for the Bayes risk; see, for example, [9; 10; 11; 12; 13; 14; 15; 16]. However, most of them do not capture the genuine quantum nature of the problem, since known lower bounds are essentially direct translations of classical Bayesian bounds. In particular, previously proposed lower bounds are derived by applying a Cauchy-Schwarz-type inequality with respect to a certain choice of inner product on an operator space. Holevo initiated the investigation of nontrivial lower bounds for quantum estimation in the context of general statistical decision problems [17]. He also analyzed lower bounds for the Bayes risk based on quantum Fisher information matrices [18; 19]. In particular, he gave a thorough analysis of the Gaussian shift model in the Bayesian setting.
When estimating non-random parameters, the Holevo bound established a unique feature of quantum estimation theory [21; 22]. This is because it is expressed as a certain optimization problem without the use of any quantum Fisher information matrix. Later, Nagaoka proposed a tighter bound [22] for two-parameter estimation. This lower bound is based on a different statistical problem in which one aims at finding an approximate diagonalization of two noncommuting matrices [23]. Nagaoka's result was generalized by Hayashi to any finite number of noncommuting matrices [24]. In a recent paper [25], the Nagaoka bound for parameter estimation was generalized to estimating any number of non-random parameters and was named the Nagaoka-Hayashi bound. In this paper, we attempt to make a further step toward developing genuine quantum bounds based on the Nagaoka-Hayashi bound in Bayesian parameter estimation. In particular, we propose Bayesian versions of two of these lower bounds, the Holevo bound and the Nagaoka-Hayashi bound. The unique nature of the proposed lower bounds is that they are expressed as certain optimization problems. We show that they are tighter than the lower bound recently proposed by Rubio and Dunningham [14].
This paper is organized as follows. Section 2 gives a brief summary of existing quantum Bayesian bounds. In Sec. 3, we propose two new quantum Bayesian bounds, and we also show that the proposed lower bound is tighter than the generalized Personick bound. In Sec. 4, we conclude and list some open problems. Technical lemmas are given in Appendix.
## II Preliminaries
Let \(\mathcal{H}\) be a finite dimensional Hilbert space and denote by \(\mathcal{S}(\mathcal{H})\) the totality of density matrices on \(\mathcal{H}\). A quantum parametric model is a smooth family of density matrices on \(\mathcal{H}\), \(\{S_{\theta}\,|\,\theta\in\Theta\}\), which is parametrized by \(n\)-parameter \(\theta=(\theta_{1},\theta_{2},\ldots,\theta_{n})\). In the following, we consider a regular model, in particular, \(S_{\theta}\) is full rank for all \(\theta\in\Theta\) and the state is differentiable with respect to the parameter sufficiently many times. A measurement is described by a set of positive semidefinite matrices \(\Pi_{x}\) on the measurement outcome label set \(\mathcal{X}\) such that all elements add to the identity \(I\). The set of operators corresponding to a quantum measurement is normally called a positive operator-valued measure (POVM), and it is
defined by
\[\Pi=\{\Pi_{x}\};\ \forall x,\Pi_{x}\geq 0,\ \sum_{x}\Pi_{x}=I, \tag{1}\]
where the measurement outcomes are labeled by real number \(x\in\mathcal{X}\subset\mathbb{R}\) and \(I\) is the identity operator on \(\mathcal{H}\). When the measurement outcomes are labelled with a continuous set, the condition on the POVM elements is \(\forall x,\Pi_{x}\geq 0,\ \int_{\mathcal{X}}dx\,\Pi_{x}=I\).
Measurement outcome is described by a random variable \(X\) that obeys the conditional probability distribution: \(p_{\theta}(x)=\mathrm{Tr}\left\{S_{\theta}\Pi_{x}\right\}\). (In the following, \(\mathrm{Tr}\left\{\cdot\right\}\) denotes the trace on \(\mathcal{H}\).) The expectation value for a random variable \(X\) is denoted by \(E_{\theta}[X|\Pi]=\sum_{x}xp_{\theta}(x)\). To infer the parameter value, we use an estimator that returns values on the set \(\Theta\): \(\hat{\theta}=(\hat{\theta}_{i}):\ \mathcal{X}\rightarrow\Theta\). The performance of the estimator is quantified by a loss function:
\[L:\ \Theta\times\mathcal{X}\rightarrow\{x\in\mathbb{R}|x\geq 0\}\cup\{\infty\}. \tag{2}\]
In this study, we adopt \(L(\theta,\hat{\theta})=\sum_{i,j}(\hat{\theta}_{i}-\theta_{i})\mathsf{W}_{ij} (\theta)(\hat{\theta}_{j}-\theta_{j})\). Here \(\mathsf{W}(\theta)=[\mathsf{W}(\theta)_{ij}]\) is an \(n\times n\) positive semidefinite matrix, called a weight (cost) matrix. As a special case, \(W\) can be parameter independent. In the language of statistical decision theory, the set \((\Pi,\hat{\theta})\) is called a quantum decision.
The main objective of parameter estimation about quantum states \(\{S_{\theta}\}\) is to find the best quantum decision \((\Pi,\hat{\theta})\) that minimizes the loss function. As the measurement outcomes are random, we need to further identify a risk for this optimization.
**Definition 1**.: _The Bayes risk for a given prior probability distribution \(\pi(\theta)\) on \(\Theta\) is defined by_
\[\mathsf{R}[\Pi,\hat{\theta}]:=\int_{\Theta}d\theta\pi(\theta)E_{\theta}\big{[} L\big{(}\theta,\hat{\theta}(X)\big{)}\big{|}\Pi\big{]}. \tag{3}\]
With this quantum Bayes risk, the objective is to find the best quantum decision that minimizes the risk, i.e. the minimization problem over \((\Pi,\hat{\theta})\):
\[\inf_{\Pi,\hat{\theta}}\mathsf{R}[\Pi,\hat{\theta}]. \tag{4}\]
Denoting the joint distribution by \(p(\theta,x):=\pi(\theta)p_{\theta}(x)=\mathrm{tr}\left(\pi(\theta)S_{\theta} \Pi_{x}\right)\), the Bayes risk (3) is also written as
\[\mathsf{R}[\Pi,\hat{\theta}]=E_{p}\big{[}L\big{(}\theta,\hat{\theta}(X)\big{)} \big{]}, \tag{5}\]
where \(E_{p}[\cdot]\) denotes the expectation value with respect to the joint distribution \(p(\theta,x)\).
In the following discussion, we will also express the Bayes risk in terms of the mean square error (MSE) matrix whose \(jk\) component is defined by
\[\mathsf{V}_{\theta,jk}[\Pi,\hat{\theta}]:=E_{\theta}\big{[}(\hat{\theta}_{j}( X)-\theta_{j})(\hat{\theta}_{k}(X)-\theta_{k})\big{|}\Pi\big{]}. \tag{6}\]
This then gives an alternative expression for the Bayes risk:
\[\mathsf{R}[\Pi,\hat{\theta}]=\int_{\Theta}d\theta\,\pi(\theta)\mathsf{Tr} \left\{\mathsf{W}(\theta)\mathsf{V}_{\theta}[\Pi,\hat{\theta}]\right\}, \tag{7}\]
where \(\mathsf{Tr}\left\{\cdot\right\}\) denotes the trace for matrices on the \(n\)-dimensional parameter space.
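As a concrete numerical illustration of the Bayes risk in Eqs. (3) and (7) (an aside, not part of the original text), the following Python sketch evaluates \(\mathsf{R}[\Pi,\hat{\theta}]\) for a toy one-parameter qubit model with a projective measurement, a uniform prior and a simple linear estimator.

```python
# Numerical sketch of the Bayes risk for a toy qubit model
# S_theta = (I + theta * sigma_z)/2 with a sigma_z measurement
# (illustrative choices, W = 1).
import numpy as np

sz = np.diag([1.0, -1.0])
I2 = np.eye(2)
povm = [(I2 + sz) / 2, (I2 - sz) / 2]        # projective POVM
outcomes = np.array([+1.0, -1.0])
est = lambda x: 0.4 * x                       # estimator theta_hat(x)

thetas = np.linspace(-0.5, 0.5, 401)
prior = np.ones_like(thetas)
prior /= np.trapz(prior, thetas)              # uniform prior on [-0.5, 0.5]

vals = []
for th in thetas:
    S = (I2 + th * sz) / 2
    p = np.array([np.trace(S @ P) for P in povm])   # p_theta(x) = Tr{S_theta Pi_x}
    loss = (est(outcomes) - th) ** 2                # quadratic loss, W = 1
    vals.append(np.sum(p * loss))                   # E_theta[L | Pi]

bayes_risk = np.trapz(prior * np.array(vals), thetas)
print(bayes_risk)
```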
### Quantum van Tree inequality
The classical van Tree inequality is based on the covariance inequality [1]. This inequality is applicable to bound the Bayesian MSE matrix
\[\mathsf{V}_{B}[\Pi,\hat{\theta}]:=\int_{\Theta}d\theta\,\pi(\theta)\mathsf{V} _{\theta}[\Pi,\hat{\theta}]. \tag{8}\]
The resulting matrix inequality is
\[\mathsf{V}_{B}[\Pi,\hat{\theta}] \geq\left(J_{B}[\Pi]\right)^{-1}, \tag{9}\] \[J_{B}[\Pi] :=J(\pi)+\int_{\Theta}d\theta\,\pi(\theta)J_{\theta}[\Pi],\] (10) \[J_{ij}(\pi) :=\int_{\Theta}d\theta\pi(\theta)\frac{\partial\ell_{\pi}}{ \partial\theta_{i}}\frac{\partial\ell_{\pi}}{\partial\theta_{j}}, \tag{11}\]
where \(\ell_{\pi}(\theta):=\log\pi(\theta)\) and \(J_{\theta}[\Pi]\) is the Fisher information matrix about the distribution \(p_{\theta}(x)\). The van Tree inequality can be generalized in order to include the parameter dependent weight matrix. This can be accomplished by the use of the Gill-Levit bound [26]. A quantum version of the Gill-Levit bound was recently proposed [16].
It was Personick who proposed a quantum version of the van Tree inequality for the Bayes risk [9; 10]. Without going into the details, we give his result. It is known that the Fisher information matrix is bounded by appropriate quantum Fisher information matrices \(J_{\theta}^{Q}\). Using this fact, one can derive a quantum van Tree inequality:
\[\mathsf{V}_{B}[\Pi,\hat{\theta}] \geq(J_{B}^{Q})^{-1}, \tag{12}\] \[J_{B}^{Q} :=J(\pi)+\int_{\Theta}d\theta\pi(\theta)J_{\theta}^{Q}.\]
With this inequality, one gets a lower bound for the Bayes risk (7) when the weight matrix \(\mathsf{W}\) is parameter independent.
\[\mathsf{R}[\Pi,\hat{\theta}]\geq\mathcal{C}_{\mathrm{VT}}:=\mathsf{Tr} \left\{\mathsf{W}(J_{B}^{Q})^{-1}\right\}. \tag{13}\]
A well-known example of a quantum Fisher information matrix is the symmetric logarithmic derivative (SLD) Fisher information matrix [27]. Originally, the lower bound (12) was proven in Personick's thesis [9; Sec.2.2.2] immediately after the pioneering work by Helstrom that formulated point estimation for quantum states [27]. In his thesis, he also derived quantum versions of the Bhattacharyya and Barankin bounds; however, these results appear to have come too early to be appreciated by the community at the time.
### Personick bound
Personick also proposed a different method to derive a lower bound for the Bayes risk in the same paper where he proposed the quantum van Tree inequality [10]. In the published paper, he considered one-parameter estimation, and then he proved this lower bound is tight. However, it is less known that he also derived a lower bound for the general \(n\)-parameter estimation problem [9].
Define the averaged states and the first moment by
\[\bar{S} =\int_{\Theta}d\theta\,\pi(\theta)S_{\theta}, \tag{14}\] \[\bar{S}_{j} =\int_{\Theta}d\theta\,\pi(\theta)\theta_{j}S_{\theta}, \tag{15}\]
where \(\bar{\ }\) denotes the averaged operators with respect to the prior distribution. Next, consider a set of Hermitian matrices \(L_{j}\) satisfying the so-called Bayesian version of the SLD equation:
\[\bar{S}_{j}=\frac{1}{2}(\bar{S}L_{j}+L_{j}\bar{S}). \tag{16}\]
For a positive definite averaged state \(\bar{S}\), the Bayesian SLD \(L_{j}\) is uniquely defined by the solution. Then, the real symmetric matrix \(K=[K_{jk}]\), the Bayesian SLD Fisher information matrix, is defined by
\[K_{jk}=\langle L_{j},L_{k}\rangle_{\bar{S}}=\frac{1}{2}\mathrm{Tr}\left\{\bar {S}(L_{j}L_{k}+L_{k}L_{j})\right\}. \tag{17}\]
Here, \(\langle X,Y\rangle_{\bar{S}}:=\mathrm{Tr}\left\{\bar{S}(X^{\dagger}Y+YX^{ \dagger})\right\}/2\) is the symmetrized inner product for linear operators \(X,Y\) on \(\mathcal{H}\) with respect to the state \(\bar{S}\), and \(X^{\dagger}\) denotes the Hermitian conjugation of \(X\).
For one parameter estimation, Personick proved the following inequality.
\[\mathrm{V}_{B}[\Pi,\hat{\theta}]\geq\overline{\theta^{2}}-K, \tag{18}\]
where \(\overline{\theta^{2}}=\int d\theta\,\pi(\theta)\theta^{2}\). Since the random parameter \(\theta\) is scalar, the second term is \(K=\mathrm{Tr}\left\{\bar{S}L_{1}^{2}\right\}\). (The original form of the second term is written as \(\mathrm{Tr}\left\{\bar{S}\bar{S}_{1}\right\}\) in Personick's work [10].)
Almost half a century after the seminal work by Personick, Rubio and Dunningham generalized the Personick bound based on a different approach [14]. They proved
\[\mathsf{V}_{B}[\Pi,\hat{\theta}]\geq\overline{\theta\theta^{\intercal}}-K, \tag{19}\]
where the first term is purely classical one defined by
\[\left[\overline{\theta\theta^{\intercal}}\right]_{jk}=\int_{\Theta}d\theta\,\pi(\theta)\theta_{j}\theta_{k}. \tag{20}\]
The second term is a contribution regarded as the quantum nature, and can be interpreted as the Bayesian version of the SLD Fisher information matrix. Their bound, Eq. (19) with a parameter-independent weight matrix, then takes the form
\[\mathcal{C}_{\mathrm{PRD}}=\mathsf{Tr}\left\{\mathsf{W}(\overline{\theta \theta^{\intercal}}-K)\right\}. \tag{21}\]
In the following, we call this lower bound the generalized Personick bound.
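As an aside, the sketch below computes the Bayesian SLD of Eq. (16) by solving the corresponding Sylvester equation, the quantity \(K\) of Eq. (17), and the resulting bound (18)/(21) for a toy one-parameter qubit model with a uniform prior; the model and prior are illustrative choices, not taken from the cited works.

```python
# Sketch: Bayesian SLD (16), Bayesian SLD Fisher information K (17) and the
# (generalized) Personick bound for a toy qubit model (illustrative choices).
import numpy as np
from scipy.linalg import solve_sylvester

sz = np.diag([1.0, -1.0])
I2 = np.eye(2)
thetas = np.linspace(-0.5, 0.5, 401)
prior = np.ones_like(thetas)
prior /= np.trapz(prior, thetas)

S = lambda th: (I2 + th * sz) / 2
S_bar = np.trapz(np.stack([w * S(t) for w, t in zip(prior, thetas)]), thetas, axis=0)
S1_bar = np.trapz(np.stack([w * t * S(t) for w, t in zip(prior, thetas)]), thetas, axis=0)
theta2_bar = np.trapz(prior * thetas**2, thetas)

# Bayesian SLD: (1/2)(S_bar L + L S_bar) = S1_bar is a Sylvester equation
L1 = solve_sylvester(S_bar / 2, S_bar / 2, S1_bar)
K = 0.5 * np.trace(S_bar @ (L1 @ L1 + L1 @ L1))   # Eq. (17), single parameter

print("Personick bound:", theta2_bar - K)          # Eq. (18) / Eq. (21) with W = 1
```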
## III Results
In this section, we propose a new lower bound for the Bayes risk, which is applicable for arbitrary weight matrix. To state the main theorem, we need to introduce several notations first.
### Alternative expression for the Bayes risk
Noting the MSE matrix \(\mathsf{V}\) consists of four terms
\[\mathsf{V}_{\theta,jk}[\Pi,\hat{\theta}]=\sum_{x}\hat{\theta}_{ j}(x)\hat{\theta}_{k}(x)\mathrm{Tr}\left\{S_{\theta}\Pi_{x}\right\}\\ -\theta_{j}\sum_{x}\hat{\theta}_{k}(x)\mathrm{Tr}\left\{S_{\theta }\Pi_{x}\right\}\\ -\sum_{x}\hat{\theta}_{j}(x)\mathrm{Tr}\left\{S_{\theta}\Pi_{x} \right\}\theta_{k}+\theta_{j}\theta_{k}, \tag{22}\]
we introduce the following Hermitian matrices on \(\mathcal{H}\):
\[\mathbb{L}_{jk}[\Pi,\hat{\theta}] =\sum_{x}\hat{\theta}_{j}(x)\Pi_{x}\hat{\theta}_{k}(x)\quad(j,k=1,2,\ldots,n),\] \[X_{j}[\Pi,\hat{\theta}] =\sum_{x}\hat{\theta}_{j}(x)\Pi_{x}\quad(j=1,2,\ldots,n).\]
Importantly, these matrices are solely defined by a quantum decision \(\Pi,\hat{\theta}\), and hence they are independent of model parameter \(\theta\). In the following, we often omit the argument \(\Pi\) and \(\hat{\theta}\), when it is clear from the context. With these definitions, the MSE matrix is expressed as
\[\mathsf{V}_{\theta,jk}[\Pi,\hat{\theta}]=\mathrm{Tr}\left\{S_{\theta}( \mathbb{L}_{jk}-\theta_{j}X_{k}-X_{j}\theta_{k}+\theta_{j}\theta_{k})\right\}.\]
We next define a matrix and a vector which are defined on the extended Hilbert space [25]. Consider the Hilbert space \(\mathbb{H}:=\mathbb{C}^{n}\otimes\mathcal{H}\), and define \(\mathbb{L}\) on \(\mathbb{H}\) whose \(jk\) component is given by \(\mathbb{L}_{jk}\). We also define a column vector \(X\) whose \(j\)th component is \(X_{j}\). We denote transpose of matrices and vectors with respect to \(\mathbb{C}^{n}\) by \((\cdot)^{\intercal}\). The fundamental inequality is stated in the following lemma, which is a variant of Holevo's lemma [21].
**Lemma 2** ([25]).: _For all POVMs and estimators, \(\mathbb{L}\) and \(X\) obey the matrix inequality:_
\[\mathbb{L}[\Pi,\hat{\theta}]\geq X[\Pi,\hat{\theta}]\left(X[\Pi,\hat{\theta}] \right)^{\intercal}. \tag{23}\]
The weighted trace of the MSE matrix then takes the form:
\[\mathsf{Tr}\left\{\mathsf{W}(\theta)\mathsf{V}_{\theta}\right\}\] \[=\sum_{j,k}\mathsf{W}_{jk}(\theta)\mathrm{Tr}\left\{S_{\theta}(\mathbb{L}_{jk}-\theta_{j}X_{k}-X_{j}\theta_{k}+\theta_{j}\theta_{k})\right\}\] \[=\mathbb{Tr}\left\{\mathbb{S}_{\mathsf{W}}\mathbb{L}\right\}-\mathbb{Tr}\left\{\mathsf{S}_{\mathsf{W}}X^{\mathsf{T}}\right\}-\mathbb{Tr}\left\{X\mathsf{S}_{\mathsf{W}}^{\mathsf{T}}\right\}+\mathsf{W}_{\theta},\]
where \(\mathbb{Tr}\left\{\cdot\right\}\) denotes the trace on the extended Hilbert space \(\mathbb{H}\). In this expression, we define
\[\mathbb{S}_{\mathsf{W},jk}(\theta) :=\mathsf{W}_{jk}(\theta)\otimes S_{\theta}, \tag{24}\] \[\mathsf{S}_{\mathsf{W},j}(\theta) :=\sum_{k}\mathsf{W}_{jk}(\theta)\theta_{k}S_{\theta},\] (25) \[\mathsf{W}_{\theta} :=\sum_{j,k}\theta_{j}\mathsf{W}_{jk}(\theta)\theta_{k}. \tag{26}\]
\(\mathbb{S}_{\mathsf{W}}:=[\mathbb{S}_{\mathsf{W},jk}]\) is an operator on the extended Hilbert space. \(\mathsf{S}_{\mathsf{W}}\in\mathbb{H}\) is a vector with Hermitian matrix elements. These quantities are determined by the quantum statistical model and the weight matrix.
After combining the above expressions and the integration with respect to the prior, we obtain the alternative form of the Bayes risk:
**Lemma 3**.: _The Bayes risk is expressed as_
\[\mathsf{R}[\Pi,\hat{\theta}]=\mathbb{Tr}\left\{\bar{\mathbb{S}}_{\mathsf{W}} \mathbb{L}\right\}-\mathbb{Tr}\left\{\bar{\mathsf{S}}_{\mathsf{W}}X^{\mathsf{ T}}\right\}-\mathbb{Tr}\left\{X\bar{\mathsf{S}}_{\mathsf{W}}^{\mathsf{T}} \right\}+\overline{\mathsf{W}}, \tag{27}\]
_where quantities with bars indicate the averaged quantities with respect to the prior._
\[\bar{\mathbb{S}}_{\mathsf{W}} :=\int_{\Theta}d\theta\,\mathbb{S}_{\mathsf{W}}\pi(\theta), \tag{28}\] \[\bar{\mathsf{S}}_{\mathsf{W}} :=\int_{\Theta}d\theta\,\mathsf{S}_{\mathsf{W}}\pi(\theta),\] (29) \[\overline{\mathsf{W}} :=\int_{\Theta}d\theta\,\mathsf{W}_{\theta}\pi(\theta). \tag{30}\]
We emphasize that everything is exact so far. We also remind ourselves that \(\mathbb{L}\) and \(X\) are functions of a POVM \(\Pi\) and an estimator \(\hat{\theta}\).
### New Bayesian bounds
To derive a lower bound for the Bayes risk \(\mathsf{R}[\Pi,\hat{\theta}]\), we follow the same line of logic used in Ref. [25]. This then gives the main result of the paper.
**Theorem 4** (Bayesian Nagaoka-Hayashi bound).: _For any POVM \(\Pi\) and estimator \(\hat{\theta}\), the following inequality holds for the Bayes risk._
\[\mathsf{R}[\Pi,\hat{\theta}]\geq\mathcal{C}_{\mathrm{NH}},\] \[\mathcal{C}_{\mathrm{NH}}:=\min_{\mathbb{L},X}\left\{\mathbb{Tr}\left\{\bar{\mathbb{S}}_{\mathsf{W}}\mathbb{L}\right\}-\mathbb{Tr}\left\{\bar{\mathsf{S}}_{\mathsf{W}}X^{\mathsf{T}}\right\}-\mathbb{Tr}\left\{X\bar{\mathsf{S}}_{\mathsf{W}}^{\mathsf{T}}\right\}\right\}+\overline{\mathsf{W}}. \tag{31}\]
_Here optimization is subject to the constraints: \(\forall jk,\mathbb{L}_{jk}=\mathbb{L}_{kj}\), \(\mathbb{L}_{jk}\) is Hermitian, \(X_{j}\) is Hermitian, and \(\mathbb{L}\geq XX^{\mathsf{T}}\)._
Proof.: Let \(\mathbb{L}_{*}\) and \(X_{*}\) be the optimal quantities calculated from an optimal POVM \(\Pi_{*}\) and an optimal estimator \(\hat{\theta}_{*}\). Then, the following chain of inequalities holds.
\[\mathsf{R}[\Pi,\hat{\theta}] \geq\mathsf{R}[\Pi_{*},\hat{\theta}_{*}]\] \[=\mathbb{Tr}\left\{\bar{\mathbb{S}}_{\mathsf{W}}\mathbb{L}_{*}\right\}-\mathbb{Tr}\left\{\bar{\mathsf{S}}_{\mathsf{W}}X_{*}^{\mathsf{T}}\right\}-\mathbb{Tr}\left\{X_{*}\bar{\mathsf{S}}_{\mathsf{W}}^{\mathsf{T}}\right\}+\overline{\mathsf{W}}\] \[\geq\min_{\mathbb{L},X}\left\{\mathbb{Tr}\left\{\bar{\mathbb{S}}_{\mathsf{W}}\mathbb{L}\right\}-\mathbb{Tr}\left\{\bar{\mathsf{S}}_{\mathsf{W}}X^{\mathsf{T}}\right\}-\mathbb{Tr}\left\{X\bar{\mathsf{S}}_{\mathsf{W}}^{\mathsf{T}}\right\}\right\}+\overline{\mathsf{W}}.\]
The first inequality follows by definition of the optimizer. The second line is due to Lemma 3. To get the third line, we apply Lemma 2. In the last line, optimization is subject to the constraints stated in the theorem in particular \(\mathbb{L}\geq XX^{\mathsf{T}}\).
The main difference from the Nagaoka-Hayashi bound in the point estimation setting [25] is that there is no local unbiasedness constraint. The next result shows that the proposed Bayesian Nagaoka-Hayashi bound can be computed efficiently as a semidefinite programming problem.
**Proposition 5**.: _The Bayesian Nagaoka-Hayashi bound can be computed by semidefinite programming._
Proof.: To put the Bayesian Nagaoka-Hayashi bound in the SDP form, we write the first three terms of Eq. (31) as
\[\mathbb{Tr}\left\{\bar{\mathbb{S}}_{\mathsf{W}}\mathbb{L}\right\}-\mathbb{Tr}\left\{\bar{\mathsf{S}}_{\mathsf{W}}X^{\mathsf{T}}\right\}-\mathbb{Tr}\left\{X\bar{\mathsf{S}}_{\mathsf{W}}^{\mathsf{T}}\right\}\\ =\mathbb{Tr}\left\{\left(\begin{array}{cc}\bar{\mathbb{S}}_{\mathsf{W}}&-\bar{\mathsf{S}}_{\mathsf{W}}\\ -\bar{\mathsf{S}}_{\mathsf{W}}^{\mathsf{T}}&\bar{\mathsf{S}}_{\mathsf{W}}^{\mathsf{T}}(\bar{\mathbb{S}}_{\mathsf{W}})^{-1}\bar{\mathsf{S}}_{\mathsf{W}}\end{array}\right)\left(\begin{array}{cc}\mathbb{L}&X\\ X^{\mathsf{T}}&1\end{array}\right)\right\}\\ -\mathbb{Tr}\left\{\bar{\mathsf{S}}_{\mathsf{W}}^{\mathsf{T}}(\bar{\mathbb{S}}_{\mathsf{W}})^{-1}\bar{\mathsf{S}}_{\mathsf{W}}\right\}. \tag{32}\]
Clearly, this is an semidefinite programming problem, since the constraint on the variable \(\mathbb{L}\geq XX^{\mathsf{T}}\) is equivalent to a positive semidefinite condition:
\[\left(\begin{array}{cc}\mathbb{L}&X\\ X^{\mathsf{T}}&1\end{array}\right)\geq 0.\]
Other constraints on \(\mathbb{L}\) and \(X\) can also be put in the trace condition for the variable (see Ref. [25]).
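As a hedged illustration of Proposition 5 (our own sketch, not code from the paper), the following CVXPY program evaluates the bound (31) for a toy two-parameter qubit model \(S_{\theta}=(I+\theta_{1}\sigma_{x}+\theta_{2}\sigma_{y})/2\) with a uniform prior on \([-a,a]^{2}\) and weight \(\mathsf{W}=\mathsf{I}\). The constraints \(\mathbb{L}_{jk}=\mathbb{L}_{kj}\), \(X_{j}\) Hermitian and \(\mathbb{L}\geq XX^{\mathsf{T}}\) are encoded through a single Hermitian positive semidefinite block variable.

```python
# Illustrative CVXPY sketch of the Bayesian Nagaoka-Hayashi SDP (31) for a toy
# two-parameter qubit model; all model choices are placeholders.
import numpy as np
import cvxpy as cp

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

n, d = 2, 2                 # number of parameters, Hilbert-space dimension
a = 0.3                     # prior: uniform on [-a, a]^2 (zero mean)
var = a**2 / 3              # prior variance of each parameter

S_bar = I2 / 2                                   # averaged state (14)
S_bar_j = [var * sx / 2, var * sy / 2]           # first moments (15)
W = np.eye(n)
SW_op = np.kron(W, S_bar)                        # W (x) S_bar, Eq. (35)
SW_vec = [sum(W[j, k] * S_bar_j[k] for k in range(n)) for j in range(n)]
W_bar = n * var                                  # averaged weighted theta^2

# Joint Hermitian PSD variable M = [[L, X], [X^T, 1]]; M >= 0 encodes L >= X X^T
M = cp.Variable((n * d + d, n * d + d), hermitian=True)
L = M[: n * d, : n * d]
Xs = [M[j * d:(j + 1) * d, n * d:] for j in range(n)]

constraints = [M >> 0, M[n * d:, n * d:] == np.eye(d)]
for j in range(n):
    constraints.append(Xs[j] == Xs[j].H)          # each X_j Hermitian
    for k in range(j + 1, n):
        # block symmetry L_jk = L_kj (with M Hermitian this also makes L_jk Hermitian)
        constraints.append(M[j * d:(j + 1) * d, k * d:(k + 1) * d]
                           == M[k * d:(k + 1) * d, j * d:(j + 1) * d])

obj = cp.real(cp.trace(SW_op @ L)) + W_bar
for j in range(n):
    obj -= 2 * cp.real(cp.trace(SW_vec[j] @ Xs[j]))

prob = cp.Problem(cp.Minimize(obj), constraints)
prob.solve()
print("Bayesian Nagaoka-Hayashi bound:", prob.value)
```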
When estimating two parameters, the Bayesian Nagaoka-Hayashi bound (31) reduces to an optimization problem over only the variable \(X\). This lower bound will be called the Bayesian Nagaoka bound.
**Theorem 6** (Bayesian Nagaoka bound).: _When the number of parameters is equal to two, the Bayesian Nagaoka-Hayashi bound (31) is written as follows._
\[\mathcal{C}_{\mathrm{N}}:=\min_{X=(X_{1},X_{2})}\left\{\mathrm{Tr}\left\{\mathrm{sym}_{+}\left(\sqrt{\bar{\mathbb{S}}_{\mathsf{W}}}XX^{\intercal}\sqrt{\bar{\mathbb{S}}_{\mathsf{W}}}\right)\right\}+\mathrm{TrAbs}\left\{\mathrm{sym}_{-}\left(\sqrt{\bar{\mathbb{S}}_{\mathsf{W}}}XX^{\intercal}\sqrt{\bar{\mathbb{S}}_{\mathsf{W}}}\right)\right\}-\mathbb{Tr}\left\{\bar{\mathsf{S}}_{\mathsf{W}}X^{\intercal}\right\}-\mathbb{Tr}\left\{X\bar{\mathsf{S}}_{\mathsf{W}}^{\intercal}\right\}\right\}+\overline{\mathsf{W}}. \tag{33}\]
_Here optimization is subject to: \(\forall j,\,X_{j}\) is Hermitian._
In this theorem, \(\mathrm{sym}_{\pm}(\mathbb{A}):=\frac{1}{2}(\mathbb{A}\pm\mathbb{A}^{\intercal})\) denotes the symmetrized (anti-symmetrized) matrix with respect to the first Hilbert space of the extended Hilbert space \(\mathbb{H}=\mathbb{C}^{n}\otimes\mathcal{H}\), i.e., for \(\mathbb{A}=[\mathbb{A}_{jk}]\in\mathbb{H}\), \([\mathrm{sym}_{\pm}(\mathbb{A})]_{jk}=(\mathbb{A}_{jk}\pm\mathbb{A}_{kj})/2\). \(\mathrm{TrAbs}\left\{\mathbb{A}\right\}\) denotes the sum of the absolute values for the eigenvalues of \(\mathbb{A}\).
Proof.: This theorem is proven by using Lemma 12 in Appendix.
We next turn our attention to the Bayesian Holevo bound which is in general lower than the Bayesian Nagaoka-Hayashi bound.
**Theorem 7** (Bayesian Holevo bound).: \[\mathcal{C}_{\mathrm{NH}}\geq\mathcal{C}_{\mathrm{H}}:=\min_{X: \mathrm{Hermitian}}\left\{\mathsf{Tr}\left\{\mathrm{Re}\,\mathsf{Z}_{\mathsf{W }}[X]\right\}\right.\\ \left.\qquad\qquad+\mathsf{TrAbs}\left\{\mathrm{Im}\,\mathsf{Z}_ {\mathsf{W}}[X]\right\}\right.\\ \left.\qquad-\mathrm{Tr}\left\{\bar{\mathbb{S}}_{\mathsf{W}}^{ \intercal}X\right\}-\mathrm{Tr}\left\{X^{\intercal}\bar{\mathbb{S}}_{\mathsf{W }}\right\}\right.\big{\}}+\overline{\mathsf{W}},\] (34)
_where \(\mathsf{Z}_{\mathsf{W}}[X]\) is an \(n\times n\) Hermitian matrix defined by_
\[\mathsf{Z}_{\mathsf{W}}[X]:=\mathrm{Tr}\left\{\bar{\mathbb{S}}_{\mathsf{W}}XX ^{\intercal}\right\}.\]
In the above expression, \(\mathrm{Re}\,\mathsf{A}\) (\(\mathrm{Im}\,\mathsf{A}\)) denotes the component-wise real (imaginary) part of a matrix \(\mathsf{A}\in\mathbb{C}^{n\times n}\), and \(\mathsf{TrAbs}\left\{\mathsf{A}\right\}\) denotes the sum of the absolute values for the eigenvalues of \(\mathsf{A}\).
In the Bayesian Holevo bound, optimization is subject to each \(X_{j}\) being Hermitian. This is in contrast to point estimation under the local unbiasedness condition.
Proof.: This theorem is due to Lemma 13 in Appendix.
### Parameter independent weight matrix
To make a comparison to existing lower bounds in the literature, we set the weight matrix to be parameter independent. Then, quantities (28) and (29) reduce to
\[\bar{\mathbb{S}}_{\mathsf{W}} =\mathsf{W}\otimes\bar{S}, \tag{35}\] \[\bar{\mathsf{S}}_{\mathsf{W},j} =\sum_{k}\mathsf{W}_{jk}\bar{S}_{k}, \tag{36}\]
where \(\bar{S}=\int_{\Theta}\,d\theta\pi(\theta)S_{\theta}\) and \(\bar{S}_{j}=\int_{\Theta}\,d\theta\pi(\theta)\theta_{j}S_{\theta}\) as before.
In this case, \(\bar{\mathbb{S}}_{\mathsf{W}}\) exhibits a tensor product structure. Then, we can apply Lemma 11 in Appendix to simplify the Bayesian Holevo bound as follows.
**Corollary 8**.: _For a parameter independent weight matrix, the Bayesian Holevo bound is_
\[\mathcal{C}_{\mathrm{H}}:=\min_{X:\mathrm{Hermitian}}\left\{ \mathsf{Tr}\left\{\mathsf{W}\mathrm{Re}\,\mathsf{Z}_{\bar{S}}[X]\right\}+ \mathsf{TrAbs}\left\{\mathsf{W}\mathrm{Im}\,\mathsf{Z}_{\bar{S}}[X]\right\} \right.\\ \left.\qquad-\mathsf{Tr}\left\{\mathsf{W}\mathsf{H}_{S}[X]\right\} -\mathsf{Tr}\left\{\mathsf{W}\mathsf{H}_{S}[X]^{\intercal}\right\}\right. +\overline{\mathsf{W}}, \tag{37}\]
_where \(\mathsf{Z}_{\bar{S},jk}[X]:=\mathrm{Tr}\left\{\bar{S}X_{k}X_{j}\right\}\) and \(\mathsf{H}_{\bar{S},jk}[X]:=\mathrm{Tr}\left\{\bar{S}_{j}X_{k}\right\}\), respectively._
When the number of parameters is two, the Bayesian Nagaoka bound can take the following explicit form by applying Lemma 14 in Appendix.
**Corollary 9**.: _For a two-parameter estimation with a parameter independent weight matrix, the Bayesian Nagaoka bound is expressed as_
\[\mathcal{C}_{\mathrm{N}}:=\min_{X:\mathrm{Hermitian}}\left\{ \mathsf{Tr}\left\{\mathsf{W}\mathrm{Re}\,\mathsf{Z}_{\bar{S}}[X]\right\}\right.\\ \left.\qquad\qquad+\sqrt{\mathrm{Det}\left\{\mathsf{W}\right\}} \,\mathrm{TrAbs}\left\{\bar{S}(X_{1}X_{2}-X_{2}X_{1})\right\}\right.\\ \left.\qquad-\mathsf{Tr}\left\{\mathsf{W}\mathsf{H}_{S}[X] \right\}-\mathsf{Tr}\left\{\mathsf{W}\mathsf{H}_{S}[X]^{\intercal}\right\} \right.\big{\}}+\overline{\mathsf{W}}. \tag{38}\]
### Relation to the generalized Personick bound
We claim the proposed Bayesian Nagaoka-Hayashi bound is tighter than the generalized Personick bound. To show this we have the following statement.
**Proposition 10**.: \(\mathcal{C}_{\mathrm{H}}\geq\mathcal{C}_{\mathrm{PRD}}\)_, and hence, \(\mathcal{C}_{\mathrm{NH}}\geq\mathcal{C}_{\mathrm{PRD}}\)._
Proof.: First, we set a \(\theta\)-independent weight matrix \(\mathsf{W}\) as the rank-1 projector \(\mathsf{W}=\mathsf{c}\mathsf{c}^{\intercal}\) with \(\mathsf{c}\in\mathbb{R}^{n}\). Next, we ignore the second term in the minimization (34) to obtain a lower bound for \(\mathcal{C}_{\mathrm{H}}\). This gives the desired result.
\[\mathcal{C}_{\mathrm{H}} \geq\min_{X}\big\{\sum_{jk}\mathsf{c}_{j}\mathsf{c}_{k}\big(\tfrac{1}{2}\mathrm{Tr}\left\{\bar{S}(X_{j}X_{k}+X_{k}X_{j})\right\}-\mathrm{Tr}\left\{\bar{S}_{j}X_{k}\right\}-\mathrm{Tr}\left\{X_{j}\bar{S}_{k}\right\}\big)\big\}+\mathsf{c}^{\intercal}\overline{\theta\theta^{\intercal}}\mathsf{c}\] \[=\min_{X}\{\langle X_{\mathsf{c}},X_{\mathsf{c}}\rangle_{\bar{S}}-\langle X_{\mathsf{c}},L_{\mathsf{c}}\rangle_{\bar{S}}-\langle L_{\mathsf{c}},X_{\mathsf{c}}\rangle_{\bar{S}}\}+\mathsf{c}^{\intercal}\overline{\theta\theta^{\intercal}}\mathsf{c}\] \[=\min_{X}\{\langle X_{\mathsf{c}}-L_{\mathsf{c}},X_{\mathsf{c}}-L_{\mathsf{c}}\rangle_{\bar{S}}\}-\langle L_{\mathsf{c}},L_{\mathsf{c}}\rangle_{\bar{S}}+\mathsf{c}^{\intercal}\overline{\theta\theta^{\intercal}}\mathsf{c}\] \[=\mathsf{c}^{\intercal}\overline{\theta\theta^{\intercal}}\mathsf{c}-\langle L_{\mathsf{c}},L_{\mathsf{c}}\rangle_{\bar{S}}\] \[=\mathsf{c}^{\intercal}(\overline{\theta\theta^{\intercal}}-K)\mathsf{c},\]
where we set \(X_{\mathsf{c}}=\sum_{j}\mathsf{c}_{j}X_{j}\) and \(L_{\mathsf{c}}=\sum_{j}\mathsf{c}_{j}L_{j}\). Since this is true for any choice of \(\mathsf{c}\), we obtain the relation \(\mathcal{C}_{\mathrm{H}}\geq\mathcal{C}_{\mathrm{PRD}}\).
## IV Conclusion and outlook
In summary we have proposed a new lower bound for the Bayes risk, called the Bayesian Nagaoka-Hayashi
bound. This bound can in turn be bounded from below by a Bayesian version of the Holevo bound. We showed that our lower bounds are tighter than the existing lower bound proposed by Rubio and Dunningham. The proposed Bayesian lower bounds are based on the idea developed in the previous publication [25].
In this paper, we have only derived bounds, and hence there are many future directions. Firstly, we need to analyze the achievability of the newly proposed bounds in both the asymptotic and non-asymptotic regimes. For example, Ref. [34] investigated the first-order asymptotics of the Bayes risk. Secondly, relations to other Bayesian bounds need to be examined. Thirdly, an extension to random parameters in the presence of nuisance parameters will be important. Non-random parameter estimation with nuisance parameters has been investigated recently [35; 36]. Extensions of these formulations to the Bayesian setting will be presented in future work.
## Acknowledgment
The work is partly supported by JSPS KAKENHI Grant Number JP21K11749 and JP21K04919. The author would like to thank Mr. L. Conlon and Dr. S.M. Assad for collaboration at the early stage of the project.
## Appendix A Lemmas
We list necessary lemmas in this appendix. These lemmas can be obtained by extending known results [21; 22; 24], and only sketches for proofs are given.
Consider a tensor product Hilbert space \(\mathcal{H}=\mathcal{H}_{1}\otimes\mathcal{H}_{2}\) with each dimension \(d_{j}\) (\(j=1,2\)). Let \(\mathcal{L}(\mathcal{H})\), \(\mathcal{L}_{+}(\mathcal{H})\), and \(\mathcal{L}_{h}(\mathcal{H})\) be the set of linear operators, positive semidefinite, and hermitian matrices on \(\mathcal{H}\), respectively. We denote by \(\mathcal{L}_{\mathrm{sym}_{+}}(\mathcal{H})\) the set of symmetric matrices under the partial transpose \((\cdot)^{T_{1}}\) with respect to the subsystem \(\mathcal{H}_{1}\). \(\mathrm{sym}_{\pm}(L):=\frac{1}{2}(L\pm L^{T_{1}})\) denotes the symmetrized (antisymmetrized) matrix of \(L\in\mathcal{L}(\mathcal{H})\) with respect to \(\mathcal{H}_{1}\). An operator \(L\in\mathcal{L}(\mathcal{H})\) is also expressed as a matrix-valued matrix in the form: \(L=[L_{jk}]\) where each component \(L_{jk}\) is a matrix on \(\mathcal{H}_{2}\). This representation is unique if we fix the basis of \(\mathcal{H}_{1}\). With this representation, \([L^{T_{1}}]_{jk}=[L_{kj}]\) and \([\mathrm{sym}_{\pm}(L)]_{jk}=(L_{jk}\pm L_{kj})/2\).
Given a strictly positive matrix \(S\) and a positive semidefinite matrix \(X\) on \(\mathcal{H}\), we define the following optimization problem.
\[H(S,X):=\min_{L\in\mathcal{L}_{\mathrm{sym}_{1}}(\mathcal{H})}\{\mathrm{Tr}\, \{SL\}\mid L\geq X\}. \tag{10}\]
We have alternative expressions and lower bounds for \(H(S,X)\) which are given in lemmas below.
**Lemma 11**.: \[H(S,X)=\mathrm{Tr}\left\{\mathrm{sym}_{+}(\sqrt{S}X\sqrt{S})\right\}\] \[+\min_{V\in\mathcal{L}_{\mathrm{sym}_{1}}(\mathcal{H})}\{\mathrm{ Tr}\,\{V\}\mid V\geq\mathrm{sym}_{-}(\sqrt{S}X\sqrt{S})\}.\] (11)
Proof.: Write the constraint as \(L\geq X\Leftrightarrow\sqrt{S}L\sqrt{S}\geq\sqrt{S}X\sqrt{S}=\mathrm{sym}_{+}( \sqrt{S}X\sqrt{S})+\mathrm{sym}_{-}(\sqrt{S}X\sqrt{S})\), we note that the first term is attained because of \(\mathrm{sym}_{+}(\sqrt{S}X\sqrt{S})\in\mathcal{L}_{T_{1}}(\mathcal{H})\). We substitute \(L=S^{-1/2}\mathrm{sym}_{+}(\sqrt{S}X\sqrt{S})S^{-1/2}+S^{-1/2}VS^{-1/2}\) to get
\[\mathrm{Tr}\,\{SL\}=\mathrm{Tr}\left\{\mathrm{sym}_{+}(\sqrt{S}X\sqrt{S}) \right\}+\mathrm{Tr}\,\{V\}\,,\]
where optimization is subject to \(V\geq\mathrm{sym}_{-}(\sqrt{S}X\sqrt{S})\).
**Lemma 12**.: _When the dimension of the Hilbert space \(\mathcal{H}_{1}\) is two (dim\(\mathcal{H}_{1}=2\)), the above optimization is carried out as_
\[H(S,X) =N(S,X),\] \[N(S,X): =\mathrm{Tr}\left\{\mathrm{sym}_{+}(\sqrt{S}X\sqrt{S})\right\}\] \[\qquad+\mathrm{TrAbs}\left\{\mathrm{Asym}(\sqrt{S}X\sqrt{S}) \right\}, \tag{12}\]
_where \(\mathrm{Asym}(L):=[L]_{12}-[L]_{21}\) denotes the difference between the off-diagonal elements of \(L\in\mathcal{L}(\mathcal{H})\) with respect to the index of the first Hilbert space, and \(\mathrm{TrAbs}\,\{L\}\) denotes the sum of absolute values for the eigenvalues of the operator \(L\in\mathcal{L}(\mathcal{H})\)._
Proof.: We note that the matrix \(\sqrt{S}X\sqrt{S}\) is positive semidefinite and hence it is Hermitian. This means that its antisymmetrized part is represented by
\[\left(\begin{array}{cc}0&Y\\ -Y&0\end{array}\right),\]
where \(Y:=([\sqrt{S}X\sqrt{S}]_{12}-[\sqrt{S}X\sqrt{S}]_{21})/2=\mathrm{Asym}(\sqrt{ S}X\sqrt{S})/2\). Since \(Y^{\dagger}=-Y\), \(iY\) is an Hermitian matrix on \(\mathcal{H}_{2}\). Then, it follows that \(iY\) has real eigenvalues. When \(\dim\mathcal{H}_{1}=2\), the matrix inequality \(V\geq\mathrm{sym}_{-}(\sqrt{S}X\sqrt{S})\) is equivalent to \(V\geq\pm\mathrm{sym}_{-}(\sqrt{S}X\sqrt{S})\). This is shown by taking Hermitian conjugate of both sides of the inequality. We can then apply Lemma in Holevo's book ([21, Lemma 6.61]) to get
\[\min_{V\in\mathcal{L}_{h}(\mathcal{H})}\{\mathrm{Tr}\,\{V\}\mid V\geq \pm\,\mathrm{sym}_{-}(\sqrt{S}X\sqrt{S})\}\] \[=\mathrm{TrAbs}\left\{\mathrm{sym}_{-}(\sqrt{S}X\sqrt{S})\right\}\] \[=\mathrm{TrAbs}\left\{\mathrm{Asym}(\sqrt{S}X\sqrt{S})\right\}.\]
The last equality is due to \(\mathrm{TrAbs}\left\{\left(\begin{array}{cc}0&Y\\ -Y&0\end{array}\right)\right\}=2\mathrm{TrAbs}\,\{Y\}\).
We denote the partial trace of \(V\) with respect to the second Hilbert space \(\mathcal{H}_{2}\) by \(v=\mathrm{Tr}_{2}\{V\}\). With this notation, we can get a lower bound in Lemma 11.
**Lemma 13**.: _For any \(S,X\), the next lower bound holds for \(H(S,X)\)._
\[H(S,X) \geq h(S,X),\] \[h(S,X) :=\mathrm{Tr}\left\{\mathrm{sym}_{+}(\sqrt{S}X\sqrt{S})\right\}\] \[+\min_{v\in\mathcal{L}_{h}(\mathcal{H}_{1})}\{\mathrm{Tr}\left\{ v\right\}\mid v\geq\mathrm{Tr}_{2}\{\mathrm{sym}_{-}(\sqrt{S}X\sqrt{S})\}\},\]
_where \(Z_{S}(X):=\mathrm{Tr}_{2}\{\sqrt{S}X\sqrt{S}\}\in\mathcal{L}_{+}(\mathcal{H} _{1})\). Furthermore, \(h(S,X)\) is expressed as_
\[h(S,X)=\mathrm{Tr}\left\{\mathrm{Re}\,Z_{S}(X)\right\}+\mathrm{TrAbs}\left\{ \mathrm{Im}\,Z_{S}(X)\right\}.\]
Proof.: The first inequality is true since any matrix \(V\) under the constraint in \(H(S,X)\) satisfies the condition \(v\geq\mathrm{Tr}_{2}\{\mathrm{sym}_{-}(\sqrt{S}X\sqrt{S})\}\). This means that optimization for \(h(S,X)\) is carried out in a larger space. The equivalent expression for \(h(S,X)\) holds from the lemma [21, Lemma 6.61], since \(v\geq\mathrm{Tr}_{2}\{\mathrm{sym}_{-}(\sqrt{S}X\sqrt{S})\}\) implies \(v\geq\pm\mathrm{Tr}_{2}\{\mathrm{sym}_{-}(\sqrt{S}X\sqrt{S})\}\) and \(\mathrm{Tr}_{2}\{\mathrm{sym}_{-}(\sqrt{S}X\sqrt{S})\}=i\mathrm{Im}\,Z_{S}(X)\).
When \(S\) admits a tensor product matrix \(S=S_{1}\otimes S_{2}\) and \(\dim\mathcal{H}_{1}=2\), we have an explicit representation for Eq. (A3) in Lemma 12 as follows.
**Lemma 14**.: \[N(S,X)=\mathrm{Tr}\left\{\sqrt{S}\mathrm{sym}_{+}(X)\sqrt{S}\right\}\] \[\qquad+\sqrt{\mathrm{Det}\left\{S_{1}\right\}}\mathrm{TrAbs}\left\{ \sqrt{S_{2}}(X_{12}-X_{21})\sqrt{S_{2}}\right\}.\]
Proof.: If \(S=S_{1}\otimes S_{2}\) holds, \(\mathrm{sym}_{\pm}(\sqrt{S}X\sqrt{S})=\sqrt{S}\mathrm{sym}_{\pm}(X)\sqrt{S}\) is true for all \(X\in\mathcal{L}_{+}(\mathcal{H})\). Then the lemma is immediate after direct calculations.
|
2309.16746 | Implicit Gaussian process representation of vector fields over arbitrary
latent manifolds | Gaussian processes (GPs) are popular nonparametric statistical models for
learning unknown functions and quantifying the spatiotemporal uncertainty in
data. Recent works have extended GPs to model scalar and vector quantities
distributed over non-Euclidean domains, including smooth manifolds appearing in
numerous fields such as computer vision, dynamical systems, and neuroscience.
However, these approaches assume that the manifold underlying the data is
known, limiting their practical utility. We introduce RVGP, a generalisation of
GPs for learning vector signals over latent Riemannian manifolds. Our method
uses positional encoding with eigenfunctions of the connection Laplacian,
associated with the tangent bundle, readily derived from common graph-based
approximation of data. We demonstrate that RVGP possesses global regularity
over the manifold, which allows it to super-resolve and inpaint vector fields
while preserving singularities. Furthermore, we use RVGP to reconstruct
high-density neural dynamics derived from low-density EEG recordings in healthy
individuals and Alzheimer's patients. We show that vector field singularities
are important disease markers and that their reconstruction leads to a
comparable classification accuracy of disease states to high-density
recordings. Thus, our method overcomes a significant practical limitation in
experimental and clinical applications. | Robert L. Peach, Matteo Vinao-Carl, Nir Grossman, Michael David, Emma Mallas, David Sharp, Paresh A. Malhotra, Pierre Vandergheynst, Adam Gosztolai | 2023-09-28T16:02:39Z | http://arxiv.org/abs/2309.16746v2 | # Implicit Gaussian process representation of vector fields over arbitrary latent manifolds
###### Abstract
Gaussian processes (GPs) are popular nonparametric statistical models for learning unknown functions and quantifying the spatiotemporal uncertainty in data. Recent works have extended GPs to model scalar and vector quantities distributed over non-Euclidean domains, including smooth manifolds appearing in numerous fields such as computer vision, dynamical systems, and neuroscience. However, these approaches assume that the manifold underlying the data is known, limiting their practical utility. We introduce RVGP, a generalisation of GPs for learning vector signals over latent Riemannian manifolds. Our method uses positional encoding with eigenfunctions of the connection Laplacian, associated with the tangent bundle, readily derived from common graph-based approximation of data. We demonstrate that RVGP possesses global regularity over the manifold, which allows it to super-resolve and inpaint vector fields while preserving singularities. Furthermore, we use RVGP to reconstruct high-density neural dynamics derived from low-density EEG recordings in healthy individuals and Alzheimer's patients. We show that vector field singularities are important disease markers and that their reconstruction leads to a comparable classification accuracy of disease states to high-density recordings. Thus, our method overcomes a significant practical limitation in experimental and clinical applications.
## 1 Introduction
A cornerstone of statistical learning theory is the _manifold assumption_, which posits that high-dimensional datasets are often distributed over low-dimensional smooth manifolds - topological spaces characterised by locally Euclidean structure. For instance, images of an object from varying camera angles or diverse renditions of a written letter can all be viewed as samples from a smooth manifold (Tenenbaum, 2000). Further, the common approximation of data by a proximity graph, based on a notion of affinity or similarity between data points, induces a Riemannian structure that is instrumental in geometric learning theories. For example, the analogy between the graph Laplacian matrix and the Laplace-Beltrami operator associated with a Riemannian manifold (Chung, 1997) has been widely exploited in manifold learning (Belkin and Niyogi, 2003; Coifman et al., 2005), shape analysis (Taubin, 1995), graph signal processing (Ortega et al., 2018), discrete geometry (Gosztolai and Arnaudon, 2021), graph neural networks (Defferrard et al., 2016; Kipf and Welling, 2017; Peach et al., 2020) and Gaussian processes (Borovitskiy et al., 2020, 2021).
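As a minimal illustration of this graph-based approximation (an aside, not part of the cited works), the following Python sketch builds a k-nearest-neighbour proximity graph from points sampled near a latent circle and takes the low-frequency eigenvectors of the normalised graph Laplacian as positional encodings.

```python
# Sketch: proximity graph + graph Laplacian eigenvectors as positional encodings
# for data on a latent manifold (here a noisy circle); illustrative parameters.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import laplacian

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 300)
pts = np.column_stack([np.cos(t), np.sin(t)]) + 0.01 * rng.normal(size=(300, 2))

A = kneighbors_graph(pts, n_neighbors=10, mode="connectivity")
A = 0.5 * (A + A.T)                         # symmetrize the k-NN graph
L = laplacian(A, normed=True)               # normalized graph Laplacian

evals, evecs = np.linalg.eigh(L.toarray())  # small graph: dense eigendecomposition
pos_enc = evecs[:, 1:6]                     # drop the trivial constant eigenvector
print(evals[:6])
```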
However, many datasets contain a richer structure comprising a smoothly varying vector field over the manifold. Prime examples are dissipative dynamical systems where, after an initial transient phase, trajectories converge to a manifold in state space (Fefferman et al., 2016). Likewise, in neuroscience, smooth vector fields arise from the firing rate trajectories of neural populations evolving over neural manifolds, which is instrumental in neural information coding (Sussillo and Barak, 2013;
Khona & Fiete, 2022; Gardner et al., 2022). Smooth vector fields are also pertinent in areas like gene expression profiling during development (La Manno et al., 2018) and multireference rotational alignment in cryo-electron microscopy (Singer & Wu, 2012). The breadth of these applications emphasises the need to generalise current learning paradigms to capture both the manifold structure and its associated vector field.
To address this need, a promising avenue is to consider a hierarchy of Laplacians that act on the tensor bundles of a manifold with increasing order. The first member of this hierarchy is the Laplace-Beltrami operator, which acts on rank-\(0\) tensors, i.e., scalar signals. Similarly, higher-order signals, including vector fields, have associated Laplacian operators, which can encode their spatial regularity. Among these, the connection Laplacian (Barbero et al., 2022), defined on vector bundles, and the related sheaf Laplacian (Knoppel et al., 2013; Bodnar et al., 2022), which allows the vector spaces on nodes to have different dimensions, are emerging as leading operators in machine learning (Bronstein et al., 2017; Gosztolai et al., 2023). Both of these operators are related to heat diffusion of higher-order signals over manifolds (Singer & Wu, 2012; Sharp et al., 2019) and thus intrinsically encode the signals' smoothness. The connection Laplacian is particularly appealing because it can be constructed, even when the manifold is unknown, from graph-based data descriptions (Singer & Wu, 2012; Budninskiy et al., 2019). We, therefore, asked how one could use this discrete approximation to derive continuous functions that implicitly represent the vector field over the manifold. Such a representation could take account of the global regularity of the vector field to reconstruct intricate vector field structures lost in data sampling.
Gaussian processes - a renowned family of nonparametric stochastic processes - offer an excellent framework for learning implicit functional descriptions of data. While GPs are traditionally defined on Euclidean spaces (Rasmussen & Williams, 2006), several studies have extended them to Riemannian manifolds. However, these studies have either considered scalar signals (Wilson et al., 2021; Mallasto & Feragen, 2018; Mallasto et al., 2020; Borovitskiy et al., 2020; Boser et al., 2020; Jensen et al., 2020) or vector signals Hutchinson et al. (2021) but only in cases where the underlying manifold is known and is analytically tractable, such as spheres and tori. In this work, we generalise GPs to vector fields on arbitrary latent manifolds, which can only be approximated based on local similarities between data points, making them applicable in practice to real-world datasets.
Our contributions are as follows. (i) We generalise GPs to vector-valued data using the connection Laplacian operator, assuming that the data originates from a stationary stochastic process. (ii) We show that the resulting Riemannian manifold vector field GP (RVGP) method encodes the manifold and vector field's smoothness as inductive biases, enabling out-of-sample predictions from sparse or obscured data. (iii) To underscore the practical implications of RVGP, we apply it to electroencephalography (EEG) recordings from both healthy individuals and Alzheimer's disease patients. The global spatial regularity learnt by our method significantly outperforms the state-of-the-art approaches for reconstructing high-density electrical fields from low-density EEG arrays. This enables our method to better resolve vector field singularities and dramatically increase the classification power of disease states.
Taken together, our technique opens the door to a differential geometric formulation of kernel-based operators and demonstrates a direct relevance for fundamental and clinical neuroscience.
## 2 Comparison with related works
Let us begin by carefully comparing our method to related works in the literature.
**Implicit neural representations (INRs).** There has been an increasing interest in defining signals implicitly as parametrised functions from an input domain to the space of the signal. In Euclidean spaces, INRs have been a breakthrough in replacing pixel-wise descriptions of images or voxel-wise descriptions of 3D shapes by neural networks (Sitzmann et al., 2020; Lipman, 2021; Gosztolai et al., 2021; Koestler et al., 2022; Mildenhall et al., 2020). INRs have also been extended to signals over manifolds by graph Laplacian positional encoding (Grattarola & Vandergheynst, 2022). However, INRs are data-intensive due to the lack of explicit spatial regularisation. Further, they have not been extended to handle vector-valued data.
**Gaussian processes over specific Riemannian manifolds.** Several closely related works in the GP literature have provided various definitions of GPs on Riemannian manifolds. One line of works defined GPs as manifold-valued processes \(f:\mathbb{X}\to\mathcal{M}\) (Mallasto & Feragen, 2018; Mallasto et al., 2020) by using the exponential map of the manifold to perform regression in the tangent space. However, these works require that the manifold \(\mathcal{M}\) be known in order to define the exponential map. More notable are studies which define GPs as scalar-valued functions \(f:\mathbb{X}\to\mathbb{R}\). For example, considering data points as samples from a stationary stochastic process, the domain \(\mathbb{X}\) can be defined based on positional encoding using eigenfunctions of the Laplace-Beltrami operator (Solin & Sarkka, 2020; Borovitskiy et al., 2020) or the graph Laplacian (Borovitskiy et al., 2021). However, these works cannot be directly applied to vector-valued signals by treating vector entries as scalar channels. This is because these channels are generally not independent but related through the curvature of the manifold. To address this gap, Hutchinson et al. (2021) defined GPs as functions \(f:\mathbb{X}\to\mathcal{T}\mathcal{M}\) over the tangent bundle \(\mathcal{T}\mathcal{M}\) by first mapping the manifold isometrically to Euclidean space and then using a multi-input, multi-output GP to learn the projected signal. However, Hutchinson et al. (2021) have focused on cases where the manifold is explicitly known, and its mapping to Euclidean space can be explicitly defined. Here, we are specifically interested in the case where the manifold is _unknown_ - a common scenario in many scientific domains. To achieve this, we generalise the works of Borovitskiy et al. (2020, 2021) to vector fields on Riemannian manifolds using the connection Laplacian operator and its eigenvectors as positional encoding.
**Gaussian processes in neuroscience.** GPs have also been widely used in the neuroscience literature, particularly combined with linear dimensionality reduction to discover latent factors underlying neural dynamics. One popular method is Gaussian Process Factor Analysis (GPFA) (Yu et al., 2009), which defines GPs in the temporal domain and does not encode spatial regularity over ensembles of trajectories as inductive bias. GPFA has been used to define time-warping functions to align neural responses across trials, i.e., individual presentation of a stimulus or task (Duncker & Sahani, 2018). Likewise, GPFA has been extended to non-Euclidean spaces by simultaneously identifying the latent dynamics and the manifold over which it evolves (Jensen et al., 2020). However, this model is limited to manifolds built from \(SO(3)\) symmetry groups and requires them to be enumerated to perform Bayesian model selection. We instead seek a constructive framework that requires no assumption on the manifold topology.
## 3 Background
Here, we introduce the function-space view of GPs in Euclidean domains and their stochastic partial differential equation formulation developed for scalar-valued signals, which will then allow us to extend them to vector-valued signals.
### Gaussian processes in Euclidean spaces
A GP is a stochastic process \(f:\mathbb{X}\to\mathbb{R}^{d}\) defined over a set \(\mathbb{X}\) such that for any finite set of samples \(X=(\mathbf{x}_{1},\dots,\mathbf{x}_{n})\in\mathbb{X}^{n}\), the random variables \((f(\mathbf{x}_{1}),\dots,f(\mathbf{x}_{n}))\in\mathbb{R}^{n\times d}\) are jointly multivariate Gaussian, \(\mathcal{N}(\mathbf{x};\mathbf{\mu},\mathbf{K})\), with mean vector \(\mathbf{\mu}=m(X)\) and covariance matrix \(\mathbf{K}=k(X,X)\). Consequently, a GP is fully characterised by its mean function \(m(\mathbf{x}):=\mathbb{E}(f(\mathbf{x}))\) and covariance function \(k(\mathbf{x},\mathbf{x}^{\prime}):=\mathrm{Cov}(f(\mathbf{x}),f(\mathbf{x}^{\prime}))\), also known as the kernel, and denoted as \(f\sim\mathcal{GP}(m(\mathbf{x}),k(\mathbf{x},\mathbf{x}^{\prime}))\). It is typical to assume that \(m(\mathbf{x})=0\), which does not reduce the expressive power of GPs (Rasmussen & Williams, 2006).
One may obtain the best-fit GP to a set of training data points \((X,\mathbf{y})=\{(\mathbf{x}_{i},\mathbf{y}_{i})|i=1,...,n\}\) by Bayesian linear regression, assuming that the observations \(\mathbf{y}_{i}\) differ from the predictions of \(f\) by some Gaussian measurement noise, i.e., \(\mathbf{y}_{i}=f(\mathbf{x}_{i})+\epsilon_{i}\), where \(f\sim\mathcal{GP}(0,k)\) and \(\epsilon_{i}\sim\mathcal{N}(0,\sigma_{n}^{2})\) for some standard deviation \(\sigma_{n}\). Then, the distribution of training outputs \(\mathbf{y}\) and model outputs \(\mathbf{f}_{*}:=f(\mathbf{x}_{*})\) at test points \(\mathbf{x}_{*}\), is jointly Gaussian, namely
\[\begin{bmatrix}\mathbf{y}\\ \mathbf{f}_{*}\end{bmatrix}\sim\mathcal{N}\left(0,\begin{bmatrix}k(X,X)+\sigma_{ n}^{2}\mathbf{I}&k(X,X_{*})\\ k(X_{*},X)&k(X_{*},X_{*})\end{bmatrix}\right). \tag{1}\]
To generate predictions for a test set \(X_{*}=(\mathbf{x}_{1},\dots,\mathbf{x}_{n^{*}})\), one can derive the posterior predictive distribution conditioned on the training set (Rasmussen & Williams, 2006), namely
\(\mathcal{N}(\mathbf{\mu}_{|\mathbf{y}},\mathbf{K}_{|\mathbf{y}})\) whose mean vector and covariance matrix are given by the expressions
\[\mathbf{\mu}_{|\mathbf{y}}(\mathbf{x}_{*}) =k(X_{*},X)(k(X,X)+\sigma_{n}^{2}\mathbf{I})^{-1}\mathbf{y}, \tag{2}\] \[\mathbf{K}_{|\mathbf{y}}(\mathbf{x}_{*},\mathbf{x}_{*}) =k(\mathbf{x}_{*},\mathbf{x}_{*})-k(X_{*},X)(k(X,X)+\sigma_{n}^{2}\mathbf{I})^{-1}k(X,X_{*}). \tag{3}\]
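For readers who want to connect Eqs. (2)-(3) to code, a minimal numpy sketch of GP posterior prediction is given below; the function `gp_posterior` and its arguments are illustrative names introduced here, not part of any released implementation.

```python
import numpy as np

def gp_posterior(k, X, y, X_star, sigma_n):
    """Minimal sketch of GP regression, Eqs. (2)-(3).

    k(A, B) returns the kernel matrix between the rows of A and B."""
    K = k(X, X) + sigma_n**2 * np.eye(len(X))      # k(X,X) + sigma_n^2 I
    K_star = k(X_star, X)                          # k(X_*, X)
    mu = K_star @ np.linalg.solve(K, y)            # posterior mean, Eq. (2)
    cov = k(X_star, X_star) - K_star @ np.linalg.solve(K, K_star.T)  # Eq. (3)
    return mu, cov
```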
The advantage of GPs is that the smoothness of the training set regularises their behaviour, which is controlled by the kernel function. We focus on kernels from the Matern family, stationary kernels of the form:
\[k_{\nu}(\mathbf{x},\mathbf{x}^{\prime})\equiv k_{\nu}(\mathbf{x}-\mathbf{x}^{\prime})=\sigma^{2}\frac{2^{1-\nu}}{\Gamma(\nu)}\left(\sqrt{2\nu}\frac{||\mathbf{x}-\mathbf{x}^{\prime}||}{\kappa}\right)^{\nu}K_{\nu}\left(\sqrt{2\nu}\frac{||\mathbf{x}-\mathbf{x}^{\prime}||}{\kappa}\right) \tag{4}\]
for \(\nu<\infty\), where \(\Gamma(\nu)\) is the Gamma function and \(K_{\nu}\) is the modified Bessel function of the second kind. Matern family kernels are favoured due to their interpretable behaviour with respect to their hyperparameters. Specifically, \(\sigma,\kappa,\nu\) control the GP's variability, smoothness and mean-squared differentiability. Moreover, the well-known squared exponential kernel, also known as radial basis function \(k_{\infty}(\mathbf{x}-\mathbf{x}^{\prime})=\sigma^{2}\exp\left(-||\mathbf{x}-\mathbf{x}^{ \prime}||^{2}/2\kappa^{2}\right)\) is obtained in the limit as \(\nu\to\infty\).
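As an illustration, the Matern kernel of Eq. (4) can be evaluated with scipy's modified Bessel function; the limit \(\mathbf{x}=\mathbf{x}^{\prime}\) must be handled explicitly since \(K_{\nu}\) diverges at zero. The function name and default hyperparameters below are assumptions of this sketch.

```python
import numpy as np
from scipy.special import gamma, kv

def matern_kernel(X, Y, nu=1.5, kappa=1.0, sigma=1.0):
    """Matern kernel of Eq. (4) between the rows of X and Y."""
    r = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    scaled = np.sqrt(2.0 * nu) * r / kappa
    K = sigma**2 * (2.0**(1.0 - nu) / gamma(nu)) * scaled**nu * kv(nu, scaled)
    K[scaled == 0.0] = sigma**2   # k_nu(0) = sigma^2; kv is singular at the origin
    return K
```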
### Scalar-valued GPs on Riemannian manifolds
In addition to their interpretable hyperparameters, Matern GPs lend themselves to generalisation over non-Euclidean domains. A formulation that will allow extension to the vector-valued case is the one by Whittle (1963) who has shown that in Euclidean domains \(\mathbb{X}=\mathbb{R}^{d}\) Matern GPs can be viewed as a stationary stochastic process satisfying the stochastic partial differential equation
\[\left(\frac{2\nu}{\kappa^{2}}-\Delta\right)^{\frac{\nu}{2}+\frac{d}{4}}f=\mathcal{W}, \tag{5}\]
where \(\Delta\) is the Laplacian and \(\mathcal{W}\) is the Gaussian white noise. Likewise, for \(\nu\to\infty\), the limiting GP satisfies \(\exp(-\kappa^{2}\Delta/4)f=\mathcal{W}\), where the left-hand side has the form of the heat kernel.
As shown by Borovitskiy et al. (2020), Eq. 5 readily allows generalising scalar-valued (\(d=1\)) Matern GPs to compact Riemannian manifolds \(\mathbb{X}=\mathcal{M}\) by replacing \(\Delta\) by the Laplace-Beltrami operator \(\Delta_{\mathcal{M}}\). The corresponding GPs are defined by \(f_{\mathcal{M}}\sim\mathcal{GP}(0,k_{\mathcal{M}})\), with kernel
\[k_{\mathcal{M}}(\mathbf{x},\mathbf{x}^{\prime})=\frac{\sigma^{2}}{C_{\nu}}\sum_{i=0}^ {\infty}\left(\frac{2\nu}{\kappa^{2}}+\lambda_{i}\right)^{-\nu-\frac{d}{2}}f_ {i}(\mathbf{x})f_{i}(\mathbf{x}^{\prime}), \tag{6}\]
where \((\lambda_{i},f_{i})\) are the eigenvalue-eigenfunction pairs of \(\Delta_{\mathcal{M}}\) and \(C_{\nu}\) is a normalisation factor.
### Scalar-valued GPs on graphs
Analogously, scalar-valued Matern GPs can be defined on graph domains, \(\mathbb{X}=G\) (Borovitskiy et al., 2021), by using the graph Laplacian in place of \(\Delta_{\mathcal{M}}\) as its discrete approximation. Specifically, the graph Laplacian is \(\mathbf{L}:=\mathbf{D}-\mathbf{W}\) with weighted adjacency matrix \(\mathbf{W}\in\mathbb{R}^{n\times n}\) of \(G\) and diagonal node degree matrix \(\mathbf{D}=\text{diag}(\mathbf{W}\mathbf{1}^{T})\). The graph Laplacian admits a spectral decomposition,
\[\mathbf{L}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{T}, \tag{7}\]
where \(\mathbf{U}\) is the matrix of eigenvectors and \(\mathbf{\Lambda}\) is the diagonal matrix of eigenvalues and, by the spectral theorem, \(\Phi(\mathbf{L})=\mathbf{U}\Phi(\mathbf{\Lambda})\mathbf{U}^{T}\) for some function \(\Phi:\mathbb{R}\to\mathbb{R}\). Therefore, choosing \(\Phi(\lambda)=\left(2\nu/\kappa^{2}+\lambda\right)^{\nu/2}\) obtains the operator on the left-hand side of Eq. 5 up to a scaling factor1. Using this, one may analogously write
Footnote 1: Note the different signs due to the opposite sign convention of the Laplacians.
\[\left(\frac{2\nu}{\kappa^{2}}-\mathbf{L}\right)^{\frac{\nu}{2}}\mathbf{f}=\mathcal{W}, \tag{8}\]
for a vector \(\mathbf{f}\in\mathbb{R}^{n}\). Thus, analogously to Eq. 6, the scalar-valued GP on a graph becomes \(f_{G}\sim\mathcal{GP}(0,k_{G})\), with kernel
\[k_{G}(p,q)=\sigma^{2}\mathbf{u}(p)\Phi(\mathbf{\Lambda})^{-2}\mathbf{u}(q)^{T} \tag{9}\]
where \(\mathbf{u}(p),\mathbf{u}(q)\) are the columns of \(\mathbf{U}\) corresponding to nodes \(p,q\in V\), respectively.
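A sketch of Eq. 9 (illustrative, not the authors' code): given the weighted adjacency matrix, the kernel between all pairs of nodes follows from a single eigendecomposition of the graph Laplacian, with the spectral filter chosen as above.

```python
import numpy as np

def graph_matern_kernel(W, nu=1.5, kappa=1.0, sigma=1.0):
    """Scalar-valued graph Matern kernel of Eq. 9 for all node pairs."""
    L = np.diag(W.sum(axis=1)) - W              # graph Laplacian L = D - W
    lam, U = np.linalg.eigh(L)                  # L = U diag(lam) U^T
    Phi = (2.0 * nu / kappa**2 + lam) ** (nu / 2.0)
    return sigma**2 * U @ np.diag(Phi**-2) @ U.T
```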
## 4 Intrinsic representation of vector fields over arbitrary latent manifolds
We may now construct GPs on unknown manifolds and associated tangent bundles.
### Vector-valued GPs on unknown manifolds
We consider training data consisting of pairs \(\{(\mathbf{x}_{i},\mathbf{v}_{i})|i=1,\dots,n\}=(X,V)\), where \(\mathbf{x}_{i}\) are samples from a manifold \(\mathcal{M}\subset\mathbb{R}^{d}\) and \(\mathbf{v}_{i}\) are sampled from the tangent bundle \(\mathcal{TM}=\cup_{i}\mathcal{T}_{i}\mathcal{M}\). If the dimension of the manifold is \(m\leq d\) the tangent spaces \(\mathcal{T}_{i}\mathcal{M}:=\mathcal{T}_{\mathbf{x}_{i}}\mathcal{M}\) anchored to \(\mathbf{x}_{i}\) are isomorphic as a vector space to \(\mathbb{R}^{m}\). Importantly, we assume that both \(\mathcal{M}\) and \(\mathcal{T}_{i}\mathcal{M}\) are _unknown_ and seek a GP to provide an implicit description of the vector field over \(\mathcal{M}\) that agrees with the training set and provides a continuous interpolation at out-of-sample test points with controllable smoothness properties.
**Approximating the manifold and the tangent bundle.** We first fit a proximity graph \(G=(V,E)\) to \(X\), defined based on some notion of similarity (spatial or otherwise) in the data. While \(G\) approximates \(\mathcal{M}\), it will not restrict the domain to \(V\) as in Borovitskiy et al. (2021). Then, to approximate \(\mathcal{TM}\), note that the tangent spaces do not come with preferred coordinates. However, being isomorphic to \(\mathbb{R}^{m}\), \(\mathcal{T}_{i}\mathcal{M}\) can be parametrised by \(m\) orthogonal vectors in the ambient space \(\mathbb{R}^{d}\), forming a local frame, or gauge, \(\mathbb{T}_{i}\). To obtain this frame, we take vectors \(\mathbf{e}_{ij}\in\mathbb{R}^{d}\) from \(i\) to its \(N\) nearest nodes \(j\), assuming that they span \(\mathcal{T}_{i}\mathcal{M}^{2}\) and form a matrix by stacking them column-wise. The left singular vectors corresponding to the \(m\) largest singular values yield the desired frame
\[\mathbb{T}_{i}=(\mathbf{t}_{i}^{(1)},\dots\mathbf{t}_{i}^{(m)})\in\mathbb{R}^{d\times m}. \tag{10}\]
Then, \(\hat{\mathbf{v}}_{i}=\mathbb{T}_{i}^{T}\mathbf{v}_{i}\) acts as a projection of the signal to the tangent space in the \(\ell_{2}\) sense.
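The construction of the local frames and the tangent-space projection can be sketched as follows; the helper names and arguments are illustrative, and the neighbourhood of each point is assumed to come from the proximity graph.

```python
import numpy as np

def local_frame(X, i, neighbours, m):
    """Local gauge T_i of Eq. 10 from the edge vectors e_ij around point i."""
    E = (X[neighbours] - X[i]).T                      # d x N matrix of edge vectors
    U, _, _ = np.linalg.svd(E, full_matrices=False)   # left singular vectors
    return U[:, :m]                                   # d x m frame spanning T_i M

def project_to_tangent(T_i, v_i):
    """l2-optimal tangent-space coordinates of an ambient vector v_i."""
    return T_i.T @ v_i
```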
**Constraining the vector field over the manifold.** Armed with the approximation of \(\mathcal{M}\) and \(\mathcal{TM}\) by \(G\) and \(\{\mathbb{T}_{i}\}\), we may define the connection Laplacian operator \(\mathbf{L}_{c}\) that will regularise the GP's behaviour by using the global smoothness of the vector field.
The notion of vector field smoothness is formalised by the parallel transport map \(\mathcal{P}_{j\to i}\) that aligns \(\mathcal{T}_{j}\mathcal{M}\) with \(\mathcal{T}_{i}\mathcal{M}\) to allow the comparison of vectors in a common space. While parallel transport is generally path dependent, we assume that \(i,j\) are close enough such that \(\mathcal{P}_{j\to i}\) is the unique smallest rotation. Indeed, the construction of the nearest neighbour proximity graph limits pairs \(i,j\)
Figure 1: **Construction of vector-valued Gaussian processes on unknown manifolds. A Input data consists of samples from a vector field over a latent manifold. B The manifold is approximated by a proximity graph. Black circles mark two sample points, \(i\) and \(j\), and their graph neighbourhood. C The tangent bundle is a collection of locally Euclidean vector spaces over the manifold. It is approximated by parallel transport maps between local tangent space approximations. D The eigenvectors of the connection Laplacian are used as positional encoding to define the GP that learns the vector field. E The GP is evaluated at unseen points to predict the smoothest possible vector field that is consistent with the training data. We use this GP to accurately predict singularities, near which sampling is typically sparse.**
to be close in space. This is known as the Levi-Civita connection and can be computed as a matrix \(\mathbf{O}_{ji}\in O(m)\) in the orthogonal group (rotation and reflection)
\[\mathbf{O}_{ji}=\arg\min_{\mathbf{O}\in O(m)}||\mathbb{T}_{i}-\mathbb{T}_{j} \mathbf{O}||_{F}, \tag{11}\]
where \(||\cdot||_{F}\) is the Frobenius norm and can be uniquely computed by the Kabsch algorithm in \(\mathcal{O}(m)\)-time (Kabsch, 1976).
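Eq. 11 is an orthogonal Procrustes problem, so the transport map between two neighbouring frames can be obtained directly from scipy; the helper below is a minimal sketch with illustrative names.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def parallel_transport(T_i, T_j):
    """Smallest rotation/reflection O_ji aligning frame T_j with T_i (Eq. 11)."""
    O_ji, _ = orthogonal_procrustes(T_j, T_i)   # argmin_O ||T_j O - T_i||_F over O(m)
    return O_ji
```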
Using the parallel transport maps, we may define the connection Laplacian (Singer and Wu, 2012), a block matrix \(\mathbf{L}_{c}\in\mathbb{R}^{nm\times nm}\), whose \((i,j)\) block entry is given by
\[\mathbf{L}_{c}(i,j)=\begin{cases}D_{ii}\mathbf{I}_{m\times m}&\text{for }i=j\\ W_{ij}\mathbf{O}_{ij}&\text{for }i,j\text{ adjacent.}\end{cases} \tag{12}\]
Let us remark that \(\mathbf{L}_{c}\) prescribes the smoothness of the vector field over an underlying continuous manifold that agrees with the available training data. In fact, as \(n\to\infty\), the eigenvectors of \(\mathbf{L}_{c}\) converge to the eigenfunctions of the connection Laplacian over the tangent bundle (Singer and Wu, 2017), such that the corresponding continuous signal satisfies the associated vector diffusion process (Berline et al., 1996). The solution of this diffusion process minimises the vector Dirichlet energy \(\sum_{ij\in E}w_{ij}|\mathbf{v}_{i}-\mathbf{O}_{ij}\mathbf{v}_{j}|^{2}\), which quantifies the smoothness of the vector field (Knoppel et al., 2015).
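A dense sketch of the block assembly in Eq. 12 is given below; it reuses the Procrustes-based transport of Eq. 11, all names are illustrative, and sparse storage would be preferable for large graphs.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def connection_laplacian(W, frames, m):
    """Dense sketch of the block connection Laplacian L_c of Eq. 12.

    W      : n x n weighted adjacency of the proximity graph
    frames : length-n list of d x m local frames T_i (Eq. 10)"""
    n = W.shape[0]
    Lc = np.zeros((n * m, n * m))
    deg = W.sum(axis=1)
    for i in range(n):
        Lc[i*m:(i+1)*m, i*m:(i+1)*m] = deg[i] * np.eye(m)          # diagonal blocks
        for j in np.flatnonzero(W[i]):
            if j == i:
                continue
            O_ij, _ = orthogonal_procrustes(frames[i], frames[j])  # transport map, Eq. 11
            Lc[i*m:(i+1)*m, j*m:(j+1)*m] = W[i, j] * O_ij
    return Lc
```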
**Vector-field GP on arbitrary latent manifolds.** We are now ready to define a GP to regress the vector field over \(\mathcal{M}\). To this end, we consider a positional encoding of points on the tangent bundle \(\mathcal{TM}\) based on the spectrum of the connection Laplacian, \(\mathbf{L}_{c}=\mathbf{U}_{c}\mathbf{\Lambda}_{c}\mathbf{U}_{c}^{T}\), where \(\mathbf{\Lambda}_{c},\mathbf{U}_{c}\in\mathbb{R}^{nm\times nm}\). Comparing with Eq. 7, one notices that each node represents a vector space of dimension \(m\). Using the eigenvectors corresponding to the \(k\) smallest eigenvalues, and for some point \(\mathbf{x}\) corresponding to node \(i\), we consider the matrix
\[(\mathbf{U}_{c})_{i}=\sqrt{nm}\begin{pmatrix}u_{im+1,1}&\ldots&u_{im+1,k}\\ \vdots&&\vdots\\ u_{(i+1)m,1}&\ldots&u_{(i+1)m,k}\end{pmatrix}\in\mathbb{R}^{m\times k}, \tag{13}\]
obtained by keeping \(k\) columns of \(\mathbf{U}_{c}\). This allows us to define the positional encoding of \(\mathbf{x}\) by mapping the eigencoordinates defined in the respective tangent spaces using \(\mathbb{T}_{i}\) back into the ambient space
\[\mathbf{P}_{\mathbf{x}}=\mathbb{T}_{i}(\mathbf{U}_{c})_{i}\in\mathbb{R}^{d\times k}. \tag{14}\]
Then, by analogy to Eqs. 5-6, we define the vector-valued Matern GP that represents the vector field over the manifold as \(f_{\mathcal{TM}}:\mathcal{TM}\to\mathbb{R}^{d}\) with an \(\mathbb{R}^{d\times d}\)-valued kernel
\[k_{\mathcal{TM}}(\mathbf{v},\mathbf{v}^{\prime})=\sigma^{2}\mathbf{P}_{\mathbf{v}}\Phi(\mathbf{ \Lambda}_{c})^{-2}\mathbf{P}_{\mathbf{v}^{\prime}}^{T}, \tag{15}\]
where \(\mathbf{v},\mathbf{v}^{\prime}\in\mathcal{TM}\). Note that we recover \(f_{\mathcal{M}}\) for scalar signals (\(m=1\)), where the Laplace-Beltrami operator equals the connection Laplacian for a trivial line bundle on \(\mathcal{M}\). Therefore, the tangent spaces become trivial (scalar), recovering the well-known Laplacian eigenmaps \(\mathbf{P}_{\mathbf{x}}=\sqrt{n}(u_{i,1},\ldots,u_{i,k})\). Likewise, \(k_{\mathcal{TM}}(\mathbf{v},\mathbf{v}^{\prime})\) reduces to the scalar-valued kernel \(k_{\mathcal{M}}(\mathbf{x},\mathbf{x}^{\prime})\) for \(\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{M}\) in Eq. 6. Note that the kernel in Eq. 15 is a member of a general class defined in Hutchinson et al. (2021, Proposition 7) arising as an isometric projection of the tangent bundle into Euclidean space. Our construction makes this projection explicit using the connection Laplacian positional encoding (Eq. 13) and tangent spaces (Eq. 10). In Fig. 2, we show that this construction leads to specific performance advantages in recovering vector field singularities.
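To make Eqs. 13-15 concrete, a dense sketch of the positional encoding and the resulting matrix-valued kernel between two nodes is given below; the function name, defaults and the Matern filter choice are assumptions of this sketch, and the full eigendecomposition is only practical for moderate \(n\).

```python
import numpy as np

def rvgp_kernel(Lc, frames, i, j, m, k=50, nu=1.5, kappa=1.0, sigma=1.0):
    """d x d RVGP kernel of Eq. 15 between nodes i and j (dense sketch)."""
    lam, U = np.linalg.eigh(Lc)
    lam, U = lam[:k], U[:, :k]                                  # k smallest eigenpairs
    n = len(frames)
    Phi_inv2 = np.diag((2.0 * nu / kappa**2 + lam) ** (-nu))    # Phi(Lambda_c)^{-2}

    def pos_encoding(a):                                        # Eqs. 13-14
        Uc_a = np.sqrt(n * m) * U[a*m:(a+1)*m, :]               # m x k block of U_c
        return frames[a] @ Uc_a                                 # P_x in R^{d x k}

    return sigma**2 * pos_encoding(i) @ Phi_inv2 @ pos_encoding(j).T
```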
### Scalable training via inducing point methods
A notable drawback of GPs is their computational inefficiency, largely driven by the need to compute an \(n\times n\) covariance matrix. Therefore, inducing point methods, which reduce the effective number of data points to a set of \(n^{\prime}<n\) inducing points, have become a mainstay of GPs in practice (Titsias, 2009). Hutchinson et al. (2021) showed that inducing point methods such as that of Titsias (2009) are readily extendible to vector fields over Riemannian manifolds, provided the covariance matrix of inducing points is represented in tangent space coordinates. Since the kernel of RVGP is constructed from a positional encoding using connection Laplacian eigenvectors, which are expressed in local coordinates, our method is readily amenable to computation by inducing point methods, of which we provide an implementation.
## 5 Experiments
### Manifold-consistent interpolation of vector fields
We expected that RVGP fitted to sparse vector samples over an unknown manifold would leverage the global regularity of the vector field to provide accurate out-of-sample predictions. Thus, we conducted two experiments on the Stanford Bunny and toroidal surface mesh to test the RVGP's ability to super-resolve sparse samples and inpaint missing regions containing singularities on diverse manifold topologies. In both experiments, we first generated a ground truth vector signal. Next, we drew \(n\) evenly sampled anchor points \(\{\mathbf{x}_{i}\}\) using the furthest point sampling algorithm (Qi et al., 2017), which has a parameter \(\alpha\) quantifying the average pairwise distance of points relative to the manifold diameter. We then sampled corresponding vectors \(\{\mathbf{v}_{i}\}\) from a uniform distribution on the sphere \(S^{3}\), projected them onto their respective tangent spaces \(\hat{\mathbf{v}}_{i}=\mathbb{T}_{i}^{T}\mathbf{v}_{i}\) and used the vector heat method (Sharp et al., 2019) to find the smoothest vector field. Specifically, concatenating signals as \(\hat{\mathbf{v}}=\|_{i=1}^{n}\hat{\mathbf{v}}_{i}\in\mathbb{R}^{nm\times 1}\), the vector heat method obtains \(\bar{\mathbf{v}}(\tau)=\hat{\mathbf{v}}(\tau)/u(\tau)\), where \(\hat{\mathbf{v}}(\tau)\) is the solution of the vector heat equation \(\hat{\mathbf{v}}(\tau)=\exp{(-\mathbf{L}_{c}\tau)}\,\hat{\mathbf{v}}\) and \(u(\tau)\) is the solution of the scalar heat equation \(u(\tau)=\exp{(-\mathbf{L}\tau)}\,|\hat{\mathbf{v}}|\). To obtain a smooth field, we ran the process until diffusion time \(\tau=100\). On surfaces of genus \(0\), this process will lead to at least one singularity.
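As a rough illustration of the smoothing step above (not the discretisation of Sharp et al. (2019)), the diffused and normalised field can be computed with dense matrix exponentials on small meshes; the function name and arguments below are assumptions of this sketch.

```python
import numpy as np
from scipy.linalg import expm

def vector_heat_smooth(Lc, L, v_hat, tau=100.0):
    """Normalised vector heat diffusion used to generate smooth ground-truth fields.

    Lc    : nm x nm connection Laplacian,  L : n x n graph Laplacian
    v_hat : stacked tangent-space signals of shape (n*m,)"""
    n = L.shape[0]
    m = Lc.shape[0] // n
    v_tau = expm(-tau * Lc) @ v_hat                                       # vector heat equation
    u_tau = expm(-tau * L) @ np.linalg.norm(v_hat.reshape(n, m), axis=1)  # scalar heat equation
    return (v_tau.reshape(n, m) / u_tau[:, None]).reshape(-1)
```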
**Superresolution.** In the first experiment, we asked if RVGP can smoothly interpolate (also known as super-resolve) the vector field from sparse samples. To this end, we fitted a graph over the bunny mesh and trained RVGP using vectors over \(50\%\) of the nodes, reserving the remaining \(50\%\) for testing. We found that RVGP predictions were in excellent visual alignment with the training vector field (Fig. 2A). To investigate the sensitivity of predictions to sampling density and manifold description, we conducted ablation studies by varying both \(\alpha\) and the number of eigenvectors \(k\). As quantified by the inner product between predicted and test vectors, we found that RVGP produces accurate alignment with only \(10\) eigenvectors over a broad range of sampling densities (Fig. 2B).
**Inpainting.** In the second experiment, we tested RVGP's ability to inpaint whole vector field regions containing singularities. This experiment is more challenging than classical inpainting because it requires our method to infer the smoothest topologically consistent vector field. To this end, we masked off the vortex singularity and used the remaining points to train RVGP. For comparison, we also trained a GP with a radial basis function (RBF) kernel using graph Laplacian eigenvectors of the manifold as positional encoding, which treats vector entries independently. We found that vectors predicted by RVGP closely followed the mesh surface, aligned with the training set on the mask boundary and smoothly resolved the singularity by gradually reducing the vector amplitudes to zero. By contrast, the RBF kernel did not produce vectors with smooth transitions on the mask boundary, and its predictions often protruded from the mesh surface. This example thus shows that the connection Laplacian
Figure 2: **Superresolution and inpainting.****A** Sparse samples randomly distributed over the Stanford bunny and torus surface mesh are interpolated to a higher resolution (\(k=50\)). **B** Ablation studies for the Stanford Bunny, showing the dependence of mean alignment of the superresolved vector fields in the test set against the average distance between manifold points \(\alpha\) (for \(k=50\) fixed) and the number of eigenvectors \(k\). **C** Prediction in a masked area containing vector field singularity. Predictions using an RBF kernel with Laplacian eigenvectors as positional encoding produce discontinuities along the masked boundary (triangle), vectors that protrude the mesh surface (star) and do not converge to zero magnitude at the singularity. RVGP using connection Laplacian eigenvectors produces smoothly varying inpainting (\(k=50\)).
positional encoding with vector field kernel provides sufficient regularity to learn vector fields over complex shapes.
### Superresolution of EEG data
Finally, as a biologically and clinically relevant use case, we applied RVGP to superresolve electroencephalography (EEG) recordings from human subjects. EEG recordings measure spatiotemporal wave patterns associated with neural dynamics, which play a fundamental role in human behaviour (Sato et al., 2012; Xu et al., 2023). Accurately resolving these dynamics requires high-density EEG setups in excess of 200 channels (Robinson et al., 2017; Seeber et al., 2019; Siclari et al., 2018). However, due to long setup times during which signal quality can rapidly degrade, experimentalists and clinicians commonly resort to low-density recordings with 32 or 64 channels (Chu, 2015).
Thus, we asked whether superresolving low-density 64-channel recordings (Fig. 3A) into high-density 256-channel recordings (Fig. 3C) using RVGP can facilitate biological discovery and clinical diagnostics. As a ground truth, we collected 256-channel EEG recordings of resting-state brain activity from 33 Alzheimer's patients (AD) and 28 age-matched healthy controls (see Appendix A.1). We focused on low-frequency alpha waves spanning the 8-15 Hz range, which represent the dominant rhythm at rest (Berger, 1934) and exhibit impaired dynamics in AD (Moretti et al., 2004; Besthorn et al., 1994; Dauwels et al., 2010). After preprocessing of EEG time series (see Appendix A.1), we used a triangulated mesh of the head surface, with vertices given by 256 electrode locations, and finite differencing to compute the vector field at mesh vertices (Fig. 3B,C (ground truth), Appendix A.2). Then, we constructed RVGP kernels using the connection Laplacian eigenvectors derived from a \(k\)-nearest neighbour graph (\(k=5\)) fit to mesh vertices. Finally, we trained RVGP using vectors at 64 training vertices (Fig. 3B) and then used it to infer the vector field at the remaining 192 test vertices (Fig. 3C). As a benchmark, we also used a channel-wise linear and spline interpolation of the vector field.
Figure 3: **Reconstruction of spatiotemporal wave patterns in the human EEG.****A** Snapshot of an alpha wave pattern (8-15 Hz) recorded on low density (64 channel) EEG from a healthy subject projected in two dimensions. **B** Phase field of an alpha wave. Vector field denotes the spatial gradient of the voltage signal. **C** Ground-truth and reconstructed high-density phase field (256 channel) using RVGP, linear and spline interpolation. Streamlines, computed based on the vector field, highlight features of the phase field. RVGP significantly better preserves singularities, i.e., sources, sinks and vortices. **D** Reconstruction accuracy, measured by the preservation of singularities. **E** Receiver operating characteristic (ROC) for binary classification of patients with Alzheimer’s disease against healthy controls using a linear support vector machine trained on the divergence and vorticity fields.
To assess the quality of reconstructions, we computed the divergence and curl of the predicted EEG vector field and computed the mean absolute error (MAE) relative to the ground truth 256-channel EEG. The divergence and curl have previously been used to identify singularities in neural wave patterns from human neuroimaging recordings and have been linked to cognitive function and behaviour (Roberts et al., 2019; Xu et al., 2023). We found that the RVGP reconstruction closely approximated the divergence and curl of the high-density EEG (Fig. 3C). For visual comparison, we show a single time snapshot with characteristic vector field singularities such as sources, sinks and vortices. In particular, RVGP reconstruction substantially better preserved these singularities than linear interpolation, which had the tendency to underfit the vector field, and spline interpolation, which introduced spurious local structures. This observation is corroborated by significantly lower angular error and curl-divergence MAE for all subjects for RVGP compared with benchmark (\(n=61\), Fig. 3D).
Given the superior reconstruction accuracy of our approach, we sought to determine if it could enhance the classification accuracy between patients with Alzheimer's Disease (AD) and healthy controls. Contemporary diagnostic methods for AD are costly, invasive, and lengthy in procedure (Zetterberg and Bendlin, 2021). To underscore the potential clinical significance and scalability of our methodology, we employed a linear support vector machine for the classification of AD patients versus age-matched controls. This classification was based on the reconstructed divergence and curl fields derived from a brief, one-minute resting-state low-density EEG recording - a procedure that is feasibly integrated into clinical settings due to its non-invasive nature and cost-efficiency. Our results show significantly higher than state-of-the-art classification accuracy, approaching that derived from the ground truth high-density EEG signal (Figure 3E).
## 6 Conclusion
In this study, we introduced RVGP, a novel extension of Gaussian processes designed to model vector fields on latent Riemannian manifolds. Utilising the spectrum of the connection Laplacian operator, RVGP intrinsically captures the manifold's geometry and topology and the vector field's smoothness. This enables the method to learn global patterns while preserving singularities, thereby filling a significant gap in existing approaches that are limited to known, analytically tractable manifolds. A key strength of RVGP is its data-driven, intrinsic construction via a proximity graph, which enhances its practical utility by making it highly applicable to real-world datasets where the manifold is often not explicitly known. Demonstrated across diverse scientific domains such as neuroscience and geometric data analysis, RVGP not only advances the field of geometrically-informed probabilistic modelling but also offers a statistical tool for various high-impact applications, including clinical neuroscience, that is robust to sampling density and noisy data.
### Author Contributions
AG and PV devised the RVGP method and developed the theoretical foundations with contributions from RLP. AG and RLP wrote the RVGP code. AG, RLP and MVC designed the experiments. MVC and NG conceptualised the EEG preprocessing pipeline. MD, EM, DS, and PAM collected the EEG data. MVC, RLP and NG performed the analysis of the EEG data with contributions from AG. AG, RLP, MVC and PV wrote the paper.
### Acknowledgments
We thank Nicolas Aspert for the critical reading and feedback on the manuscript. AG acknowledges support from an HFSP Cross-disciplinary Postdoctoral Fellowship (LT000669/2020-C). PAM acknowledges funding from the NIHR Biomedical Research Centre at Imperial College London. RLP acknowledges the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Project-ID 424778381-TRR 295.
### Code availability
The Python code to reproduce our results is openly available at [https://github.com/agosztolai/RVGP](https://github.com/agosztolai/RVGP). |
2309.07893 | Choosing a Proxy Metric from Past Experiments | In many randomized experiments, the treatment effect of the long-term metric
(i.e. the primary outcome of interest) is often difficult or infeasible to
measure. Such long-term metrics are often slow to react to changes and
sufficiently noisy they are challenging to faithfully estimate in short-horizon
experiments. A common alternative is to measure several short-term proxy
metrics in the hope they closely track the long-term metric -- so they can be
used to effectively guide decision-making in the near-term. We introduce a new
statistical framework to both define and construct an optimal proxy metric for
use in a homogeneous population of randomized experiments. Our procedure first
reduces the construction of an optimal proxy metric in a given experiment to a
portfolio optimization problem which depends on the true latent treatment
effects and noise level of experiment under consideration. We then denoise the
observed treatment effects of the long-term metric and a set of proxies in a
historical corpus of randomized experiments to extract estimates of the latent
treatment effects for use in the optimization problem. One key insight derived
from our approach is that the optimal proxy metric for a given experiment is
not apriori fixed; rather it should depend on the sample size (or effective
noise level) of the randomized experiment for which it is deployed. To
instantiate and evaluate our framework, we employ our methodology in a large
corpus of randomized experiments from an industrial recommendation system and
construct proxy metrics that perform favorably relative to several baselines. | Nilesh Tripuraneni, Lee Richardson, Alexander D'Amour, Jacopo Soriano, Steve Yadlowsky | 2023-09-14T17:43:02Z | http://arxiv.org/abs/2309.07893v2 | # Choosing a Proxy Metric from Past Experiments
###### Abstract
In many randomized experiments, the treatment effect of the long-term metric (i.e. the primary outcome of interest) is often difficult or infeasible to measure. Such long-term metrics are often slow to react to changes and sufficiently noisy they are challenging to faithfully estimate in short-horizon experiments. A common alternative is to measure several short-term proxy metrics in the hope they closely track the long-term metric - so they can be used to effectively guide decision-making in the near-term. We introduce a new statistical framework to both define and construct an optimal proxy metric for use in a homogeneous population of randomized experiments. Our procedure first reduces the construction of an optimal proxy metric in a given experiment to a portfolio optimization problem which depends on the true latent treatment effects and noise level of experiment under consideration. We then denoise the observed treatment effects of the long-term metric and a set of proxies in a historical corpus of randomized experiments to extract estimates of the latent treatment effects for use in the optimization problem. One key insight derived from our approach is that the optimal proxy metric for a given experiment is not apriori fixed; rather it should depend on the sample size (or effective noise level) of the randomized experiment for which it is deployed. To instantiate and evaluate our framework, we employ our methodology in a large corpus of randomized experiments from an industrial recommendation system and construct proxy metrics that perform favorably relative to several baselines.
## 1 Introduction
Randomized controlled trials (RCTs) are the gold standard approach for measuring the causal effect of an intervention (Hernan and Robins, 2010); however, designing and analyzing high-quality RCTs requires various considerations to ensure scientifically robust results. For example, an experimenter must clearly define the intervention, control, and choose a primary outcome for the study. In this work, we will assume that the intervention and control are clearly defined, and consider the problem of choosing a good primary outcome. A common approach is to choose the primary outcome to be a key metric which drives downstream decision-making. Such metrics are critical components in the decision-making pipelines of many large-scale technology companies (Chen and Fu, 2017; Rachitsky) as well as used to guide policy decisions in economics and medicine (Athey et al., 2019; Elliott et al., 2015). Unfortunately, direct measurement of such a metric can be impractical or infeasible. In many cases, they are long-term outcomes observed with a significant temporal delay, making them slow to move (i.e. insensitive) in the short term, and inherently noisy. Moreover, they may be prohibitively expensive to query.
On the other hand, proxy metrics (or surrogates) that are easier to measure or faster to react are often available to use in lieu of the long-term outcome. For example in clinical settings, CD4 white-blood cell counts in blood serve as a surrogate for mortality due to AIDS (Elliott et al., 2015), while in online experimentation platforms diversity of consumed content serves a proxy for long-term visitation frequencies (Wang et al., 2022). A significant literature exists on designing and analyzing proxy metrics and experiments that use them as a primary outcome. One important question addressed by this literature is choosing (or combining) proxy metrics to be a good surrogate for measuring the effect of the intervention on the long-term outcome (Prentice, 1989; Hohnhold et al., 2015; Parast et al., 2017; Athey et al., 2019; Wang et al., 2022, 2023). To do so, one needs a principled reason for why the measured treatment effect on the proxy outcome is related to the treatment effect on the long-term outcome. Frequently, this is done by making _causal_ assumptions about the relationship between the treatment, proxy outcome, and long-term outcome (see, e.g.,
VanderWeele, 2013; Athey et al., 2019; Kallus and Mao, 2022). However, motivated by the unique way that trials are run in technology product applications, we take a different approach based on _statistical_ regularity assumptions in a population of experiments, similar to meta-analytic approaches such as that taken by Elliott et al. (2015).
RCTs performed in technology products are typically referred to as A/B tests. They are used for a wide variety of applications in the technology industry, however one of the most common applications is for assessing the effect of a candidate launch of a new product feature or change on the user's experience. If the results of the A/B test suggest that the candidate launch has a positive effect on the user experience, then it will be deployed to all users. Depending on the scale of the product and engineering team, many candidate launches requiring many A/B tests may be required on a regular basis. The results of these A/B tests on long-term outcomes and proxy metrics are logged, serving as a history of past candidate launches that we may use to guide the choice of future proxy metrics to use for decision making.
This perspective has been studied in the technology research literature previously, for example in Richardson et al. (2023) and Wang et al. (2022), to develop useful heuristics for choosing a proxy metric for use in future A/B tests. In this work, we define a precise statistical framework for choosing a proxy metric based on this historical A/B test data, and develop a method for optimizing a composite proxy, an affine combination of base proxy metrics, that can be used as a primary outcome for future A/B tests.
The central contributions of our paper are the following:
* We introduce a new measure of _proxy quality_ which combines two key desiderata - short-term sensitivity of the proxy metric and directional alignment with the long-term outcome - into a single objective (see Equation (2)).
* We show this new notion of _proxy quality_ can be used to construct optimal proxy metrics for new A/B tests via an efficient two-step procedure. The first step reduces the construction of an optimal weighted combination of base proxies, that maximize our definition of proxy quality, to a classic portfolio optimization problem. This optimization problem is a function of the latent variability of the unobservable population treatment effects and the noise level of the new experiment under consideration (see Section 2.1.2). We then use a hierarchical model to denoise the observed treatment effects on the proxy and long-term outcome in a historical corpus of A/B tests to extract the variation in the unobserved population treatment effects (see Section 2.2). The variance estimates of the population TEs are then used as plug-ins to the aforementioned optimization.
* We highlight the _adaptivity_ of our proxy metric procedure to the inherent noise level of each experiment for which it will be used. In our framework the optimal proxy metric for a given experiment is not apriori fixed. Rather it should depend on the sample size (or effective noise level) of the randomized experiment for which it is deployed in order to profitably trade-off bias from disagreement with the long-term outcome and intrinsic variance (see Section 2.1.1 and Figure 3).
* Finally, we instantiate and evaluate our framework on a set of 307 real A/B tests from an industrial recommendation engine showing how the proxy metrics we construct can improve decision-making in a large-scale system (see Section 3).
### Statistical Framework
Consider a corpus of \(K\) randomized experiments (or A/B tests) where the \(i\in\{1,\ldots,K\}\)-th experiment is of sample size \(n_{i}\). In each experiment, there is a specific intervention that has some population treatment effect (TE) that we denote \(\Delta_{i}\).1 In the experiment, we measure an estimated TE \(\hat{\Delta}_{i}\) on the subset of the population included in our experiment. Note that the entire population may be included in the experiment, but \(\hat{\Delta}_{i}\) remains a random quantity, given \(\Delta_{i}\), because of the random assignment of treatments in the experiment.
Footnote 1: This may be an average TE (ATE) or relative ATE, however, we will not emphasize the differences between these two as they are immaterial for this work.
To differentiate between the TE on the long-term outcome and the proxy metrics, we will attach a superscript \(N\) as in \(\Delta_{i}^{N}\) or \(\hat{\Delta}_{i}^{N}\) for the long-term outcome (we use \(N\) since these are sometimes referred to as north star metrics), and a superscript \(P\) as in \(\boldsymbol{\Delta}_{i}^{P}\) and \(\hat{\boldsymbol{\Delta}}_{i}^{P}\) for the proxy metrics. Note that there may be multiple base proxy metrics, so
\(\mathbf{\Delta}_{i}^{P}\in\mathbb{R}^{d}\). Throughout, we assume that \(n_{i}\) is large enough, and that the experimental design is sufficiently regular such that conditional on \((\Delta_{i}^{N},\mathbf{\Delta}_{i}^{P})\), \((\hat{\Delta}_{i}^{N},\hat{\mathbf{\Delta}}_{i}^{P})\) is well-approximated by a Normal distribution centered around the population \(\text{TE}\)2 (due to the central limit theorem); and that the joint (within-experiment) covariance of \((\hat{\Delta}_{i}^{N},\hat{\mathbf{\Delta}}_{i}^{P})\), denoted as \(\mathbf{\Xi}\), has a good estimator, denoted as \(\mathbf{\hat{\Xi}}\). Our discussion is agnostic to the precise estimator of the TEs and their covariances. We only require black-box access to their values. For the historical corpus of \(K\) randomized experiments, we assume that these triplicates of measurements \(\{(\hat{\Delta}_{i}^{N},\hat{\mathbf{\Delta}}_{i}^{P},\hat{\mathbf{\Xi}}_{i})\}_{i=1}^ {K}\) are available.
Footnote 2: Note that in our framework, the notion of proxy quality in Section 2.1.1 and Section 2.1.2 only relies on low-order moments and doesn’t explicitly require Gaussianity but the estimation procedure in Section 2.2 makes explicit use of this structure.
Our goal in this paper is to revisit the proxy metric problem: the selection of short-term proxy metrics (or a weighted combination thereof) that track the long-term outcome in a new \(K+1\)st experiment where measurements of the long-term outcome are unavailable, but measurements of short-term proxy metrics are. In order to develop a statistical framework to construct proxies in a new experiment, we leverage a meta-analytic approach to model the relationship between different experiments. To this end, we assume the population TEs for each experiment are drawn i.i.d. from a common joint distribution \(\mathcal{D}\),
\[\begin{pmatrix}\Delta_{i}^{N}\\ \mathbf{\Delta}_{i}^{P}\end{pmatrix}\sim\mathcal{D}(\cdot)\ \ \text{i.i.d.} \tag{1}\]
supported over \(\mathbb{R}^{d+1}\). We acknowledge this assumption is strong and not suitable for all applications. However, in our motivating application of interest - studying a corpus of AB tests from a large technology company - historical intuition and various tests do not provide significant evidence this assumption is violated. The approach of placing a distributional prior on the population ATE in similar settings of A/B testing at large-scale technology companies (Deng and Shi, 2016; Deng) as well as other meta-analytic studies of RCTs (Elliott et al., 2015; Elliott, 2023) has also been advocated for as a useful assumption in prior work.
## 2 Methods
With the above setting in place we first define the measure of quality of a proxy metric which relates the estimated TE on the short-term proxies to the population TE on the long-term outcome for a new experiment. Subsequently, we show how the relevant latent parameters contained in the definition of proxy quality can be efficiently estimated via a hierarchical model.
### Optimal Proxy Metrics
In order to judge the quality of a proxy metric we first define a new notion of utility for a proxy metric. Our key insight is that in a new experiment3 where \(\left(\Delta^{N}\quad\mathbf{\Delta}^{P}\right)^{\top}\sim\mathcal{D}(\cdot)\), the _observed_ TEs of the short-term proxies \(\hat{\mathbf{\Delta}}^{P}\) should closely track the _latent_ TE of the long-term outcome (see Figure 1).4 This is because decisions that are intended to move \(\Delta^{N}\) will be made on the basis of \(\hat{\mathbf{\Delta}}^{P}\). Thus, we would like these quantities to be well-correlated.
Footnote 3: In the following since we assume all the experiments are i.i.d. we suppress index notation on this arbitary experiment drawn from \(\mathcal{D}(\cdot)\).
Footnote 4: Note in a new experiment the estimated treatment effect on the long-term outcome \(\hat{\Delta}^{N}\) may be unavailable.
#### 2.1.1 Proxy Quality of a Single Short-Term Metric in a New Experiment
For simplicity, we first consider the case when the vector-valued sequence of proxies reduces to a single scalar proxy. In order to capture the intuition that the short-term estimated proxy TE, \(\hat{\Delta}^{P}\) should track the population long-term outcome TE, \(\Delta^{N}\) we define the _proxy quality_ as the correlation between the aforementioned quantities. The correlation is a simple and natural measure which captures the predictive relationship between the proxy metric and long-term outcome. Under stronger conditions in Section 2.2, we also argue that optimizing for this measure of proxy quality minimizes the probability of a (signed) decision error or surrogate paradox.
In our setting we consider the case where the estimated TEs are unbiased estimators of their underlying latent population quantities - so we can parameterize \(\hat{\Delta}^{P}=\Delta^{P}+\sqrt{\Xi^{PP}}\cdot\epsilon\), where \(\epsilon\) is an independent random zero-mean,
unit-variance random variable. Hence we can define and simplify the _proxy quality_ as:
\[\mathrm{corr}(\Delta^{N},\hat{\Delta}^{P})=\frac{\mathrm{Cov}(\Delta^{N},\Delta^{ P})}{\sqrt{\mathrm{Var}(\Delta^{N})(\mathrm{Var}(\Delta^{P})+\Xi^{P,P})}}=\frac{ \mathrm{corr}(\Delta^{P},\Delta^{N})}{\sqrt{1+\frac{\Xi^{P,P}}{\mathrm{Var}( \Delta^{P})}}}. \tag{2}\]
In our setting, the definition of proxy quality decomposes the predictive relationship between the estimated proxy TE and population long-term outcome TE into latent predictive correlation \(\mathrm{corr}(\Delta^{P},\Delta^{N})\) - a property of the distribution \(\mathcal{D}(\cdot)\) - and an effective inverse signal-to-noise ratio \(\Xi^{PP}/\mathrm{Var}(\Delta^{P})\) - which is also a function of the noise level of the experiment. We now make several comments on the aforementioned quantities.
* The latent predictive correlation \(\mathrm{corr}(\Delta^{P},\Delta^{N})\) tracks the alignment between the population proxy metric TE and the population long-term outcome TE. In particular, this correlation is reflective of the intrinsic predictive quality of a fixed proxy metric. This quantity is not easily accessible since we do not directly observe data sampled from \(\mathcal{D}(\cdot)\). We return to the issue of estimating such quantities in Section 2.2.
* The quantity \(\Xi^{PP}/\mathrm{Var}(\Delta^{P})\) computes the ratio of the within-experiment noise in the estimated proxy metric TE--due to fluctuations across experimental units and treatment assignments--to the latent variation of the population proxy metric TE across experiments. For the former quantity we expect \(\Xi^{PP}\) to depend on the size of the randomized experiment in consideration (i.e. \(\Xi^{PP}\sim\frac{1}{n}\), where \(n\) is the sample size of the experiment), since it is a variance over independent treatment units. Meanwhile \(\mathrm{Var}(\Delta^{P})\) captures how easily the population proxy metric TE moves in the experiment population \(\mathcal{D}(\cdot)\). In a (large enough) given experiment, \(\Xi^{PP}\) is easily estimated by \(\hat{\Xi}^{PP}\) using the sample covariance estimator. Meanwhile, \(\mathrm{Var}(\Delta^{P})\) is difficult to measure directly, just like \(\mathrm{corr}(\Delta^{P},\Delta^{N})\). We later show how to use a hierarchical model to estimate these parameters (see Section 2.2). Finally, it is worth noting this ratio term in the denominator is also closely related to a formal definition of metric sensitivity which appears in the A/B testing literature [6].
Together the numerator and denominator in Equation (2) trade off two (often competing) desiderata into a single objective: the numerator favors alignment with the population TE on the long-term outcome while the denominator downweights this by the signal-to-noise ratio of the proxy metric.5 One unique property of this proxy quality measure is that, given a set of base proxies, the "optimal" single proxy is not an intrinsic property of the proxy metric or distribution of treatment effects captured by \(\mathcal{D}(\cdot)\). Rather, it also depends on the experimental design. Specifically, it is a function of the experiment sample size \(n\), which will control the size of \(\Xi^{PP}\). This behavior represents a form of bias-variance trade-off. For large sample sizes, as \(\Xi^{PP}\to 0\), Equation (2) will favor less biased metrics whose population-level TEs are aligned with the long-term outcome (i.e. the numerator is large). Meanwhile, for small sample sizes where \(\Xi^{PP}\) is large, Equation (2) will favor less noisy metrics with a high signal-to-noise ratio so the denominator is small.
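A plug-in evaluation of Eq. (2) is a one-liner once the latent moments have been estimated; the toy numbers below are purely hypothetical and only illustrate how the same proxy scores higher in a larger (lower-noise) experiment.

```python
import numpy as np

def scalar_proxy_quality(var_P, var_N, cov_NP, xi_PP):
    """Plug-in proxy quality of Eq. (2) from latent moments and experiment noise."""
    latent_corr = cov_NP / np.sqrt(var_N * var_P)
    return latent_corr / np.sqrt(1.0 + xi_PP / var_P)

# Hypothetical numbers: the identical proxy is more useful in a larger experiment.
print(scalar_proxy_quality(var_P=1.0, var_N=1.0, cov_NP=0.6, xi_PP=1.0))    # ~0.42
print(scalar_proxy_quality(var_P=1.0, var_N=1.0, cov_NP=0.6, xi_PP=0.01))   # ~0.60
```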
Figure 1: In a new experiment, we view the observed TEs as being generated from their corresponding (unobserved) latent values by a noisy channel which adds independent, mean-zero experimental noise with covariance \(\mathbf{\Xi}\). In this new experiment the noisy, observed long-term outcome \(\hat{\Delta}^{N}\) is inaccessible. We seek to find noisy proxy metrics whose TEs closely track the population TEs on the long-term outcome.
#### 2.1.2 Composite Proxy Quality in a New Experiment
The previous discussion on assessing the quality of a single proxy metric captures many of the important features behind our approach. However, in practice, we are often not restricted to picking a single proxy metric to approximate the long-term outcome. Rather, we are free to construct a **composite proxy metric** which is a convex combination of the TEs of a set of base proxy metrics to best predict the effect on the long-term outcome.
In our framework, the natural extension to the vector-valued setting takes a convex combination of the proxies \(\mathrm{corr}(\mathbf{w}^{\top}\mathbf{\hat{\Delta}}^{P},\Delta^{N})\) for a normalized weight vector \(\mathbf{w}\), instead of restricting ourselves a single proxy. However, beyond just defining the quality of a weighted proxy metric, we can also optimize for the quality of this weighted sum of base proxy metrics:
\[\max_{\mathbf{w}\in\mathbb{R}^{d}}\mathrm{corr}(\mathbf{w}^{\top}\mathbf{\hat {\Delta}}^{P},\Delta^{N}):\mathbf{1}^{\top}\mathbf{w}=1,\mathbf{w}\geq 0. \tag{3}\]
Again, the estimated TEs are unbiased estimators of their underlying latent population quantities, so we can parameterize \(\mathbf{\hat{\Delta}}^{P}=\mathbf{\Delta}^{P}+\left(\mathbf{\Xi}^{PP}\right)^ {1/2}\cdot\mathbf{\epsilon}\), where \(\mathbf{\epsilon}\) is an independent random zero-mean, identity-covariance random vector. So the objective expands to:
\[\max_{\mathbf{w}\in\mathbb{R}^{d}}\frac{1}{\sqrt{\mathrm{Var}(\Delta^{N})}} \frac{\mathbf{w}^{\top}\mathrm{Cov}(\Delta^{N},\mathbf{\Delta}^{P})}{\sqrt{ \mathbf{w}^{\top}(\mathrm{Cov}(\mathbf{\Delta}^{P},\mathbf{\Delta}^{P})+ \mathbf{\Xi}^{P,P})\mathbf{w}}}:\mathbf{1}^{\top}\mathbf{w}=1,\mathbf{w}\geq 0. \tag{4}\]
Essentially all considerations noted in the previous section translate to the vector-valued setting _mutatis mutandis_. In particular, the numerator in Equation (4) captures the alignment between the true latent weighted proxy and the population long-term outcome, while the denominator downweights the numerator by the effective noise in each particular experiment. As before we expect \(\mathbf{\Xi}^{P,P}\sim\frac{1}{n}\) with the sample size, \(n\), of the experiment. Hence, the optimal weights for a given experiment will adapt to the noise level (or equivalently sample size) of the experiment run.
The formulation in Equation (4) raises the question of how to efficiently compute \(\mathbf{w}\). Indeed, at first glance the optimization problem as phrased in Equation (4) is non-convex. Fortunately, the objective in Equation (4) (up to a constant pre-factor) maps exactly onto the Sharpe ratio (or reward-to-volatility ratio) maximization problem often encountered in portfolio optimization (Sharpe, 1966, 1998). As is well-known in the optimization literature, the program in Equation (4) can be converted to an equivalent (convex) quadratic programming problem which can be efficiently solved (Cornuéjols and Tütüncü, 2006, Section 8.2). We briefly detail this equivalence explicitly in Appendix B.
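A minimal sketch of the standard reformulation (cf. Appendix B) is shown below using cvxpy; it assumes some feasible weight vector has positive alignment with the long-term outcome, the effective covariance is positive semi-definite, and all names are illustrative rather than the authors' implementation.

```python
import numpy as np
import cvxpy as cp

def optimal_proxy_weights(cov_NP, cov_PP, xi_PP):
    """Maximize the composite proxy quality of Eq. (4) via the Sharpe-ratio QP.

    cov_NP : d-vector Cov(Delta^N, Delta^P); cov_PP, xi_PP : d x d matrices."""
    Sigma = cov_PP + xi_PP                      # effective "risk" matrix (assumed PSD)
    y = cp.Variable(len(cov_NP), nonneg=True)   # change of variables y = kappa * w
    prob = cp.Problem(cp.Minimize(cp.quad_form(y, Sigma)), [cov_NP @ y == 1])
    prob.solve()
    return y.value / y.value.sum()              # renormalise back onto the simplex
```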
The portfolio perspective also lends an additional interpretation to the objective in Equation (4). If we analogize each specific proxy metric as an asset to invest in, then \(\mathrm{Cov}(\Delta^{N},\mathbf{\Delta}^{P})\) is the returns vectors of our assets, and \(\mathrm{Cov}(\mathbf{\Delta}^{P},\mathbf{\Delta}^{P})+\mathbf{\Xi}^{P,P}\) is their effective covariance-so \(\mathbf{w}^{\top}(\mathrm{Cov}(\mathbf{\Delta}^{P},\mathbf{\Delta}^{P})+ \mathbf{\Xi}^{P,P})\mathbf{w}\) captures the risk of our portfolio of proxies. Just as in portfolio optimization, where two highly-correlated assets should not be over-invested in, if two proxy metrics are both strongly aligned with the long-term outcome, but are themselves very correlated, the objective in Equation (4) will not assign high weights to both of them.
### Estimation of Latent Parameters via a Hierarchical Model
As the last piece of our framework, we finally turn to the question of obtaining estimates of the unobservable latent quantities arising in Equation (4). While \(\mathbf{\Xi}^{PP}\) is easily estimated from the within-experiment sample covariance \(\mathbf{\hat{\Xi}}^{PP}\), the quantities \(\mathrm{Cov}(\Delta^{N},\mathbf{\Delta}^{P})\), \(\mathrm{Cov}(\mathbf{\Delta}^{P},\mathbf{\Delta}^{P})\) are tied to the latent, unobservable population TEs of the proxy metrics and long-term outcome.
In order to gain a handle on these quantities, we take a meta-analytic approach which combines two key pieces. First, as in our previous discussion, we require the setting described in Equation (1) - that is we assume the true population TEs are drawn i.i.d. from a common joint distribution. While only this assumption was needed for our previous discussion, we now introduce additional parametric structure in the form of an explicit generative model to allow for tractable estimation of the parameters \(\mathrm{Cov}(\Delta^{N},\mathbf{\Delta}^{P})\) and \(\mathrm{Cov}(\mathbf{\Delta}^{P},\mathbf{\Delta}^{P})\). Second, we assume access to a pool of homogeneous RCTs for which unbiased estimates of the TE on the short-term proxy metrics and long-term outcome are available (i.e. \((\hat{\Delta}^{N}_{i},\mathbf{\hat{\Delta}}^{P}_{i})\) for \(i\in\{1,\ldots,K\}\)). With these two pieces we can construct a hierarchical (or
linear mixed) model to estimate the latent parameters:
\[\begin{pmatrix}\Delta_{i}^{N}\\ \mathbf{\Delta}_{i}^{P}\end{pmatrix} \sim\mathrm{MVN}\left(\begin{pmatrix}\mu_{P}^{N}\\ \mathbf{\mu}^{P}\end{pmatrix},\mathbf{\Lambda}\right) \tag{5}\] \[\begin{pmatrix}\hat{\Delta}_{i}^{N}\\ \hat{\mathbf{\Delta}}_{i}^{P}\end{pmatrix}\mid\begin{pmatrix}\Delta_{i}^{N}\\ \mathbf{\Delta}_{i}^{P}\end{pmatrix} \sim\mathrm{MVN}\left(\begin{pmatrix}\Delta_{i}^{N}\\ \mathbf{\Delta}_{i}^{P}\end{pmatrix},\mathbf{\Xi}_{i}\right) \tag{6}\]
which due to the joint Gaussianity we can marginalize as:
\[\begin{pmatrix}\hat{\Delta}_{i}^{N}\\ \hat{\mathbf{\Delta}}_{i}^{P}\end{pmatrix}\sim\mathrm{MVN}\left(\begin{pmatrix} \mu^{N}\\ \mathbf{\mu}^{P}\end{pmatrix},\mathbf{\Sigma}_{i}:=\mathbf{\Lambda}+\mathbf{\Xi}_{i}\right), \quad\forall i\in[K]. \tag{7}\]
We use the notation \(\mathbf{\Lambda}\) to capture the latent covariance of the joint distribution \(\mathcal{D}(\cdot)\) which we parameterize by a multivariate normal (i.e. MVN). So in our case, \(\mathrm{Cov}(\Delta^{N},\mathbf{\Delta}^{P})=\mathbf{\Lambda}^{NP}\) and \(\mathrm{Cov}(\mathbf{\Delta}^{P},\mathbf{\Delta}^{P})=\mathbf{\Lambda}^{PP}\) for \(\begin{pmatrix}\Delta^{N}&\mathbf{\Delta}^{P}\end{pmatrix}^{\top}\sim\mathcal{D}(\cdot)\). Moreover, for the purposes of inference we simply use the plug-in estimate \(\hat{\mathbf{\Xi}}\approx\mathbf{\Xi}\) which is routinely done in similar hierarchical modeling approaches (Gelman et al., 1995). While using multivariate normality in Equation (5) is an assumption (albeit we believe reasonable in our case), it is not essential to the content of our results. Our proxy quality definition relies only on inferring low-order moments of \(\mathcal{D}(\cdot)\) for which this parametric structure is convenient. The second approximation that the noisy TEs are multivariate normal around their true latent values (Equation (6)) is well-justified by the central limit theorem in our case, since the experiments we consider all have at least \(O(10^{5})\) treatment units. Since inference in this model is not closed-form, we implement the aforementioned generative model in the open-source probabilistic programming language NumPyro(Phan et al., 2019) to extract the latent parameters. Additional details on the inference procedure are deferred to Appendix C.
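A sketch of such a hierarchical model in NumPyro is given below; the marginalised likelihood follows Equation (7), while the priors on the mean and on \(\mathbf{\Lambda}\) (half-normal scales with an LKJ correlation) are illustrative choices of this sketch rather than the ones used in the paper.

```python
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS
from jax import random

def te_denoising_model(delta_hat, Xi):
    """Marginalised hierarchical model of Eq. (7).

    delta_hat : (K, p) observed TEs per experiment, p = d + 1
    Xi        : (K, p, p) within-experiment covariance estimates (plug-in)"""
    K, p = delta_hat.shape
    mu = numpyro.sample("mu", dist.Normal(0.0, 1.0).expand([p]).to_event(1))
    scales = numpyro.sample("scales", dist.HalfNormal(1.0).expand([p]).to_event(1))
    corr_chol = numpyro.sample("corr_chol", dist.LKJCholesky(p, concentration=1.0))
    Lambda_chol = scales[:, None] * corr_chol
    Lambda = numpyro.deterministic("Lambda", Lambda_chol @ Lambda_chol.T)
    with numpyro.plate("experiments", K):
        numpyro.sample("obs",
                       dist.MultivariateNormal(mu, covariance_matrix=Lambda + Xi),
                       obs=delta_hat)

# Example usage (data arrays assumed to be available as jax/numpy arrays):
# mcmc = MCMC(NUTS(te_denoising_model), num_warmup=1000, num_samples=1000)
# mcmc.run(random.PRNGKey(0), delta_hat=delta_hat, Xi=Xi)
```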
Figure 2 provides an example where the inference procedure is used to extract the latent population variation in a synthetically generated dataset. Although this example is synthetic (and exaggerated), empirically in our corpus we observe many base proxy metrics with correlations to the long-term outcome of \(\sim 0.6\) in their experimental noise matrix \(\hat{\mathbf{\Xi}}\). Thus, the example shows a case where the raw correlation may provide an over-optimistic estimate of the underlying alignment between a proxy metric and long-term outcome. The denoising model we fit helps mitigate the impact of correlated within-experiment noise in our setting. We schematically detail the end-to-end algorithm which composes the denoising model fit and portfolio optimization to construct a proxy for a new A/B test in Algorithm 1.
Figure 2: The panel visualizes the denoising effect of fitting a hierarchical model to raw TEs to uncover their latent variation on synthetic data. We generate 1500 synthetic datapoints sampled from the model in Equation (5) with one proxy metric. Each datapoint represents a synthetic TE measurement from a single A/B test. We use parameters with \(\begin{pmatrix}\mu^{N}\\ \mu^{P}\end{pmatrix}=\begin{pmatrix}0.0\\ 0.0\end{pmatrix}\), \(\mathbf{\Lambda}=.01\cdot\begin{pmatrix}1&0.2\\ 0.2&1\end{pmatrix}\) to generate data in Figure 2(a). We add Gaussian noise with covariance \(\mathbf{\Xi}=.02\cdot\begin{pmatrix}1&0.7\\ 0.7&1\end{pmatrix}\) to them in Figure 2(b). Finally, we fit the generative model to the observed data in Equation (7) using the within-experiment covariances \(\mathbf{\Xi}\) in Figure 2(c). Figure 2(c) illustrates how the hierarchical model denoises the raw observed TEs to disentangle the latent variation in the population from the experimental noise in each synthetic A/B test.
Lastly, with the generative model in Equations (5) and (6) in place, we can provide an alternative interpretation of our definition of composite proxy quality. Under the condition that \(\begin{pmatrix}\mu^{N}\\ \boldsymbol{\mu}^{P}\end{pmatrix}=\boldsymbol{0}\), the probability of a signed alignment (or, equivalently, the complement of a surrogate paradox (Elliott et al., 2015)) can be simplified to
\[\mathbb{P}(\Delta^{N}>0,\mathbf{w}^{\top}\hat{\boldsymbol{\Delta}}^{\mathbf{P} }>0)=\mathbb{P}(\Delta^{N}<0,\mathbf{w}^{\top}\hat{\boldsymbol{\Delta}}^{ \mathbf{P}}<0)=\frac{1}{4}+\frac{\sin^{-1}(\rho)}{2\pi} \tag{8}\]
for \(\rho=\frac{1}{\sqrt{\Lambda^{NN}}}\frac{\mathbf{w}^{\top}\boldsymbol{\Lambda}^{NP}}{\sqrt{\mathbf{w}^{\top}(\boldsymbol{\Lambda}^{PP}+\boldsymbol{\Xi}^{PP})\mathbf{w}}}\). This computation relies on the joint Gaussianity of the model and the centering condition to explicitly compute the alignment probability. Given that \(\rho\) is our definition of the vector-valued proxy quality and the inverse-sine function is monotone increasing, maximizing the proxy quality over \(\mathbf{w}\) can be interpreted as minimizing the probability of a signed decision error in this setting.
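Equation (8) is the classical positive-orthant probability for a centered bivariate normal with correlation \(\rho\). A quick Monte Carlo check of the identity (our own sketch, not part of the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.6  # correlation between the north star TE and the weighted proxy TE

cov = np.array([[1.0, rho], [rho, 1.0]])
samples = rng.multivariate_normal(np.zeros(2), cov, size=1_000_000)

monte_carlo = np.mean((samples[:, 0] > 0) & (samples[:, 1] > 0))
closed_form = 0.25 + np.arcsin(rho) / (2 * np.pi)
print(monte_carlo, closed_form)  # both approximately 0.352
```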
## 3 Results
In this section, we turn to evaluating the performance of our composite proxy procedure against several baselines. As raw proxy metrics to consider in our evaluations we use a small set of 3 hand-selected proxy metrics which capture different properties domain experts believe are relevant to long-term user satisfaction in our setting. We first highlight a unique feature of our proxy procedure - its adaptivity to the sample size of the experiment for which it will be applied. We then perform a comparison of our new proxy procedure against the raw proxy metrics and a baseline procedure (Richardson et al., 2023) appearing in the literature.
### Proxy Quality and Sample Size Dependence
One unique feature of our procedure is its adaptivity to the noise level (or, effectively, the sample size) of the experiment; recall from Equation (4) that the optimal weights depend not only on the latent parameters, which are inferred from the pool of homogeneous RCTs on which they are fit, but also on the experiment noise estimate \(\hat{\boldsymbol{\Xi}}\), which depends on the new A/B test it is to be used for. While in practice one could recompute a proxy metric from the _a posteriori_ results of each A/B test (so that \(\hat{\boldsymbol{\Xi}}\) is known), it is also desirable to be able to fit a proxy for each A/B test _a priori_, without knowledge of its results. To do so, we found that in our application \(\hat{\boldsymbol{\Xi}}_{i}^{PP}\) could be estimated with reasonable accuracy purely on the basis of historical data of _other A/B tests_ in our population of experiments by postulating a scaling of the form \(\boldsymbol{\Xi}_{i}^{PP}=\boldsymbol{\Xi}_{\mathrm{ref}}^{PP}/n_{i}\). Here the reference matrix \(\boldsymbol{\Xi}_{\mathrm{ref}}^{PP}\) can be thought of as the within-experiment variance of an A/B test in the population with one sample. The ansatz \(\hat{\boldsymbol{\Xi}}_{i}^{PP}=\boldsymbol{\Xi}_{\mathrm{ref}}^{PP}/n_{i}\) combines two observations. The first is that the variance of a TE estimate decays as \(\sim\frac{1}{n_{i}}\) in the number of treatment units \(n_{i}\), which is immediate from the independence of treatment units. The second is that the constant prefactor in the variance, \(\boldsymbol{\Xi}_{\mathrm{ref}}^{PP}\), is approximately the same across different A/B tests in our corpus. The reference matrix \(\boldsymbol{\Xi}_{\mathrm{ref}}^{PP}\) can then be estimated as a weighted average of the \(\hat{\boldsymbol{\Xi}}_{i}\) from the corpus. Additional details and verification of these hypotheses are provided in Appendix A. The upshot of this approach is that the computation of the optimal weights in Equation (4) for a new A/B test can be done using only the sample size \(n_{i}\) of this new experiment (i.e., before the new experiment is "run").
To understand the dependence of our new composite proxy's weighting on the experiment sample size, we fit the latent parameters \(\boldsymbol{\Lambda}\) (from the hierarchical model in Equations (5) and (6)) and \(\boldsymbol{\Xi}_{\mathrm{ref}}^{PP}\) on the entire corpus for the results in Figure 3. We then use the scaling \(\boldsymbol{\Xi}=\boldsymbol{\Xi}_{\mathrm{ref}}^{PP}/n\) to estimate the optimal weights of our new composite proxy from Equation (4) for different sample sizes \(n\) of a hypothetical new A/B test. Figure 3 shows how, as the sample size increases,
the new composite proxy smoothly increases its weighting on raw metrics which are noisier but more strongly correlated with the long term outcome. Moreover, while the Auxiliary Metric 3 is an intuitively reasonable metric, its value as determined by the measure of proxy quality is dominated by a mixture of the other two components.
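A minimal sketch of how such a sweep can be computed (our own illustration; `Lam_NP`, `Lam_PP`, and `Xi_ref` stand in for the fitted latent parameters and reference noise matrix, and the toy values below are hypothetical, not the fitted ones). Since \(\rho\) is invariant to rescaling of \(\mathbf{w}\), we maximize it directly over the simplex with a generic solver rather than via the quadratic program of Appendix B:

```python
import numpy as np
from scipy.optimize import minimize

def optimal_weights(Lam_NP, Lam_PP, Xi_ref, n):
    """Maximize rho(w) = (w @ Lam_NP) / sqrt(w' (Lam_PP + Xi_ref / n) w) over the simplex."""
    Sigma = Lam_PP + Xi_ref / n
    d = len(Lam_NP)

    def neg_rho(w):
        return -(w @ Lam_NP) / np.sqrt(w @ Sigma @ w)

    res = minimize(
        neg_rho,
        x0=np.full(d, 1.0 / d),
        method="SLSQP",
        bounds=[(0.0, 1.0)] * d,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return res.x

# Toy sweep: metric 1 is most aligned with the north star but also the noisiest.
Lam_NP = np.array([0.8, 0.5, 0.2])
Lam_PP = np.eye(3)
Xi_ref = np.diag([50.0, 10.0, 1.0])
for n in [10, 100, 1_000, 10_000]:
    print(n, np.round(optimal_weights(Lam_NP, Lam_PP, Xi_ref, n), 3))
```

As the sample size grows, the printed weights shift toward the more aligned but noisier metric, mirroring the bias-variance trade-off shown in Figure 3.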
### Held-out Evaluation of Proxy Procedures
The primary difficulty of this evaluation is that in TE estimation there is a lack of ground-truth "labels" for the treatment effect (i.e., in our framework the population latent TEs such as \(\Delta^{N}\) are never observed). However, in our setting we do have access to a large corpus of 307 A/B tests as noted earlier. Hence, we use held-out/cross-validated evaluations of certain criteria that depend on the noisy metrics aggregated over an evaluation set to gauge the performance of proxy metrics fit on a training set.
We consider several relevant criteria for performance which have been used in the literature. Two important measures which appear in (Richardson et al., 2023) are the proxy score and sensitivity. To define the criteria, recall that a TE metric is often used to make a downstream decision by thresholding its t-statistic, \(\mathrm{tstat}=\frac{\hat{\Delta}}{\sqrt{\mathrm{Var}(\hat{\Delta})}}\), as \(\mathrm{tstat}>2\rightarrow+\) (positive), \(-2<\mathrm{tstat}<2\to 0\) (neutral), and \(\mathrm{tstat}<-2\rightarrow-\) (negative). Given a corpus of A/B tests we can then compute the decisions induced by a short-term proxy metric and the decisions induced by the long-term outcome, and check the number of A/B tests for which they align. After normalization, the number of detections (both decisions are significant and in the same direction) minus the number of mistakes (one decision is positive while the other is negative) defines the proxy score. Similarly, for a short-term proxy metric we can compute its sensitivity - the number of times it triggers a statistically significant decision by being positive or negative. Loosely speaking, these two criteria function like the notions of precision and recall in information retrieval. Ideally, a short-term metric would maximize both quantities by being sensitive and triggering often (so as to not miss any A/B tests where the TE for the long-term outcome is significant) but not over-triggering and leading to many false positives (or negatives). Additional details on these metrics are provided in Appendix D. As another measure of performance, we also compute and
Figure 3: The optimal weighting dependence on sample size for our new composite proxy, represents a bias-variance trade-off. For large sample sizes the weighting favors potentially noisier metrics that are more aligned with the long term outcome. However, for smaller sample sizes the optimal weighting backs off to metrics which are less noisy but also less aligned to the long term outcome.
report our definition of our composite proxy quality for a given composite proxy's learned weights.
## Appendix A Within-Experiment Covariance Scaling
As we noted before, one interesting feature of our composite proxy quality procedure is its dependence on the noise level \(\mathbf{\Xi}\) of the randomized experiment to which it will be applied. Recall from Equation (4) that the optimal weights depend not only on the latent parameters, which are intrinsic properties of \(\mathcal{D}(\cdot)\), but also on the experiment noise estimate \(\hat{\mathbf{\Xi}}\), which depends on the particular A/B test to which it is applied.
While in practice, one could recompute a composite proxy metric after an A/B test is run (so \(\hat{\mathbf{\Xi}}\) is known), in many applications it is also desirable to be able to fit weights for a composite proxy metric before each A/B test is run. In order to do this we found that we could build a simple predictive model for the experimental noise level \(\mathbf{\Xi}_{i}^{PP}\) in a given experiment on the basis of historical data of _other A/B tests_ in our population of experiments. We did so by making an ansatz of the form \(\mathbf{\Xi}_{i}^{PP}=\mathbf{\Xi}_{\mathrm{ref}}^{PP}/n_{i}\), where \(\mathbf{\Xi}_{\mathrm{ref}}^{PP}\) can be thought of as the within experiment variance of an A/B test in the population with one sample. This ansatz follows from two facts. The first is that the variance of a TE estimate decays as \(\sim\frac{1}{n_{i}}\) in the number of treatment units \(n_{i}\), which is immediate from the independence of treatment units. However, the second - that the constant prefactor in the variance \(\mathbf{\Xi}_{\mathrm{ref}}^{PP}\) is approximately the same across different A/B tests - is an empirical observation due to underlying homogeneity in the population of A/B tests. This approximate homogeneity is evidenced in Figure 4.
On the basis of this ansatz for each A/B test,
\[\mathbf{\Xi}_{i}=\frac{\mathbf{\Xi}_{\mathrm{ref}}}{n_{i}},\quad\forall i\in\{1,\dots,K\}\]
we can construct an estimator for the reference constant matrix \(\mathbf{\Xi}_{\mathrm{ref}}\) as,
\[\hat{\mathbf{\Xi}}_{\mathrm{ref}}=\sum_{i=1}^{K}\gamma_{i}n_{i}\hat{ \mathbf{\Xi}}_{i} \tag{9}\]
for some set of convex weights \(\gamma_{i}\). While one could use an equal weighting scheme with \(\gamma_{i}=\frac{1}{K}\), the sample variance estimates \(\hat{\mathbf{\Xi}}_{i}\) are themselves noisy, so we instead use a precision-weighted combination of them to reduce variance, taking \(\gamma_{i}\propto n_{i}\).
With this estimate \(\hat{\mathbf{\Xi}}_{\mathrm{ref}}\) in hand, for a new \((K+1)\)st A/B test with sample size \(n_{K+1}\) we can approximate its within-experiment covariance as
\[\hat{\mathbf{\Xi}}_{K+1}\approx\frac{\hat{\mathbf{\Xi}}_{\mathrm{ref}}}{ n_{K+1}}\]
Figure 4: Both displays show the within-experiment marginal sample variance (blue dots) for two different metrics computed across 307 different A/B tests and their corresponding power-law fit (red line). Despite the underlying A/B tests being different, we found that the variances were reasonably well modeled by a single inverse-power law with the same constant prefactor over the entire population.
under the implicit homogeneity assumption we make.
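A small sketch of the precision-weighted estimator in Equation (9) and of the resulting noise prediction for a new experiment (our own code; `Xi_hat` is a list of per-experiment sample covariance matrices \(\hat{\mathbf{\Xi}}_{i}\) and `n` the corresponding sample sizes):

```python
import numpy as np

def estimate_xi_ref(Xi_hat, n):
    """Precision-weighted estimate of Xi_ref with convex weights gamma_i proportional to n_i."""
    n = np.asarray(n, dtype=float)
    gamma = n / n.sum()
    # Eq. (9): weighted average of n_i * Xi_hat_i
    return sum(g * n_i * Xi_i for g, n_i, Xi_i in zip(gamma, n, Xi_hat))

def predict_xi(Xi_ref, n_new):
    """A priori prediction of the within-experiment covariance for a new A/B test."""
    return Xi_ref / n_new
```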
## Appendix B Sharpe Ratio and Portfolio Optimization
Under the mild condition (which is always satisfied for us) that at least one element of \(\mathbf{\Lambda}^{NP}\) is positive, we can transform the Sharpe ratio optimization objective into an equivalent convex quadratic program:
\[\min_{\mathbf{\mathbf{x}}\in\mathbb{R}^{d}}\mathbf{\mathbf{x}}^{\top}\mathbf{ \mathbf{\Sigma}}\mathbf{\mathbf{x}}\quad:\quad\mathbf{\mathbf{x}}\geq 0,\quad\mathbf{ \mathbf{r}}^{\top}\mathbf{\mathbf{x}}=1. \tag{10}\]
where \(\mathbf{\mathbf{\Sigma}}=\mathbf{\Lambda}^{PP}+\hat{\mathbf{\Xi}}^{PP}\) and \(\mathbf{\mathbf{r}}=\mathbf{\Lambda}^{NP}\). The solution to the original problem in Equation (4) can be recovered by normalizing as \(\mathbf{\mathbf{w}}=\frac{\mathbf{\mathbf{x}}}{\|\mathbf{\mathbf{x}}\|_{1}}\). The details of this standard transformation can be found in [10, Section 8.2], although the original reduction is a generalization of the Charnes-Cooper transformation [16] which dates back to at least [14].
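For reference, a minimal sketch of this reformulation using CVXPY (our own illustration; the inputs are assumed to be already-estimated latent and noise matrices, with \(\mathbf{\Lambda}^{PP}+\hat{\mathbf{\Xi}}^{PP}\) symmetric positive semidefinite):

```python
import numpy as np
import cvxpy as cp

def sharpe_optimal_weights(Lam_NP, Lam_PP, Xi_PP):
    """Solve the convex QP in Eq. (10) and recover w by L1 normalization."""
    Sigma = Lam_PP + Xi_PP
    r = np.asarray(Lam_NP, dtype=float)
    d = len(r)

    x = cp.Variable(d)
    problem = cp.Problem(cp.Minimize(cp.quad_form(x, Sigma)), [x >= 0, r @ x == 1])
    problem.solve()

    x_opt = x.value
    return x_opt / np.abs(x_opt).sum()  # w = x / ||x||_1
```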
## Appendix C Inference in Hierarchical Model
In order to extract estimates of the latent parameters we perform full Bayesian inference over the generative model in Equation (7) using NumPyro (Phan et al., 2019), which uses the NUTS sampler to perform MCMC on the posterior. We found Bayesian inference to be more stable than estimating the MLE of the model. We augmented the generative model in Equation (7) with the weak priors:
\[\mathbf{\mu} \sim\mathcal{N}(0,1000\cdot\text{meanscale}\cdot\mathbf{\mathbf{I}}_{ d+1})\] \[\mathbf{\mathbf{s}} \sim 3\cdot\text{Half-Cauchy}(\text{devscale})\] \[\mathbf{\mathbf{C}} \sim\text{LKJ}(\text{concentration}=1)\] \[\mathbf{\Lambda} =\sqrt{\mathbf{\mathbf{s}}}^{\top}\circ\mathbf{\mathbf{C}}\circ\sqrt{ \mathbf{\mathbf{s}}}\]
where we use the operator \(\circ\) to denote coordinatewise broadcasted multiplication. Here the vector-valued parameters meanscale and devscale are set to match the overall scales of the raw mean and raw covariance of the corpus A/B tests. We found the overall inferences to be robust to the choice of scales in the Half-Cauchy prior on the pooled variance parameter and in the normal prior on the mean, which are both weakly-informative. [10] and [17] both argue for the use of the Half-Cauchy prior for the top-level scale parameter in hierarchical linear models, as opposed to the more traditional Inverse-Wishart prior, on both empirical and theoretical grounds. The LKJ prior with concentration parameter set to 1 is essentially a uniform prior over the space of correlation matrices [10, 11].
Inference in this model was performed using the default configuration of the NUTS sampler in NumPyro [20]. We also found it useful to initialize the parameters \((\mu^{N},\mathbf{\mu}^{P})\) and \(\mathbf{\Lambda}\) to the scales of the raw mean and raw covariance of the corpus A/B tests. We diagnosed convergence and mixing of the NUTS sampler using standard diagnostics such as the r-hat statistic [10]. In all our experiments we found the sampler mixed efficiently, and we achieved a perfect r-hat statistic of 1.0 for all parameters. For each MCMC run we generated 10000 burn-in samples and 50000 MCMC samples for 4 parallel chains. We used the posterior means of the samples to extract estimates of \(\mathbf{\Lambda}\) for use in our proxy quality score.
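A condensed sketch of how this model can be expressed in NumPyro (our own reconstruction from the description above; prior scales are simplified and variable names are ours, with `obs` the \(K\times(d+1)\) array of observed TEs and `Xi` the stacked per-experiment covariance estimates):

```python
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def model(obs, Xi, mean_scale=1.0, dev_scale=1.0):
    K, dim = obs.shape
    # Weakly-informative priors on the latent population mean and covariance
    mu = numpyro.sample("mu", dist.Normal(0.0, 1000.0 * mean_scale).expand([dim]).to_event(1))
    s = numpyro.sample("s", dist.HalfCauchy(3.0 * dev_scale).expand([dim]).to_event(1))
    C = numpyro.sample("C", dist.LKJ(dim, concentration=1.0))
    Lam = jnp.sqrt(s)[:, None] * C * jnp.sqrt(s)[None, :]
    numpyro.deterministic("Lam", Lam)
    # Marginally, each observed TE vector is MVN(mu, Lam + Xi_i), as in Eq. (7)
    with numpyro.plate("experiments", K):
        numpyro.sample("y", dist.MultivariateNormal(mu, covariance_matrix=Lam + Xi), obs=obs)

# Example run (shapes: obs is (K, d+1), Xi is (K, d+1, d+1)):
# mcmc = MCMC(NUTS(model), num_warmup=10_000, num_samples=50_000, num_chains=4)
# mcmc.run(random.PRNGKey(0), obs=obs, Xi=Xi)
# Lam_hat = mcmc.get_samples()["Lam"].mean(axis=0)
```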
## Appendix D Proxy Score and Sensitivity
Here we explain several performance criteria we use for proxy metrics, which are further detailed in the literature [10]. In order to define both quantities, recall that a TE metric is often used to make a downstream decision by thresholding its t-statistic, \(\mathrm{tstat}=\frac{\hat{\Delta}}{\sqrt{\mathrm{Var}(\hat{\Delta})}}\), as \(\mathrm{tstat}>2\rightarrow+\) (positive), \(-2<\mathrm{tstat}<2\to 0\) (neutral), and \(\mathrm{tstat}<-2\rightarrow-\) (negative). The proxy score and sensitivity are most easily defined in the context of the contingency table visualized in Figure 5, which takes these decisions as inputs. The contingency table tabulates the decisions induced by a particular observed short-term proxy metric and the observed long-term north star metric jointly over 554 A/B tests.
The green cells in Figure 5 represent cases where the proxy and north star are both statistically significant and move in the same direction (i.e. _Detections_). The red cells in Figure 5 again represent cases where the proxy and north star are both statistically significant, but where proxy and north star are misaligned (i.e. _Mistakes_). The remaining cells correspond to cases where at least one of the metrics is not statistically significant. The relative importance of these cells is more ambiguous.
In this setting, the sensitivity can be defined as:
\[\text{Metric Sensitivity}=\frac{\text{Number of experiments where the proxy is significant}}{\text{Number of total experiments}}.\]
Here the numerator can be obtained by summing over the first and last rows of the table. The proxy score can be defined as,
\[\text{Proxy Score}=\frac{\text{Detections -- Mistakes}}{\text{Number of experiments where the north star is significant}}.\]
The denominator here can be obtained by summing over the first and last columns. The sensitivity metric captures the ability of a metric to detect a statistically significant effect, which takes into account its inherent moveability and susceptibility to experimental noise. Given that north star metrics are often noisy and slow to react in the short term, the goal of a proxy is to be sensitive.
The proxy score rewards metrics that are both sensitive and directionally aligned with the north star. Sensitive metrics need only populate the first and third rows of the contingency table. However, metrics in the first and third rows can only increase the proxy score if they are in the same direction as the long-term north star. A similar score, called _Label Agreement_, has been used by Dmitriev and Wu (2016).
|
2309.00124 | Large-Scale Geometry of Pure Mapping Class Groups of Infinite-Type
Surfaces | The work of Mann and Rafi gives a classification of surfaces $\Sigma$ when
$\textrm{Map}(\Sigma)$ is globally CB, locally CB, and CB generated under the
technical assumption of tameness. In this article, we restrict our study to the
pure mapping class group and give a complete classification without additional
assumptions. In stark contrast with the rich class of examples of Mann--Rafi,
we prove that $\textrm{PMap}(\Sigma)$ is globally CB if and only if $\Sigma$ is
the Loch Ness monster surface, and locally CB or CB generated if and only if
$\Sigma$ has finitely many ends and is not a Loch Ness monster surface with
(nonzero) punctures. | Thomas Hill | 2023-08-31T20:24:15Z | http://arxiv.org/abs/2309.00124v1 | # Large-scale geometry of pure mapping class groups of infinite-type surfaces
###### Abstract.
The work of Mann and Rafi gives a classification of surfaces \(\Sigma\) when \(\operatorname{Map}(\Sigma)\) is globally CB, locally CB, and CB generated under the technical assumption of _tameness._ In this article, we restrict our study to the pure mapping class group and give a complete classification without additional assumptions. In stark contrast with the rich class of examples of Mann-Rafi, we prove that \(\operatorname{PMap}(\Sigma)\) is globally CB if and only if \(\Sigma\) is the Loch Ness monster surface, and locally CB or CB generated if and only if \(\Sigma\) has finitely many ends and is not a Loch Ness monster surface with (nonzero) punctures.
## 1. Introduction
The mapping class group of a surface \(\Sigma\) is the group of orientation-preserving homeomorphisms up to isotopy and is denoted by \(\operatorname{Map}(\Sigma)\). The subgroup that fixes the ends of the surface pointwise is called the pure mapping class group and is denoted by \(\operatorname{PMap}(\Sigma)\). These groups form a short exact sequence with the set of homeomorphisms of the end space:
\[1\to\operatorname{PMap}(\Sigma)\to\operatorname{Map}(\Sigma)\to\operatorname{ Homeo}(E(\Sigma),E_{G}(\Sigma))\to 1\]
From this sequence, we see that understanding \(\operatorname{Map}(\Sigma)\) involves understanding two components: \(\operatorname{Homeo}(E(\Sigma),E_{G}(\Sigma))\) thought of as the group of "external symmetries" of the surface, and \(\operatorname{PMap}(\Sigma)\) the group of "internal symmetries". The set of ends is homeomorphic to a subset of a Cantor set, rendering \(\operatorname{Homeo}(E(\Sigma),E_{G}(\Sigma))\) a highly complex object. On the other hand, there are many well-established results concerning pure mapping class groups of infinite-type surfaces (see [1],[12],[2]).
This article aims to describe the large-scale geometry of pure mapping class groups of infinite-type surfaces. Mapping class groups of _finite_-type surfaces have straightforward large-scale geometry. The Dehn-Lickorish theorem [13, see Chapter 4] establishes that the (pure) mapping class group of a finite-type surface is finitely generated. This finite generating set induces a word metric on \(\operatorname{Map}(\Sigma)\). Similarly, a different set of generators induces a (possibly) different word metric on \(\operatorname{Map}(\Sigma)\). The finiteness of both generating sets allows one to establish a quasi-isometry between these two metric spaces, thereby showing that \(\operatorname{Map}(\Sigma)\) possesses a well-defined _quasi-isometry type_.
Characterizing the large-scale geometry of big mapping class groups, specifically identifying whether they possess a well-defined quasi-isometry type and determining that type, presents a significantly greater challenge compared to mapping class groups of surfaces with finite type. This disparity arises because the conventional techniques of geometric group theory, applicable to finitely generated groups, extend to locally compact, compactly generated topological groups, but big mapping class groups do not fall into this category. Fortunately, Rosendal's work [14] provides a framework to extend the tools of geometric group theory to big mapping class groups and, more generally, to Polish groups. This framework replaces the compactness conditions with the notion of _locally coarsely bounded_ (abbreviated locally CB) and _CB-generated_ (i.e., generated by a coarsely bounded set).
Under the additional assumption of tameness, Mann and Rafi described the large-scale geometry of the full mapping class group by classifying which surfaces have a CB, locally CB, or CB-generated mapping class group [13]. Within their classification, there are infinitely many surfaces that have a CB mapping class group, and an even larger class of surfaces have a locally CB and CB-generated mapping class group. In contrast, we show that a relatively small class of surfaces have a CB, locally CB, or CB-generated _pure_ mapping class group. Furthermore, by restricting to the pure mapping class group, we give a CB classification of \(\mathrm{PMap}(\Sigma)\) without the additional tameness condition:
**Theorem 1.1**.: _Let \(\Sigma\) be an infinite-type surface. Then \(\mathrm{PMap}(\Sigma)\) is_
1. _globally CB if and only if_ \(\Sigma\) _is the Loch Ness monster surface;_
2. _locally CB if and only if_ \(|E(\Sigma)|<\infty\) _and_ \(\Sigma\) _is not a Loch Ness monster surface with (nonzero) punctures;_
3. _CB-generated if and only if_ \(\mathrm{PMap}(\Sigma)\) _is locally CB._
Many of the results of Mann-Rafi restrict nicely to the pure mapping class group and serve as tools to prove Theorem 1.1, including using nondisplaceable subsurfaces to disprove the CB-ness of the mapping class group. However, a fundamental difference in the setting of \(\mathrm{PMap}(\Sigma)\) is that a surface with at least three (and possibly infinitely many) ends always contains a nondisplaceable subsurface:
**Proposition 1.2**.: _Let \(\Sigma\) be an infinite-type surface, possibly with an infinite end space. If \(|E(\Sigma)|\geq 3\) then \(\Sigma\) contains a nondisplaceable subsurface with respect to \(\mathrm{PMap}(\Sigma)\)._
This proposition drastically reduces the possible candidates of surfaces with a CB pure mapping class group, and the remaining possibilities are considered case-by-case. Proposition 1.2 is also an important new tool used to classify locally CB and CB-generated pure mapping class groups.
### Acknowledgements
The author extends appreciation to Priyam Patel for her insightful project suggestion, invaluable feedback, and mentorship. The author also acknowledges Sanghoon Kwak for his extensive and insightful conversations, along with his meticulous comments and feedback on multiple
drafts of the paper. Appreciation also goes to Mladen Bestvina and Rebecca Rechkin for their helpful discussions that significantly contributed to the development of this project. The author acknowledges support from RTG DMS-1840190.
## 2. Preliminaries
### Surfaces and Mapping Class Groups
Surfaces are 2-dimensional manifolds, and they come in two flavors: finite-type surfaces and infinite-type surfaces. **Finite-type surfaces** are those with a finitely generated fundamental group. These surfaces are classified by their genus \(g\), number of boundary components \(b\), and number of punctures \(n\). On the other hand, those that do not have a finitely generated fundamental group are called **infinite-type surfaces**. For infinite-type surfaces, the triple \((g,b,n)\) does not contain enough information to distinguish between distinct surfaces. For example, the Loch Ness monster surface and the ladder surface both have \((g,b,n)=(\infty,0,0)\), see Figure 1. The classification of infinite-type surfaces also depends on the _end space_ of the surface. Roughly speaking, the end space represents the distinct ways to move toward infinity on the surface.
Put precisely, an **end** is an equivalence class of nested connected subsurfaces of \(S\), denoted by \([U_{1}\supset U_{2}\supset\cdots]\), such that each \(U_{i}\) has a compact boundary, and any compact subsurface \(K\subset S\) is disjoint from \(U_{n}\) for large enough \(n\). Two sequences \(U_{1}\supset U_{2}\supset\cdots\) and \(V_{1}\supset V_{2}\supset\cdots\) are equivalent if they are both eventually contained in one another; that is, if for any \(n\) there is an \(m\) such that \(U_{m}\subset V_{n}\) and \(V_{m}\subset U_{n}\). An end \([U_{1}\supset U_{2}\supset\cdots]\) is called **planar** if the \(U_{i}\) are all eventually homeomorphic to an open subset of \(\mathbb{R}^{2}\). If instead all \(U_{i}\) in the sequence have infinite genus, the end is said to be **accumulated by genus** or **non-planar**. The collection of all ends is a topological space called the **end space** of \(\Sigma\) and is denoted by \(E(\Sigma)\). The subset of ends accumulated by genus is denoted by \(E_{G}(\Sigma)\). As a topological space, the points of \(E(\Sigma)\) are ends \([U_{1}\supset U_{2}\supset\cdots]\), and open sets of \(E(\Sigma)\) can be thought of as subsurfaces of \(\Sigma\) with compact boundary. To be precise, the collection of all ends eventually contained in a subsurface \(U\subset\Sigma\) with compact boundary, forms a basis for the topology of \(E(\Sigma)\). An end \([U_{1}\supset U_{2}\supset\cdots]\) is said to be contained in the open set \(U\) if \(U_{i}\subset U\) for
Figure 1. The ladder surface (left) has two distinct ends accumulated by genus, while the Loch Ness monster surface (center) has only one. The Cantor tree surface (right) is a sphere with a Cantor set removed and has uncountably many planar ends.
large enough \(i\). With the additional data of the end space, infinite-type surfaces can be completely classified by the homeomorphism type of the tuple \((g,b,E(\Sigma),E_{G}(\Sigma))\), see [10]. We will only consider surfaces with empty boundary.
The group of orientation-preserving homeomorphisms up to isotopy is called the **mapping class group** and is denoted by \(\operatorname{Map}(\Sigma)\). The subgroup of \(\operatorname{Map}(\Sigma)\) that restricts to the identity on \(E(\Sigma)\) is called the **pure mapping class group** and is denoted by \(\operatorname{PMap}(\Sigma)\). Mapping class groups of infinite-type surfaces are often called _big_ mapping class groups.
### Large-Scale Geometry and Coarse Boundedness
Describing the large-scale geometry, or well-defined quasi-isometry type, of big mapping class groups is more challenging than for finite-type surfaces, as standard geometric group theory tools for finitely generated groups do not directly apply to big mapping class groups. Rosendal's work [14] introduces the notion of _locally coarsely bounded_ and _CB-generated_ groups, providing a framework to extend the tools of geometric group theory to a larger class of groups. This subsection aims to summarize Rosendal's key contributions.
**Definition 2.1**.: A subset \(A\) of a topological group \(G\) is **globally coarsely bounded** in \(G\), or simply **CB**, if for every continuous isometric action of \(G\) on a given metric space \(X\), the orbit \(A\cdot x\) is bounded for all \(x\in X\).
A topological group is **Polish** if it is separable and completely metrizable. Both \(\operatorname{Map}(\Sigma)\) and \(\operatorname{PMap}(\Sigma)\) are examples of Polish groups [1]. For Polish groups, Rosendal provides a useful alternative criterion for coarse boundedness:
**Fact 2.2** ([14, Prop. 2.7 (5)]).: _A subset \(A\) of a Polish group \(G\) is CB if and only if for every neighborhood of the identity \(\mathcal{V}\) there is a finite subset \(\mathcal{F}\subset G\) and \(k\geq 1\), so that \(A\subset(\mathcal{FV})^{k}\)._
In the remark below, we use Fact 2.2 to show that length functions on CB Polish groups are always bounded. This is key in the proof of Proposition 2.15, where the existence of an unbounded length function is used to disprove CB-ness. Recall that a **length function** on a group \(G\) is a continuous map \(\ell\colon G\to[0,\infty)\) that satisfies \(\ell(\operatorname{Id})=0\), \(\ell(g)=\ell(g^{-1})\), and \(\ell(gh)\leq\ell(g)+\ell(h)\) for all \(g,h\in G\).
**Remark 2.3**.: If \(G\) is a CB Polish group, then any length function \(\ell\colon G\to[0,\infty)\) is bounded. Indeed, take \(\mathcal{V}=B_{r}(\operatorname{Id})\), then by CB-ness of \(G\) there is a finite set \(\mathcal{F}\) and \(k\geq 0\) so that \(G\subset(\mathcal{F}\cdot B_{r}(\operatorname{Id}))^{k}\). This means
\[\ell(\varphi)\leq\left(\max_{f\in\mathcal{F}}\ell(f)+r\right)\cdot k,\]
for all \(\varphi\in G\).
**Definition 2.4**.: A group is **locally CB** if it has a CB neighborhood of the identity.
When discussing the (local) CB-ness of a given set, it is essential to keep track of the ambient group since it is possible that \(H\leq G\) might be CB in \(G\), but \(H\) might not be CB as a subset of itself. For instance, consider the following example.
**Example 2.5**.: Let \(\Sigma\) be a surface for which \(\operatorname{Map}(\Sigma)\) is CB, and let \(\gamma\) be a non-separating simple closed curve in \(\Sigma\). Consider the group generated by a Dehn twist about \(\gamma\), namely \(\langle T_{\gamma}\rangle\leq\operatorname{Map}(\Sigma)\). Because \(\operatorname{Map}(\Sigma)\) is assumed to be CB, any subset \(A\subseteq\operatorname{Map}(\Sigma)\), and in particular \(\langle T_{\gamma}\rangle\) must also be CB in \(\operatorname{Map}(\Sigma)\). Thus, \(\langle T_{\gamma}\rangle\) is CB in \(\operatorname{Map}(\Sigma)\), but \(\langle T_{\gamma}\rangle\) is not CB as a subset of itself since \(\langle T_{\gamma}\rangle\cong\mathbb{Z}\) and the continuous isometric action of \(\mathbb{Z}\) on \(\mathbb{R}\) by translations has unbounded orbits.
When \(G\) is a Polish group and \(H\leq G\) is a _finite index_ open Polish subgroup, this distinction can be blurred in light of the following fact (see [1, Proposition 2.20] for a proof).
**Fact 2.6** ([14, Proposition 5.67]).: _Let \(G\) be a Polish group and \(H\leq G\) a finite index open Polish subgroup. Then \(H\) is CB if and only if \(G\) is CB. Furthermore, \(H\) is coarsely embedded in \(G\); that is to say, for \(A\subset H\), \(A\) is CB in \(H\) if and only if \(A\) is CB in \(G\)._
**Definition 2.7**.: A group is **CB-generated** if it admits a CB generating set.
The relationship between CB, locally CB, and CB-generated Polish groups can be summarized by the following implications: (1) globally CB implies CB-generated, (2) CB-generated implies locally CB, and (3) globally CB implies locally CB.
When a group is CB, it is CB-generated since the whole group can be taken as the CB-generating set, so implication (1) holds. Because CB-generated groups have a well-defined quasi-isometry type by [14], this means in particular that globally CB groups have trivial quasi-isometry type. Implication (2) is a result of Rosendal that every CB-generated Polish group is locally CB [14, Theorem 1.2].
Of course, combining (1) and (2) results in implication (3); however, (3) can also be seen directly. If a group is globally CB, then it is locally CB since the whole group can be taken as the CB neighborhood of the identity.
### CB Classification of \(\operatorname{Map}(\Sigma)\)
This section will briefly summarize some of the results and tools of Mann-Rafi's CB classification of full mapping class groups [15].
**Theorem 2.8** ([15, Theorem 1.7]).: _Suppose that \(\Sigma\) is either tame or has a countable end space. Then \(\operatorname{Map}(\Sigma)\) is CB if and only if \(\Sigma\) has infinite or zero genus and \(E(\Sigma)\) is self-similar or telescoping._
An end space \(E(\Sigma)\) is self-similar if for any decomposition \(E(\Sigma)=E_{1}\sqcup E_{2}\sqcup\cdots\sqcup E_{n}\) there is a clopen subset \(D\subset E_{i}\) for some \(i\), such that \((D,D\cap E_{G}(\Sigma))\cong(E(\Sigma),E_{G}(\Sigma))\). Telescoping is a variation of the self-similar condition (see [12, Definition 3.3]).
**Remark 2.9**.: A telescoping surface necessarily has infinitely many ends (see [12, Proposition 3.7]). Likewise, self-similar end spaces contain either a single end or infinitely many ends.
The definition of tameness mentioned above in Theorem 2.8 is a technical condition about the 'local self-similarity' of certain ends of a surface. We avoid discussing this condition since our main focus will be on surfaces with a finite end space, and tameness will always be satisfied for those surfaces.
**Remark 2.10**.: If \(|E(\Sigma)|<\infty\), then \(\Sigma\) is tame.
**Theorem 2.11** ([12, Theorem 5.7] ).: Map\((\Sigma)\) _is locally CB if and only if there is a finite-type subsurface \(K\subset\Sigma\) (possibly empty) such that the connected components of \(\Sigma\setminus K\) have infinite-type and either 0 or infinite genus, and partition \(E(\Sigma)\) into finitely many clopen sets_
\[E(\Sigma)=\left(\bigsqcup_{A\in\mathcal{A}}A\right)\sqcup\left(\bigsqcup_{P \in\mathcal{P}}P\right)\]
_such that:_
1. _Each_ \(A\in\mathcal{A}\) _is self-similar,_ \(\mathcal{M}(A)\subset\mathcal{M}(E)\)_, and_ \(\mathcal{M}(E)=\bigsqcup_{A\in\mathcal{A}}\mathcal{M}(A)\)_._
2. _Each_ \(P\in\mathcal{P}\) _is homeomorphic to a clopen subset of some_ \(A\in\mathcal{A}\)_._
3. _For any_ \(x_{A}\in\mathcal{M}(A)\)_, and any neighborhood_ \(V\) _of the end_ \(x_{A}\) _in_ \(\Sigma\)_, there is_ \(f_{V}\in\mathrm{Map}(\Sigma)\) _so that_ \(f_{V}(V)\) _contains the complementary component to_ \(K\) _with end space_ \(A\)_._
_Moreover, the set of mapping classes restricting to the identity on \(K\), denoted by \(\mathcal{V}_{K}\), is a CB neighborhood of the identity. Additionally, \(K\) can always be taken to have genus 0 if \(\Sigma\) has infinite genus and genus equal to \(\Sigma\) otherwise. If \(\Sigma\) has finitely many isolated planar ends, these may be taken as punctures of \(K\)._
**Example 2.12**.: The ladder surface is an example of a surface with a locally CB, but not a globally CB mapping class group. Because it has finitely many ends, the ladder is a tame surface by Remark 2.10. Since it has only two ends, the ladder surface is not self-similar or telescoping by Remark 2.9. However, the ladder surface is locally CB since it satisfies the modified telescoping condition in Theorem 2.11 (3).
**Theorem 2.13** ([12, Theorem 1.6] ).: _For a tame surface with a locally (but not globally) CB mapping class group, \(\mathrm{Map}(\Sigma)\) is CB generated if and only if \(E(\Sigma)\) is finite-rank and not of limit type._
See [12, Definition 6.2 and 6.5] for the definitions of finite rank and limit type, respectively. For our purpose, it is relevant that both the infinite
rank and limit type conditions require the space to have an infinite number of ends.
### Nondisplaceable Subsurfaces
For two subsurfaces \(S\) and \(S^{\prime}\), we say \(S\cap S^{\prime}\neq\emptyset\) if every subsurface homotopic to \(S\) intersects every subsurface homotopic to \(S^{\prime}\).
**Definition 2.14**.: A connected finite-type subsurface \(S\subset\Sigma\) is called **nondisplaceable** with respect to the group \(G\leq\operatorname{Map}(\Sigma)\) if \(g(S)\cap S\neq\emptyset\) for all \(g\in G\).
Using the presence of a nondisplaceable subsurface in \(\Sigma\) to conclude that \(\operatorname{Map}(\Sigma)\) is not CB is one of the key tools of Mann-Rafi. This is also important in disproving the CB-ness of \(\operatorname{PMap}(\Sigma)\) below.
**Proposition 2.15** (cf. [16, Theorem 1.9]).: _If \(\Sigma\) has a nondisplaceable subsurface with respect to \(\operatorname{PMap}(\Sigma)\), then \(\operatorname{PMap}(\Sigma)\) is not globally CB._
The proof of Proposition 2.15 follows exactly as in Mann-Rafi, but with \(\operatorname{PMap}(\Sigma)\) instead of \(\operatorname{Map}(\Sigma)\). Below, we will give a proof sketch of their argument and justify the replacement of \(\operatorname{Map}(\Sigma)\) with \(\operatorname{PMap}(\Sigma)\). The proof aims to exploit the unboundedness of the action of pseudo-Anosov elements on the curve complex of a finite-type subsurface proven by Masur-Minsky [14]. Curve graphs and subsurface projections will play a role in the proof.
The **curve graph** of a surface \(\Sigma\), denoted by \(\mathcal{C}(\Sigma)\) is the graph whose vertices correspond with the isotopy classes of simple closed curves in \(\Sigma\) where two vertices \(\alpha\) and \(\beta\) are connected by an edge if \(\alpha\) and \(\beta\) can be realized disjointly. The curve graph can be equipped with the path metric \(d_{\Sigma}\) induced by giving each edge length \(1\).
Let \(S\subset\Sigma\) be a finite-type subsurface. For a curve \(\gamma\in\mathcal{C}(\Sigma)\), we define the **subsurface projection** of \(\gamma\) onto \(S\) as
\[\pi_{S}(\gamma):=\begin{cases}\emptyset&\text{if there is a representative of }\gamma\text{ disjoint from }S,\\ \gamma&\text{if there is a representative of }\gamma\text{ contained in }S,\\ \text{the curves obtained by surgering }\gamma\text{ along }\partial S&\text{otherwise.}\end{cases}\]
By surgering \(\gamma\) along \(\partial S\), we mean the following. If \(\gamma\cap S\neq\emptyset\) but \(\gamma\not\subset S\) then \(\gamma\cap S\) is a union of arcs \(\{\,\alpha_{i}\,\}\) based in \(\partial S\). Consider the boundary of a regular neighborhood of an arc \(\alpha_{i}\). The result is two arcs that we surger with the boundary of a regular neighborhood of the boundary component(s) of \(S\) that \(\alpha_{i}\) is based at. The projection of \(\pi_{S}(\gamma)\) is the union of all curves obtained from this surgery. See Figure 2 for an example of this surgery.
Figure 2. An example of the surgery used to define the subsurface projection. Here, \(\pi_{S}(\gamma)\) is the disjoint union of the blue curves.
Similarly, if \(\mu\subset\mathcal{C}(\Sigma)\) is a multicurve, we define \(\pi_{S}(\mu):=\bigsqcup_{\gamma\in\mu}\pi_{S}(\gamma)\). For a subsurface \(R\subset\Sigma\), we define \(\pi_{S}(R):=\pi_{S}(\partial R)\).
The path metric \(d_{S}\) on the curve complex \(\mathcal{C}(S)\) of a finite-type subsurface \(S\subset\Sigma\) can be extended to curves \(\gamma_{1},\gamma_{2}\in\mathcal{C}(\Sigma)\) by defining
\[d_{S}(\gamma_{1},\gamma_{2}):=\max_{\alpha_{i}\in\pi_{S}(\gamma_{i})}d_{S}( \alpha_{1},\alpha_{2})\]
and similarly for multicurves \(\mu_{1},\mu_{2}\subset\mathcal{C}(\Sigma)\), \(d_{S}(\mu_{1},\mu_{2}):=\max_{\gamma_{i}\in\pi_{S}(\mu_{i})}d_{S}(\gamma_{1}, \gamma_{2})\).
Proof of Proposition 2.15.: The idea of Mann-Rafi's argument is to construct an unbounded length function \(\ell\colon\operatorname{Map}(\Sigma)\to\mathbb{Z}\), which in light of Remark 2.3 is enough to show that \(\operatorname{Map}(\Sigma)\) is not CB.
We now show that their argument goes through with \(\operatorname{PMap}(\Sigma)\) in place of \(\operatorname{Map}(\Sigma)\). Let \(S\subset\Sigma\) be a nondisplaceable subsurface and let \(\mathcal{I}=\{\,f(S)\mid f\in\operatorname{PMap}(\Sigma)\,\}\). Let \(\mu_{S}\) be a multi-curve that fills \(S\). Because \(S\) is nondisplaceable, \(R\cap S\neq\emptyset\) for all \(R\in\mathcal{I}\), so \(\mu_{R}:=\pi_{R}(\mu_{S})\) is nonempty for all \(R\in\mathcal{I}\). Define a length function
\[\ell\colon\operatorname{PMap}(\Sigma)\to\mathbb{Z}^{+}\qquad\text{given by}\qquad\ell(\varphi):=\max_{R\in\mathcal{I}}d_{\varphi(R)}(\varphi(\mu_{R}),\mu_{ \varphi(R)})\]
where \(d_{\varphi(R)}\) is the path metric in the curve complex \(C(\varphi(R))\). Intuitively, the function \(\ell\) is measuring the maximum distance in the curve complex of \(\varphi(R)\) between curves in \(\mu_{R}\) transformed by \(\varphi\) and curves in the intersection of \(S\) and \(\varphi(R)\).
Mann-Rafi prove \(\ell\) is a continuous length function and demonstrate that \(\ell\) is unbounded by taking a \(\varphi\in\operatorname{PMap}(\Sigma)\) that preserves \(S\) and restricts to a pseudo-Anosov on \(S\). Then
\[\ell(\varphi^{n})\geq d_{\varphi^{n}(S)}(\varphi^{n}(\mu_{S}),\mu_{\varphi^{n} (S)})=d_{S}(\varphi^{n}(\mu_{S}),\mu_{S})\geq d_{S}(\varphi^{n}(\gamma),\gamma)\]
where \(\gamma\) is any essential curve in \(S\), and the middle equality holds because \(\varphi\) preserves \(S\). By the work of Masur-Minsky [10], \(d_{S}(\varphi^{n}(\gamma),\gamma)\to\infty\) as \(n\to\infty\) for such a map \(\varphi\), so \(\ell\) is unbounded.
## 3. CB Classification of \(\operatorname{PMap}(\Sigma)\)
### Globally CB Classification
We show below that one of the striking differences between \(\operatorname{PMap}(\Sigma)\) and \(\operatorname{Map}(\Sigma)\) is that surfaces with at least three ends contain a nondisplaceable subsurface with respect to \(\operatorname{PMap}(\Sigma)\). In contrast, there are many examples of surfaces with infinitely many ends that don't have a nondisplaceable subsurface with respect to \(\operatorname{Map}(\Sigma)\). For example, the Cantor tree surface pictured in Figure 1 has a globally CB mapping class group, and therefore, it does not have any nondisplaceable subsurfaces with respect to the (full) mapping class group.
**Proposition 1.2**.: _Let \(\Sigma\) be an infinite-type surface, possibly with an infinite end space. If \(|E(\Sigma)|\geq 3\) then \(\Sigma\) contains a nondisplaceable subsurface with respect to \(\operatorname{PMap}(\Sigma)\)._
Proof.: First, we claim that there is a compact subsurface \(K\subset\Sigma\) whose complementary components partition \(E(\Sigma)\) into three nonempty disjoint subsets.
To see why, suppose that \(x_{1},x_{2},x_{3}\in E(\Sigma)\) are distinct ends. Recall that by definition, \(x_{1}\) is an equivalence class of nested connected subsurfaces with compact boundary \(x_{1}=[S_{1}^{0}\supset S_{1}^{1}\supset S_{1}^{2}\supset\cdots]\). Because the ends \(x_{1},x_{2},x_{3}\) are distinct, eventually there is a subsurface, say \(S_{1}^{i}\), in the sequence \(S_{1}^{0}\supset S_{1}^{1}\supset S_{1}^{2}\supset\cdots\) that does not contain \(x_{2}\) or \(x_{3}\). By definition, \(S_{1}^{i}\) is a connected surface with compact boundary.
As pictured in Figure 3, we can find a separating simple closed curve \(\gamma_{1}\), which separates all the boundary components of \(S_{1}^{i}\) from the rest of the subsurface \(S_{1}^{i}\). Cutting the original surface \(\Sigma\) along \(\gamma_{1}\), we obtain two new subsurfaces, each with a single boundary component \(\gamma_{1}\): one containing the boundary of \(S_{1}^{i}\), call it \(\Sigma^{\prime}\), and another subsurface, call it \(S_{1}\), that has \(E(S_{1})=E(S_{1}^{i})\). Thus the curve \(\gamma_{1}\) partitions the end space of \(\Sigma\) into two subsets \(E(\Sigma)=E(S_{1})\sqcup E(\Sigma^{\prime})\), where \(x_{1}\in E(S_{1})\) and \(x_{2},x_{3}\in E(\Sigma^{\prime})\).
We can apply the same strategy to the end \(x_{2}\in E(\Sigma^{\prime})\) and find a subsurface \(S_{2}\subset\Sigma^{\prime}\) that has a single boundary component \(\gamma_{2}\), and \(x_{2}\in E(S_{2})\) but \(x_{3}\not\in E(S_{2})\). To simplify notation, let \(\Sigma^{\prime\prime}:=\Sigma\setminus\operatorname{int}(S_{1}\cup S_{2})\). By construction \(\Sigma^{\prime\prime}\) is a connected subsurface with two boundary components \(\gamma_{1}\) and \(\gamma_{2}\), and \(x_{3}\in E(\Sigma^{\prime\prime})\). Finally, let \(\gamma_{3}\) be a separating simple closed curve that cuts \(\Sigma^{\prime\prime}\) into two components \(K\) and \(S_{3}\), where \(K\) is a compact subsurface with three boundary components \(\gamma_{1},\gamma_{2}\), and \(\gamma_{3}\), and \(S_{3}\) is a subsurface with \(E(S_{3})=E(\Sigma^{\prime\prime})\). Figure 4 illustrates this setup.
In summary, we have constructed a compact subsurface \(K\) such that
\[\Sigma\setminus\operatorname{int}(K)=S_{1}\sqcup S_{2}\sqcup S_{3}\]
with \(E(S_{i})\neq\emptyset\) for each \(i\), and \(E(\Sigma)=E(S_{1})\sqcup E(S_{2})\sqcup E(S_{3})\).
We now aim to prove that \(K\) is nondisplaceable with respect to \(\operatorname{PMap}(\Sigma)\). Let \(f\in\operatorname{PMap}(\Sigma)\), and then \(\Sigma\setminus\operatorname{int}(f(K))=S_{1}^{\prime}\sqcup S_{2}^{\prime} \sqcup S_{3}^{\prime}\) where up to relabeling \(S_{i}^{\prime}=f(S_{i})\). We claim that
\[E(S_{i}^{\prime})=E(S_{i}). \tag{1}\]
By the relabeling, \(E(S_{i}^{\prime})=E(f(S_{i}))\). Since homeomorphisms \(S_{i}\to S_{i}^{\prime}\) induce homeomorphisms of the end spaces \(E(S_{i})\to E(S_{i}^{\prime})\), it follows
Figure 3. Since \(S_{1}^{i}\) is connected and has compact boundary, it can be visualized with the boundary components lined up as above, up to homeomorphism. The separating curve \(\gamma_{1}\) is chosen to cut off only the boundary components of \(S_{1}^{i}\); in particular, \(E(S_{1}^{i})=E(S_{1})\).
that \(E(f(S_{i}))=f(E(S_{i}))\). Finally, since \(f\) is pure, \(f\) fixes the ends point-wise, so \(f(E(S_{i}))=E(S_{i})\). Putting this all together,
\[E(S_{i}^{\prime})=E(f(S_{i}))=f(E(S_{i}))=E(S_{i})\]
which proves Equation (1).
Now, suppose that \(K\) were displaceable. Then \(f(K)\cap K=\emptyset\), so \(f(K)\) must be contained in one of the complementary components \(S_{i}\). Without loss of generality, assume \(f(K)\subset S_{1}\). Because \(S_{2},S_{3}\), and \(K\) are all disjoint from \(f(K)\) and \(S_{2}\cup S_{3}\cup K\) is connected, the subsurface \(S_{2}\cup S_{3}\cup K\) must be contained in one of the complementary components of \(f(K)\). But this is not possible, since for example if \(S_{2}\cup S_{3}\cup K\subset S_{3}^{\prime}\), then \(E(S_{2})\subset E(S_{3}^{\prime})\), but by Equation (1), \(E(S_{2})=E(S_{2}^{\prime})\) contradicting \(E(S_{2}^{\prime})\cap E(S_{3}^{\prime})=\emptyset\). Similar arguments show \(S_{2}\cup S_{3}\cup K\not\subset S_{j}^{\prime}\) for \(j=1,2\).
Equipped with Proposition 2.15, we are poised to begin the proof of Theorem 1.1.
**Theorem 1.1** (a).: _Let \(\Sigma\) be an infinite-type surface. Then \(\mathrm{PMap}(\Sigma)\) is globally CB if and only if \(\Sigma\) is the Loch Ness monster surface._
_Proof._ By Proposition 2.15 and Proposition 1.2, if \(|E(\Sigma)|\geq 3\), then \(\Sigma\) has a nondisplaceable subsurface and therefore \(\mathrm{PMap}(\Sigma)\) is not CB. There are three infinite-type surfaces with \(|E(\Sigma)|\leq 2\), namely the ladder surface,
the once-punctured Loch Ness monster surface, and the Loch Ness monster surface, which we will consider case by case.
* **Case 1: \(\Sigma\) is the ladder surface.** For surfaces with \(|E_{G}(\Sigma)|\geq 2\), Aramayona, Patel and Vlamis [1] define a _flux map_, which is a continuous surjection \(\operatorname{PMap}(\Sigma)\twoheadrightarrow\mathbb{Z}\). Because the action of \(\mathbb{Z}\) on \(\mathbb{R}\) by translations is continuous and by isometries, but the orbits of \(x\in\mathbb{R}\) are unbounded, \(\mathbb{Z}\) is not CB. It follows that \(\operatorname{PMap}(\Sigma)\) is not CB either, by pulling back the action of \(\mathbb{Z}\) on \(\mathbb{R}\) to an action of \(\operatorname{PMap}(\Sigma)\) on \(\mathbb{R}\).
* **Case 2: \(\Sigma\) is the once-punctured Loch Ness monster surface.** A subsurface containing the isolated puncture is nondisplaceable with respect to \(\operatorname{Map}(\Sigma)\), and since \(\Sigma\) has two distinct ends of different type, \(\operatorname{Map}(\Sigma)=\operatorname{PMap}(\Sigma)\) in this case. Hence, \(\operatorname{PMap}(\Sigma)\) is not CB by Proposition 2.15.
* **Case 3: \(\Sigma\) is the Loch Ness monster surface.** Because the Loch Ness monster has only one end, \(\operatorname{PMap}(\Sigma)=\operatorname{Map}(\Sigma)\) and, as before, the CB classification of \(\operatorname{Map}(\Sigma)\) by Mann-Rafi applies. Specifically, \(|E(\Sigma)|=1\) means that the end space is trivially self-similar, so by Theorem 2.8, \(\operatorname{Map}(\Sigma)\) is CB.
Theorem 1.1 (a) is consistent with the results of Mann-Rafi. In particular, when \(|E(\Sigma)|<\infty\), \(\operatorname{PMap}(\Sigma)\) has finite index in \(\operatorname{Map}(\Sigma)\). Because \(\operatorname{Map}(\Sigma)\) is a Polish group and \(\operatorname{PMap}(\Sigma)\) is a clopen Polish subgroup, Fact 2.6 applies, and we can import Mann-Rafi's CB classification of \(\operatorname{Map}(\Sigma)\) in Theorem 2.8 to \(\operatorname{PMap}(\Sigma)\). As pointed out in Remark 2.9, the telescoping condition is only satisfied for surfaces with infinitely many ends, and self-similarity only occurs when there are infinitely many ends or exactly one end. Therefore when \(|E(\Sigma)|<\infty\), the only surface with globally CB \(\operatorname{Map}(\Sigma)\) is the unique infinite-type surface with one end, namely the Loch Ness Monster surface.
### Locally CB Classification
We now turn our attention to classifying the surfaces for which \(\operatorname{PMap}(\Sigma)\) is locally CB. For a finite subsurface \(K\subset\Sigma\), let
\[\mathcal{V}_{K}:=\left\{\,f\in\operatorname{PMap}(S)\mid f_{|K}=\operatorname {Id}\,\right\}.\]
If \(K=\emptyset\), then \(\mathcal{V}_{K}=\operatorname{PMap}(\Sigma)\).
The proof of the proposition below follows the same argument as a result of Mann-Rafi for \(\operatorname{Map}(\Sigma)\).
**Proposition 3.1** (cf. [16, Lemma 5.2]).: _Let \(K\subset\Sigma\) be a finite-type (possibly empty) subsurface such that each component of \(\Sigma\setminus K\) is infinite-type. If \(\Sigma\setminus K\) contains a nondisplaceable subsurface with respect to \(\operatorname{PMap}(\Sigma)\), then \(\mathcal{V}_{K}\) is not CB in \(\operatorname{PMap}(\Sigma)\). If \(\mathcal{V}_{K}\) is not CB in \(\operatorname{PMap}(\Sigma)\) for all such \(K\), then \(\operatorname{PMap}(\Sigma)\) is not locally CB._
Proof.: First, let \(S\subset\Sigma\setminus K\) be a nondisplaceable subsurface with respect to \(\operatorname{PMap}(\Sigma)\). Recall from Proposition 2.15 that \(\operatorname{PMap}(\Sigma)\) was shown to be non-CB when \(S\subset\Sigma\) is nondisplaceable by constructing an unbounded length function on \(\operatorname{PMap}(\Sigma)\). The same argument applies to construct an unbounded length function on \(\mathcal{V}_{K}\). Specifically, consider a pseudo-Anosov \(f\) supported on \(S\subset\Sigma\setminus K\) and extend \(f\) to an element of \(\operatorname{PMap}(\Sigma)\) by extending via
the identity on \(\Sigma\setminus S\). Then \(f\in\mathcal{V}_{K}\) and as in Proposition 2.15 the length function is unbounded on \(\mathcal{V}_{K}\).
To see why \(\mathrm{PMap}(\Sigma)\) is not locally CB if all \(\mathcal{V}_{K}\) are not CB in \(\mathrm{PMap}(\Sigma)\), first recall that \(\{\,\mathcal{V}_{C}\mid C\text{ compact}\,\}\) is a neighborhood basis of the identity in the compact open topology. For each compact subsurface \(C\), there is a finite type subsurface \(K\) that contains \(C\) and whose complementary components \(\Sigma\setminus K\) are infinite-type. Specifically, take \(K\) to be the union of \(C\) and its finite-type complementary components. So \(\mathcal{V}_{K}\subset\mathcal{V}_{C}\). It follows from the definition of CB that if \(\mathcal{V}_{K}\) is not CB in \(\mathrm{PMap}(\Sigma)\), then \(\mathcal{V}_{C}\) is not CB in \(\mathrm{PMap}(\Sigma)\). If all sets \(\mathcal{V}_{K}\) are not CB in \(\mathrm{PMap}(\Sigma)\), then neither are any of the sets \(\mathcal{V}_{C}\) in the neighborhood basis of the identity, and \(\mathrm{PMap}(\Sigma)\) has no CB neighborhood of the identity; in other words, \(\mathrm{PMap}(\Sigma)\) is not locally CB.
We will see that Proposition 3.1 effectively narrows down the possible surfaces with a locally CB pure mapping class group to those with finitely many ends.
**Theorem 1.1** (b).: _Let \(\Sigma\) be an infinite-type surface. Then \(\mathrm{PMap}(\Sigma)\) is locally CB if and only if \(|E(\Sigma)|<\infty\) and either_
1. \(|E_{G}(\Sigma)|=1\) _and_ \(|E(\Sigma)\setminus E_{G}(\Sigma)|=0\) _or,_
2. \(|E_{G}(\Sigma)|>1\) _and_ \(|E(\Sigma)\setminus E_{G}(\Sigma)|\geq 0\)_;_
_in other words, \(\Sigma\) is not a Loch Ness monster surface with (nonzero) punctures._
Proof.: First, consider when \(|E(\Sigma)|=\infty\). Let \(K\subset\Sigma\) be a finite-type subsurface (possibly empty) so that each connected component of \(\Sigma\setminus K\) has infinite-type. If \(K=\emptyset\), then \(\mathcal{V}_{K}=\mathrm{PMap}(\Sigma)\) is not CB by Theorem 1.1 (a). For any \(K\neq\emptyset\), at least one of the connected components of \(\Sigma\setminus K\) must contain infinitely many ends, and by Proposition 1.2 it must contain a nondisplaceable subsurface. So, \(\mathcal{V}_{K}\) is not CB in \(\mathrm{PMap}(\Sigma)\) for all possible \(K\) and by Proposition 3.1, \(\mathrm{PMap}(\Sigma)\) is not locally CB.
Before proceeding, we remark that when \(|E(\Sigma)|<\infty\), we can determine if \(\mathrm{PMap}(\Sigma)\) is locally CB solely by establishing whether \(\mathrm{Map}(\Sigma)\) is locally CB. To see why, let \(\mathcal{V}\) be a CB neighborhood of the identity in \(\mathrm{Map}(\Sigma)\). The restriction of \(\mathcal{V}\) to \(\mathrm{PMap}(\Sigma)\) is CB in \(\mathrm{PMap}(\Sigma)\) by Fact 2.6 since \(|E(\Sigma)|<\infty\). Furthermore, \(\mathcal{V}\) restricts to an identity neighborhood in \(\mathrm{PMap}(\Sigma)\), so \(\mathrm{PMap}(\Sigma)\) is locally CB.
Now when \(|E_{G}(\Sigma)|=1\) and \(|E(\Sigma)\setminus E_{G}(\Sigma)|=0\), the surface \(\Sigma\) is the Loch Ness monster surface, which is globally CB by Theorem 1.1 (a) and hence locally CB. On the other hand, when \(|E_{G}(\Sigma)|=1\) and \(\Sigma\) has nonzero, but finitely many, punctures, we can show that \(\mathrm{Map}(\Sigma)\) is not locally CB by Theorem 2.11. Up to isotopy, there is only one subsurface \(K\) satisfying the hypothesis of Theorem 2.11, namely the subsurface of genus \(0\) containing all finitely many punctures pictured in Figure 5.
Let \(V\) be a neighborhood of the end accumulated by genus. In general, \(\Sigma\setminus V\) has some nonzero finite genus, while \(\Sigma\setminus(\Sigma\setminus K)=K\) has zero genus. So
by the classification of surfaces, there is no \(f\in\text{Map}(\Sigma)\) such that \(f(V)\supset\Sigma\setminus K\), and so \(\text{Map}(\Sigma)\) is not locally CB. Hence, when \(|E_{G}(\Sigma)|=1\) and \(|E(\Sigma)\setminus E_{G}(\Sigma)|\neq 0\), \(\text{PMap}(\Sigma)\) is not locally CB.
Finally, we will show \(\text{PMap}(\Sigma)\) is locally CB when \(|E(\Sigma)|<\infty\) and \(|E_{G}(\Sigma)|>1\), by showing that \(\text{Map}(\Sigma)\) is locally CB in this case. Because \(\Sigma\) has finitely many ends, it has, at most, a finite number of punctures. Let \(K\subset\Sigma\) be a finite-type subsurface containing all punctures of \(\Sigma\) (if any), such that all components of \(\Sigma\setminus K\) contain a single end accumulated by genus. These self-similar singleton sets represent the \(A\) sets in Theorem 2.11. There are no \(P\) sets for this surface. See Figure 6.
We now check condition (3) in the statement of Theorem 2.11. For any end accumulated by genus \(x\in E_{G}(\Sigma)\) and any neighborhood \(V\) of \(x\), an iterate of a handle shift \(h\in\text{Map}(\Sigma)\) between \(x\) and some other end \(y\neq x\in E_{G}(\Sigma)\) will shift all of the genus in the component of \(\Sigma\setminus K\) not in \(V\) into the component of \(\Sigma\setminus K\) containing \(y\). Thus, \(h\) maps the neighborhood \(V\) to the entire complementary component of \(\Sigma\setminus K\) containing the end \(x\), and by Theorem 2.11 \(\text{Map}(\Sigma)\) is locally CB, hence \(\text{PMap}(\Sigma)\) is as well.
Figure 5. The unique subsurface \(K\) of the finitely punctured Loch Ness monster surface satisfying the hypothesis of Theorem 2.11 up to isotopy.
Figure 6. A choice of \(K\) that satisfies the hypothesis of Theorem 2.11.
Finally, we show that the surfaces with CB-generated pure mapping class group coincide with those with locally CB pure mapping class group, finishing the proof of Theorem 1.1.
**Theorem 1.1** (c).: _Let \(\Sigma\) be an infinite-type surface. Then \(\mathrm{PMap}(\Sigma)\) is CB-generated if and only if \(\mathrm{PMap}(\Sigma)\) is locally CB._
Proof.: Recall that every CB-generated Polish group is locally CB [10, Theorem 1.2], so we need only prove that if \(\mathrm{PMap}(\Sigma)\) is locally CB, then it is CB-generated. In the case where \(\Sigma\) is the Loch Ness Monster, Theorem 1.1(a) implies that \(\mathrm{PMap}(\Sigma)\) is globally CB and hence CB-generated. Thus, it remains to consider the case where \(\Sigma\) has finitely many ends and \(|E_{G}(\Sigma)|>1\).
Since \(\mathrm{PMap}(\Sigma)\) is locally CB by Theorem 1.1 (b), we know that for \(K\subset\Sigma\) as in Theorem 2.11, \(\mathcal{V}_{K}\) is a CB neighborhood of the identity in \(\mathrm{PMap}(\Sigma)\). Now, let \(D\) be a Lickorish generating set for the finite-type subsurface \(K\), and \(H\) a finite collection of pairwise commuting handle-shifts chosen by Mann-Rafi in the proof of Theorem 2.13. As both \(D\) and \(H\) are finite, the set \(X=\mathcal{V}_{K}\cup D\cup H\) is CB in \(\mathrm{PMap}(\Sigma)\). We claim that \(X\) generates \(\mathrm{PMap}(\Sigma)\). In [11, Lemma 6.21], the authors show that the set \(X\) together with a collection of "generalized" shift maps generates a group \(G\) that contains \(\mathrm{PMap}(\Sigma)\). However, in our case, \(|E(\Sigma)|<\infty\), so there are no generalized shift maps. Since all elements of \(X\) are pure, we conclude that the CB set \(X\) generates \(\mathrm{PMap}(\Sigma)\).
|
2309.14151 | Structural and universal completeness in algebra and logic | In this work we study the notions of structural and universal completeness
both from the algebraic and logical point of view. In particular, we provide
new algebraic characterizations of quasivarieties that are actively and
passively universally complete, and passively structurally complete. We apply
these general results to varieties of bounded lattices and to quasivarieties
related to substructural logics. In particular we show that a substructural
logic satisfying weakening is passively structurally complete if and only if
every classical contradiction is explosive in it. Moreover, we fully
characterize the passively structurally complete varieties of MTL-algebras,
i.e., bounded commutative integral residuated lattices generated by chains. | Paolo Aglianò, Sara Ugolini | 2023-09-25T14:04:51Z | http://arxiv.org/abs/2309.14151v1 | # Structural and universal completeness
###### Abstract.
In this work we study the notions of structural and universal completeness both from the algebraic and logical point of view. In particular, we provide new algebraic characterizations of quasivarieties that are actively and passively universally complete, and passively structurally complete. We apply these general results to varieties of bounded lattices and to quasivarieties related to substructural logics. In particular we show that a substructural logic satisfying weakening is passively structurally complete if and only if every classical contradiction is explosive in it. Moreover, we fully characterize the passive structurally complete varieties of MTL-algebras, i.e., bounded commutative integral residuated lattices generated by chains.
_E-mail addresses_: agliano@live.com, sara@ilia.csic.es
## 1. Introduction
The main aim of this paper is to explore some connections between algebra and logic; mainly, we try to produce some _bridge theorems_. A bridge theorem is a statement connecting logical (and mostly syntactical) features of deductive systems and properties of classes of algebras; this connection is usually performed using the tools of general algebra and the rich theory that is behind it. The main reason behind this kind of exploration is in the further understanding one can gain by connecting two apparently distant fields. In this way, we can explore logical properties in purely algebraic terms; at the same time statements can be imported from logic that have an important and often new algebraic meaning.
The set of logical problems we want to explore is connected with the concept of _structural completeness_ of a deductive system, in the different ways it can be declined. For a deductive system, being structurally complete means that each of its proper extensions admits new theorems. This notion can be formalized in a more rigorous way, using the concept of _admissible rule_. A rule is admissible in a logic if, whenever there is a substitution making its premises theorems, that substitution also makes the conclusion a theorem. A logic is then structurally complete if all its admissible rules are derivable in the system. It is well-known that classical logic is structurally complete; intuitionistic logic is not but it satisfies a weaker although still interesting notion: it is _passively_ structurally complete. We will see that this is not just a feature of intuitionism but it can be explained in a much more general framework, and it is connected to the way the contradictions of classical logic are treated. In more detail, passive structural completeness means that all rules that do not apply to theorems are derivable. Naturally, the dual notion of _active_ structural completeness also arises, which instead isolates the derivability of those rules for which there exists a substitution making their premises a theorem. The latter notion has been explored in generality in [35]. Structural completeness and its hereditary version have also been widely studied in the literature.
## 2. Preliminaries
### Universal algebra and the Blok-Pigozzi connection
Let \(\mathsf{K}\) be a class of algebras; we denote by \(\mathbf{I},\mathbf{H},\mathbf{P},\mathbf{S},\mathbf{P}_{u}\) the class operators sending \(\mathsf{K}\) in the class of all isomorphic copies, homomorphic images, direct products, subalgebras and ultraproducts of members of \(\mathsf{K}\). The operators can be composed in the obvious way; for instance \(\mathbf{SP}(\mathsf{K})\) denotes all algebras that are embeddable in a direct product of members of \(\mathsf{K}\); moreover there are relations among the classes resulting from applying operators in a specific orders, for instance \(\mathbf{PS}(\mathsf{K})\subseteq\mathbf{SP}(\mathsf{K})\) and \(\mathbf{HSP}(\mathsf{K})\) is the largest class we can obtain composing the operators. We will use all the known relations without further notice, but the reader can consult [69] or [21] for a textbook treatment.
If \(\rho\) is a type of algebras, an _equation_ is a pair \(p,q\) of \(\rho\)-terms (i.e. elements of the absolutely free algebra \(\mathbf{T}_{\rho}(\omega)\)) that we write suggestively as \(p\approx q\); a _universal sentence_ or _clause_ in \(\rho\) is a formal pair \((\Sigma,\Delta)\) that we write as \(\Sigma\Rightarrow\Delta\), where \(\Sigma,\Delta\) are finite sets of equations; a universal sentence is a _quasiequation_ if \(|\Delta|=1\) and it is _negative_ if \(\Delta=\emptyset\). Clearly an equation is a quasiequation in which \(\Sigma=\emptyset\).
Given any set of variables \(X\), an assignment of \(X\) into an algebra \(\mathbf{A}\) of type \(\rho\) is a function \(h\) mapping each variable \(x\in X\) to an element of \(\mathbf{A}\), which extends (uniquely) to a homomorphism (that we shall also call \(h\)) from the term algebra \(\mathbf{T}_{\rho}(X)\) to \(\mathbf{A}\). An algebra \(\mathbf{A}\) satisfies an equation \(p\approx q\) with an assignment \(h\) (and we write \(\mathbf{A},h\models p\approx q\)) if \(h(p)=h(q)\) in \(\mathbf{A}\). An equation \(p\approx q\) is _valid_ in \(\mathbf{A}\) (and we write \(\mathbf{A}\models p\approx q\)) if for all assignments \(h\) in \(\mathbf{A}\), \(\mathbf{A},h\models p\approx q\); if \(\Sigma\) is a set of equations then \(\mathbf{A}\models\Sigma\) if \(\mathbf{A}\models\sigma\) for all \(\sigma\in\Sigma\). A universal sentence is _valid_ in \(\mathbf{A}\) (and we write \(\mathbf{A}\models\Sigma\Rightarrow\Delta\)) if for all assignments \(h\) to \(\mathbf{A}\), \(h(p)=h(q)\) for all \(p\approx q\in\Sigma\) implies that there is an identity \(s\approx t\in\Delta\) with \(h(s)=h(t)\); in other words a universal sentence can be understood as the formula \(\forall\mathbf{x}(\bigwedge\Sigma\rightarrow\bigvee\Delta)\). An equation or a universal sentence is _valid_ in a class \(\mathsf{K}\) if it is valid in all algebras in \(\mathsf{K}\).
A class of algebras is a _variety_ if it is closed under \(\mathbf{H},\mathbf{S}\) and \(\mathbf{P}\), a _quasivariety_ if it is closed under \(\mathbf{I},\mathbf{S},\mathbf{P}\) and \(\mathbf{P}_{u}\), and a _universal class_ if it is closed under \(\mathbf{I},\mathbf{S}\) and \(\mathbf{P}_{u}\). The following facts were essentially discovered by A. Tarski, J. Los and A. Lyndon in the pioneering phase of model theory; for proofs of this and similar statements the reader can consult [26].
**Lemma 2.1**.: _Let \(\mathsf{K}\) be any class of algebras. Then:_
1. \(\mathsf{K}\) _is a universal class if and only if_ \(\mathbf{ISP}_{u}(\mathsf{K})=\mathsf{K}\) _if and only if it is the class of algebras in which a set of universal sentences is valid;_
2. \(\mathsf{K}\) _is a quasivariety if and only if_ \(\mathbf{ISPP}_{u}(\mathsf{K})=\mathsf{K}\) _if and only if it is the class of algebras in which a set of quasiequations is valid;_
3. \(\mathsf{K}\) _is a variety if and only if_ \(\mathbf{HSP}(\mathsf{K})=\mathsf{K}\) _if and only if it is the class of algebras in which a set of equations is valid._
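For example, the class of cancellative commutative monoids witnesses the difference between items (2) and (3): relative to the monoid axioms it is axiomatized by the quasiequation
\[x\cdot z\approx y\cdot z\ \Rightarrow\ x\approx y,\]
so it is a quasivariety, but it is not a variety, since it is not closed under \(\mathbf{H}\): every commutative monoid is a homomorphic image of a free commutative monoid, and free commutative monoids are cancellative.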
**Notation 1**.: We will often write \(\mathbf{V}\) for \(\mathbf{HSP}\) and \(\mathbf{Q}\) for \(\mathbf{ISPP}_{u}\).
For the definition of free algebras in a class \(\mathsf{K}\) on a set \(X\) of generators, in symbols \(\mathbf{F}_{\mathsf{K}}(X)\), we refer to [21]; we merely observe that every free algebra on a class \(\mathsf{K}\) belongs to \(\mathbf{ISP}(\mathsf{K})\). It follows that every free algebra in \(\mathsf{K}\) is free in \(\mathbf{ISP}(\mathsf{K})\) and therefore for any quasivariety \(\mathsf{Q}\), \(\mathbf{F}_{\mathsf{Q}}(X)=\mathbf{F}_{\mathbf{V}(\mathsf{Q})}(X)\).
There are two fundamental results that we will be using many times and deserve a spotlight. Let \(\mathbf{B},(\mathbf{A}_{i})_{i\in I}\) be algebras in the same signature; we say that \(\mathbf{B}\)_embeds_ in \(\prod_{i\in I}\mathbf{A}_{i}\) if \(\mathbf{B}\in\mathbf{IS}(\prod_{i\in I}\mathbf{A}_{i})\). Let \(p_{i}\) be the \(i\)-th projection, or better, the composition of the embedding and the \(i\)-th projection, from \(\mathbf{B}\) to \(\mathbf{A}_{i}\); the embedding is _subdirect_ if for all \(i\in I\), \(p_{i}(\mathbf{B})=\mathbf{A}_{i}\) and in this case we will write
\[\mathbf{B}\leq_{sd}\prod_{i\in I}\mathbf{A}_{i}.\]
An algebra \(\mathbf{B}\) is _subdirectly irreducible_ if it is nontrivial and for any subdirect embedding
\[\mathbf{B}\leq_{sd}\prod_{i\in I}\mathbf{A}_{i}\]
there is an \(i\in I\) such that \(p_{i}:\mathbf{B}\longrightarrow\mathbf{A}_{i}\) is an isomorphism. It can be shown that \(\mathbf{A}\) is _subdirectly irreducible_ if and only if the congruence lattice \(\operatorname{Con}(\mathbf{A})\) of \(\mathbf{A}\) has a minimum nontrivial congruence. If \(\mathsf{V}\) is a variety we denote by \(\mathsf{V}_{si}\) the class of subdirectly irreducible algebras in \(\mathsf{V}\).
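As an illustration, in the variety of distributive lattices the two-element lattice \(\mathbf{2}\) is subdirectly irreducible (indeed simple), while the three-element chain \(\mathbf{C}_{3}=\{0<a<1\}\) is not: it admits the subdirect embedding
\[\mathbf{C}_{3}\leq_{sd}\mathbf{2}\times\mathbf{2},\qquad 0\longmapsto(0,0),\quad a\longmapsto(0,1),\quad 1\longmapsto(1,1),\]
in which neither projection is an isomorphism; equivalently, \(\operatorname{Con}(\mathbf{C}_{3})\) has two distinct minimal nontrivial congruences (collapsing \(\{0,a\}\) and \(\{a,1\}\) respectively), so it has no minimum nontrivial congruence.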
**Theorem 2.2**.:
1. _(Birkhoff_ _[_16_]_) Every algebra can be subdirectly embedded in a product of subdirectly irreducible algebras. So if_ \(\mathbf{A}\in\mathsf{V}\)_, then_ \(\mathbf{A}\) _can be subdirectly embedded in a product of members of_ \(\mathsf{V}_{si}\)_._
2. _(Jonsson's Lemma_ _[_56_]__) Suppose that_ \(\mathsf{K}\) _is a class of algebras such that_ \(\mathsf{V}(\mathsf{K})\) _is congruence distributive; then_ \(\mathsf{V}(\mathsf{K})_{si}\subseteq\mathbf{HSP}_{u}(\mathsf{K})\)_._
If \(\mathsf{Q}\) is a quasivariety and \(\mathbf{A}\in\mathsf{Q}\), a _relative congruence_ of \(\mathbf{A}\) is a congruence \(\theta\) such that \(\mathbf{A}/\theta\in\mathsf{Q}\); relative congruences form an algebraic lattice \(\operatorname{Con}_{\mathsf{Q}}(\mathbf{A})\). Moreover, for an algebra \(\mathbf{A}\) and a set \(H\subseteq A\times A\) there exists the least relative congruence \(\theta_{\mathsf{Q}}(H)\) on \(\mathbf{A}\) containing \(H\). When \(H=\{(a,b)\}\), we just write \(\theta_{\mathsf{Q}}(a,b)\). When \(\mathsf{Q}\) is a variety we simplify the notation by dropping the subscript \(\mathsf{Q}\).
For any congruence lattice property \(P\) we say that \(\mathbf{A}\in\mathsf{Q}\) is _relative_ \(P\) if \(\operatorname{Con}_{\mathsf{Q}}(\mathbf{A})\) satisfies \(P\). So for instance \(\mathbf{A}\) is _relative subdirectly irreducible_ if \(\operatorname{Con}_{\mathsf{Q}}(\mathbf{A})\) has a minimum nontrivial element; since clearly \(\operatorname{Con}_{\mathsf{Q}}(\mathbf{A})\) is a meet subsemilattice of \(\operatorname{Con}(\mathbf{A})\), any subdirectly irreducible algebra is relative subdirectly irreducible for any quasivariety to which it belongs. For a quasivariety \(\mathsf{Q}\) we denote by \(\mathsf{Q}_{rsi}\) the class of relative subdirectly irreducible algebras in \(\mathsf{Q}\). We have the equivalent of Birkhoff's and Jonsson's results for quasivarieties:
**Theorem 2.3**.: _Let \(\mathsf{Q}\) be any quasivariety._
1. _(Mal'cev_ _[_62_]__) Every_ \(\mathbf{A}\in\mathsf{Q}\) _is subdirectly embeddable in a product of algebras in_ \(\mathsf{Q}_{rsi}\)_._
2. _(Czelakowski-Dziobiak_ _[_32_]__) If_ \(\mathsf{Q}=\mathbf{Q}(\mathsf{K})\)_, then_ \(\mathsf{Q}_{rsi}\subseteq\mathbf{ISP}_{u}(\mathsf{K})\)_._
The following fact will be used in the sequel.
**Lemma 2.4**.: _Let \(\mathbf{A}\) be an algebra such that \(\mathbf{V}(\mathbf{A})\) is congruence distributive. Then \(\mathbf{Q}(\mathbf{A})=\mathbf{V}(\mathbf{A})\) if and only if every subdirectly irreducible algebra in \(\mathbf{HSP}_{u}(\mathbf{A})\) is in \(\mathbf{ISP}_{u}(\mathbf{A})\)._
Proof.: Suppose first that \(\mathbf{Q}(\mathbf{A})=\mathbf{V}(\mathbf{A})\), and let \(\mathbf{B}\) be a subdirectly irreducible algebra in \(\mathbf{HSP}_{u}(\mathbf{A})\). Then \(\mathbf{B}\) is subdirectly irreducible in \(\mathbf{V}(\mathbf{A})=\mathbf{Q}(\mathbf{A})\), hence relative subdirectly irreducible in \(\mathbf{Q}(\mathbf{A})\), and by Theorem 2.3(2) \(\mathbf{B}\in\mathbf{ISP}_{u}(\mathbf{A})\).
Conversely assume that every subdirectly irreducible algebra in \(\mathbf{HSP}_{u}(\mathbf{A})\) is in \(\mathbf{ISP}_{u}(\mathbf{A})\). Since \(\mathbf{V}(\mathbf{A})\) is congruence distributive, by Theorem 2.2(2) every subdirectly irreducible algebra in \(\mathbf{V}(\mathbf{A})\) is in \(\mathbf{HSP}_{u}(\mathbf{A})\), thus in \(\mathbf{ISP}_{u}(\mathbf{A})\). Now every algebra in \(\mathbf{V}(\mathbf{A})\) is subdirectly embeddable in a product of subdirectly irreducible algebras in \(\mathbf{V}(\mathbf{A})\) (Theorem 2.2(1)). Therefore, \(\mathbf{V}(\mathbf{A})\subseteq\mathbf{ISP}(\mathbf{ISP}_{u}(\mathbf{A}))\subseteq\mathbf{ISPP}_{u}(\mathbf{A})=\mathbf{Q}(\mathbf{A})\) and thus equality holds.
In this work we are particularly interested in quasivarieties that are the equivalent algebraic semantics of a logic in the sense of Blok-Pigozzi [18]. We will spend some time illustrating the machinery of _Abstract Algebraic Logic_ that establishes a Galois connection between _algebraizable logics_ and _quasivarieties of logic_, since it is relevant to understand our results. For the omitted details we refer the reader to [18, 40].
By a _logic_\(\mathcal{L}\) in what follows we mean a substitution invariant consequence relation \(\vdash\) on the set of terms \(\mathbf{T}_{\rho}(\omega)\) (also called _algebra of formulas_) of some algebraic language \(\rho\). In loose terms, to establish the algebraizability of a logic \(\mathcal{L}\) with respect to a quasivariety of algebras \(\mathsf{Q}_{\mathcal{L}}\) over the same language \(\rho\), one needs a finite set of one-variable equations
\[\tau(x)=\{\delta_{i}(x)\approx\varepsilon_{i}(x):i=1,\ldots,n\}\]
over terms of type \(\rho\) and a finite set of formulas of \(\mathcal{L}\) in two variables
\[\Delta(x,y)=\{\varphi_{1}(x,y),\ldots,\varphi_{m}(x,y)\}\]
that allow one to transform equations, quasiequations and universal sentences in \(\mathsf{Q}_{\mathcal{L}}\) into formulas, rules and clauses of \(\mathcal{L}\); moreover this transformation must respect both the consequence relation of the logic
and the semantical consequence of the quasivariety. More precisely, for all sets of formulas \(\Gamma\) of \(\mathcal{L}\) and formulas \(\varphi\in\mathbf{T}_{\rho}(\omega)\)
\[\Gamma\vdash_{\mathcal{L}}\varphi\quad\text{iff}\quad\tau(\Gamma)\models_{ \mathsf{Q}_{\mathcal{L}}}\tau(\varphi)\]
where \(\tau(\Gamma)\) is a shorthand for \(\{\tau(\gamma):\gamma\in\Gamma\}\), and also
\[(x\approx y)\dashv\models_{\mathsf{Q}_{\mathcal{L}}}\tau(\Delta(x,y)).\]
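The paradigmatic example is classical propositional logic, whose equivalent algebraic semantics is the variety \(\mathsf{BA}\) of Boolean algebras, with
\[\tau(x)=\{x\approx 1\}\qquad\text{and}\qquad\Delta(x,y)=\{x\to y,\;y\to x\}:\]
a formula \(\varphi\) follows from \(\Gamma\) in classical logic exactly when \(\{\gamma\approx 1:\gamma\in\Gamma\}\models_{\mathsf{BA}}\varphi\approx 1\), and \(x\approx y\) holds in a Boolean algebra exactly when both \(x\to y\approx 1\) and \(y\to x\approx 1\) do.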
A quasivariety \(\mathsf{Q}\) is a _quasivariety of logic_ if it is the equivalent algebraic semantics for some logic \(\mathcal{L}_{\mathsf{Q}}\); the Galois connection between algebraizable logics and quasivarieties of logic is given by
\[\mathcal{L}_{\mathsf{Q}_{\mathcal{L}}}=\mathcal{L}\qquad\qquad\mathsf{Q}_{ \mathcal{L}_{\mathsf{Q}}}=\mathsf{Q}.\]
Not every quasivariety is a quasivariety of logic; for instance no _idempotent quasivariety_, such as any quasivariety of lattices, can be a quasivariety of logic. Nonetheless quasivarieties of logic are plentiful. In fact any ideal determined variety is such, as well as any quasivariety coming from a congruential variety with normal ideals (see [9] for details). Moreover, every quasivariety is _categorically equivalent_ to a quasivariety of logic [65]. This means that if an algebraic concept is expressible through notions that are invariant under categorical equivalence, and it holds for a quasivariety \(\mathsf{Q}\), then it holds for its categorically equivalent quasivariety of logic \(\mathsf{Q}^{\prime}\); and hence it can be transformed into a logical concept in \(\mathcal{L}_{\mathsf{Q}^{\prime}}\) using the Blok-Pigozzi connection.
**Definition 2.5**.: If \(\mathsf{Q}\) is any quasivariety, with an abuse of notation, we will denote by \(\mathcal{L}_{\mathsf{Q}}\) a logic whose equivalent algebraic semantics is categorically equivalent to \(\mathsf{Q}\).
The following result hints at what kind of properties can be transferred by categorical equivalence.
**Theorem 2.6** ([12]).: _Let \(\mathsf{K}\) be a class of algebras closed under subalgebras and direct products; if \(\mathsf{K}\) is categorically equivalent to a quasivariety \(\mathsf{Q}\), then \(\mathsf{K}\) is a quasivariety._
Suppose now that \(\mathsf{Q}\) and \(\mathsf{R}\) are quasivarieties and suppose that \(F:\mathsf{Q}\longrightarrow\mathsf{R}\) is a functor between the two algebraic categories witnessing the categorical equivalence. Now, \(F\) preserves all the so-called _categorical properties_, i.e., those notions that can be expressed as properties of morphisms. In particular, embeddings are mapped to embeddings (since in algebraic categories they are exactly the categorical monomorphisms), surjective homomorphisms are mapped to surjective homomorphisms (since they correspond to _regular_ epimorphisms in the categories). Moreover, we observe that direct products are preserved as well, since they can be expressed via families of surjective homomorphisms (see e.g. [21]). Therefore, if \(\mathsf{Q}^{\prime}\) is a subquasivariety of \(\mathsf{Q}\), then the restriction of \(F\) to \(\mathsf{Q}^{\prime}\) witnesses a categorical equivalence between \(\mathsf{Q}^{\prime}\) and
\[\mathsf{R}^{\prime}=\{\mathbf{B}\in\mathsf{R}:\mathbf{B}=F(\mathbf{A})\text { for some }\mathbf{A}\in\mathsf{Q}^{\prime}\}.\]
It follows from Theorem 2.6 that \(\mathsf{R}^{\prime}\) is a subquasivariety of \(\mathsf{R}\), and that \(\mathsf{R}^{\prime}\) is a variety whenever \(\mathsf{Q}^{\prime}\) is such. Given a quasivariety \(\mathsf{Q}\), we denote by \(\Lambda_{q}(\mathsf{Q})\) the lattice of subquasivarieties of \(\mathsf{Q}\). Hence the correspondence sending \(\mathsf{Q}^{\prime}\longmapsto\mathsf{R}^{\prime}\) is a lattice isomorphism between \(\Lambda_{q}(\mathsf{Q})\) and \(\Lambda_{q}(\mathsf{R})\) that preserves all the categorical properties. Moreover, we observe that, since ultraproducts in an algebraic category admit a categorical definition which turns out to be equivalent to the algebraic one (see for instance [37]), the functor \(F\) also maps universal subclasses to universal subclasses; more precisely, \(\mathsf{U}\subseteq\mathsf{Q}\) is a universal class if and only if \(F(\mathsf{U})\subseteq\mathsf{R}\) is a universal class.
Let us show an example of how we can use these correspondences, which is also a preview of what we will see in the coming sections; if \(\mathsf{Q}\) is a quasivariety, a subquasivariety \(\mathsf{Q}^{\prime}\) is _equational_ in \(\mathsf{Q}\) if \(\mathsf{Q}^{\prime}=\mathbf{H}(\mathsf{Q}^{\prime})\cap\mathsf{Q}\). A quasivariety \(\mathsf{Q}\) is _primitive_ if every subquasivariety of \(\mathsf{Q}\) is equational in \(\mathsf{Q}\). It is clear from the discussion above that this concept is preserved by categorical equivalence and that the lattice isomorphism described above sends primitive subquasivarieties to primitive subquasivarieties.
### Projectivity, weak projectivity and exactness
We now introduce the algebraic notions that will be the key tools for our investigation: projective, weakly projective, exact, and finitely presented algebras.
**Definition 2.7**.: Given a class \(\mathsf{K}\) of algebras, an algebra \(\mathbf{A}\in\mathsf{K}\) is _projective_ in \(\mathsf{K}\) if for all \(\mathbf{B},\mathbf{C}\in\mathsf{K}\), any homomorphism \(h:\mathbf{A}\longrightarrow\mathbf{C}\), and any surjective homomorphism \(g:\mathbf{B}\longrightarrow\mathbf{C}\), there is a homomorphism \(f:\mathbf{A}\longrightarrow\mathbf{B}\) such that \(h=gf\).
Determining the projective algebras in a class is usually a challenging problem, especially in a general setting. If, however, \(\mathsf{K}\) contains all the free algebras on \(\mathsf{K}\) (in particular, if \(\mathsf{K}\) is a quasivariety), projectivity admits a simpler formulation. We call an algebra \(\mathbf{B}\) a _retract_ of an algebra \(\mathbf{A}\) if there is a homomorphism \(g:\mathbf{A}\longrightarrow\mathbf{B}\) and a homomorphism \(f:\mathbf{B}\longrightarrow\mathbf{A}\) with \(gf=\operatorname{id}_{\mathbf{B}}\) (and thus, necessarily, \(f\) is injective and \(g\) is surjective). The following theorem was proved first by Whitman for lattices [75] but it is well-known to hold for any class of algebras.
**Theorem 2.8**.: _Let \(\mathsf{Q}\) be a quasivariety. Then the following are equivalent:_
1. \(\mathbf{A}\) _is projective in_ \(\mathsf{Q}\)_;_
2. \(\mathbf{A}\) _is a retract of a free algebra in_ \(\mathsf{Q}\)_._
3. \(\mathbf{A}\) _is a retract of a projective algebra in_ \(\mathsf{Q}\)_._
_In particular every free algebra in \(\mathsf{Q}\) is projective in \(\mathsf{Q}\)._
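For instance, in the variety of abelian groups the free algebras are the free abelian groups, so \(\mathbb{Z}\) (free on one generator) is projective; on the other hand \(\mathbb{Z}/2\mathbb{Z}\) is not projective, since free abelian groups are torsion-free and hence every homomorphism from \(\mathbb{Z}/2\mathbb{Z}\) into a free abelian group is trivial, so \(\mathbb{Z}/2\mathbb{Z}\) cannot be a retract of a free algebra.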
**Definition 2.9**.: Given a quasivariety \(\mathsf{Q}\) we say that an algebra is _finitely presented in \(\mathsf{Q}\)_ if there exists a finite set \(X\) and a finite set \(H\) of pairs of terms over \(X\) such that \(\mathbf{A}\cong\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(H)\).
The proof of the following theorem is standard (but see [47]).
**Theorem 2.10**.: _For a finitely presented algebra \(\mathbf{A}\in\mathsf{Q}\) the following are equivalent:_
1. \(\mathbf{A}\) _is projective in_ \(\mathsf{Q}\)_;_
2. \(\mathbf{A}\) _is projective in the class of all finitely presented algebras in_ \(\mathsf{Q}\)_;_
3. \(\mathbf{A}\) _is a retract of a finitely generated free algebra in_ \(\mathsf{Q}\)_._
As a consequence we stress that if \(\mathsf{Q}\) is a quasivariety and \(\mathsf{V}=\mathbf{V}(\mathsf{Q})\) then all the algebras that are projective in \(\mathsf{Q}\) are also projective in \(\mathsf{V}\) (and vice versa). Moreover, all the finitely generated projective algebras in \(\mathsf{Q}\) lie inside \(\mathbf{Q}(\mathbf{F}_{\mathsf{Q}}(\omega))\).
**Definition 2.11**.: An algebra \(\mathbf{A}\) is _weakly projective in an algebra \(\mathbf{B}\)_ if \(\mathbf{A}\in\mathbf{H}(\mathbf{B})\) implies \(\mathbf{A}\in\mathbf{S}(\mathbf{B})\); an algebra is _weakly projective in a class \(\mathsf{K}\)_ if it is weakly projective in any algebra \(\mathbf{B}\in\mathsf{K}\).
**Definition 2.12**.: If \(\mathsf{Q}\) is a quasivariety of algebras and \(\mathbf{A}\in\mathsf{Q}\), let \(G_{\mathbf{A}}\) be the set of generators of \(\mathbf{A}\); \(\mathbf{A}\) is _exact_ in \(\mathsf{Q}\) if it is weakly projective in some \(\mathbf{F}_{\mathsf{Q}}(X)\) with \(|X|\geq|G_{\mathbf{A}}|\).
Clearly any projective algebra in \(\mathsf{Q}\) is weakly projective in \(\mathsf{Q}\) and any weakly projective algebra in \(\mathsf{Q}\) is exact in \(\mathsf{Q}\).
Observe also the following consequence of the definition.
**Lemma 2.13**.: _Let \(\mathsf{Q}\) be a quasivariety and let \(\mathbf{A}\) be a finitely generated algebra in \(\mathsf{Q}\); then the following are equivalent:_
1. \(\mathbf{A}\) _is exact in_ \(\mathsf{Q}\)_;_
2. \(\mathbf{A}\in\mathbf{S}(\mathbf{F}_{\mathsf{Q}}(\omega))\)_._
Therefore for finitely generated algebras our definition of exactness coincides with the one in [23]. We close this subsection with a couple of results connecting projectivity and weak projectivity.
**Proposition 2.14**.: _Let \(\mathbf{A}\) be a finite subdirectly irreducible algebra; if \(\mathbf{A}\) is weakly projective in \(\mathbf{Q}(\mathbf{A})\), then it is projective in \(\mathbf{Q}(\mathbf{A})\)._
Proof.: Let \(\mathsf{Q}=\mathbf{Q}(\mathbf{A})\); since \(\mathbf{A}\) is finite, \(\mathsf{Q}\) is locally finite. Let \(\mathbf{F}\) be a finitely generated (hence finite) free algebra in \(\mathsf{Q}\) such that \(\mathbf{A}\in\mathbf{H}(\mathbf{F})\); since \(\mathbf{A}\) is weakly projective, \(\mathbf{A}\) is embeddable in \(\mathbf{F}\) and without loss of generality we may assume that \(\mathbf{A}\leq\mathbf{F}\). Consider the set
\[V=\{\alpha\in\operatorname{Con}_{\mathsf{Q}}(\mathbf{F}):\alpha\cap A^{2}=0_{ \mathbf{A}}\},\]
where we denote by \(0_{\mathbf{A}}\) the trivial congruence of \(\mathbf{A}\). It is easy to see that \(V\) is an inductive poset, so we may apply Zorn's Lemma to find a maximal congruence \(\theta\in V\). Clearly \(a\longmapsto a/\theta\) is an embedding of \(\mathbf{A}\) into \(\mathbf{F}/\theta\). We claim that \(\mathbf{F}/\theta\) is relative subdirectly irreducible and to prove so, since everything is finite, it is enough to show that \(\theta\) is meet irreducible in \(\operatorname{Con}_{\mathsf{Q}}(\mathbf{F})\); so let \(\alpha,\beta\in\operatorname{Con}_{\mathsf{Q}}(\mathbf{F})\) such that \(\alpha\wedge\beta=\theta\). Then
\[0_{\mathbf{A}}=\theta\cap A^{2}=(\alpha\wedge\beta)\cap A^{2}=(\alpha\cap A^{2})\wedge(\beta\cap A^{2});\]
But \(\mathbf{A}\) is subdirectly irreducible, so \(0_{\mathbf{A}}\) is meet irreducible in \(\operatorname{Con}(\mathbf{A})\); hence either \(\alpha\cap A^{2}=0_{\mathbf{A}}\) or \(\beta\cap A^{2}=0_{\mathbf{A}}\), so either \(\alpha\in V\) or \(\beta\in V\). Since \(\theta\) is maximal in \(V\), either \(\alpha=\theta\) or \(\beta=\theta\), which proves that \(\mathbf{F}/\theta\) is relative subdirectly irreducible. Therefore, by Theorem 2.3(2), \(\mathbf{F}/\theta\in\mathbf{IS}(\mathbf{A})\); since \(\mathbf{F}/\theta\) and \(\mathbf{A}\) are both finite and each one is embeddable in the other, they are in fact isomorphic. In particular, the embedding \(a\longmapsto a/\theta\) is onto, hence an isomorphism; composing the natural surjection \(\mathbf{F}\longrightarrow\mathbf{F}/\theta\) with its inverse yields a homomorphism from \(\mathbf{F}\) onto \(\mathbf{A}\) that maps each \(a\in A\) to itself. This shows that \(\mathbf{A}\) is a retract of \(\mathbf{F}\), and therefore \(\mathbf{A}\) is projective in \(\mathbf{Q}(\mathbf{A})\).
For varieties we have to add the hypothesis of congruence distributivity, since the use of Theorem 2.2(2) is paramount; for the very similar proof see [53, Theorem 9].
**Proposition 2.15**.: _Let \(\mathbf{A}\) be a finite subdirectly irreducible algebra such that \(\mathbf{V}(\mathbf{A})\) is congruence distributive; if \(\mathbf{A}\) is weakly projective in \(\mathbf{V}(\mathbf{A})\), then it is projective in \(\mathbf{V}(\mathbf{A})\)._
We observe that in algebraic categories projectivity is a property preserved by categorical equivalence and the same holds for weak projectivity and exactness. Finally by [42] being finitely presented and being finitely generated are also categorical properties preserved by equivalences.
### Algebraic unification
The main objects of our study, i.e., the notions of universal and structural completeness, are closely related to unification problems. The classical syntactic unification problem asks, given two terms \(s,t\), to find a _unifier_ for them; that is, a uniform replacement of the variables occurring in \(s\) and \(t\) by other terms that makes \(s\) and \(t\) identical. When the latter syntactical identity is replaced by equality modulo a given equational theory \(E\), one speaks of \(E\)_-unification_. S. Ghilardi [47] proved that there is a completely algebraic way of studying (\(E\)-)unification problems in varieties of logic, which makes use of finitely presented and projective algebras and thus is invariant under categorical equivalence.
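For instance, in the purely syntactic case the terms \(f(x,a)\) and \(f(b,y)\), where \(a,b\) are distinct constants, are unified by the substitution \(\sigma=\{x\mapsto b,\ y\mapsto a\}\), which is in fact their most general unifier; on the other hand \(f(x,x)\) and \(f(a,b)\) have no unifier, since any unifier would have to send \(x\) to both \(a\) and \(b\).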
Let us discuss Ghilardi's idea in some detail showing how it can be applied to quasivarieties. If \(\mathsf{Q}\) is a quasivariety and \(\Sigma\) is a finite set of equations in the variables \(X=\{x_{1},\ldots,x_{n}\}\) by a _substitution_\(\sigma\) we mean an assignment from \(X\) to \(\mathbf{F}_{\mathsf{Q}}(\omega)\), extending to a homomorphism from \(\mathbf{F}_{\mathsf{Q}}(X)\) to \(\mathbf{F}_{\mathsf{Q}}(\omega)\).
**Definition 2.16**.: A _unification problem_ for a quasivariety \(\mathsf{Q}\) is a finite set of identities \(\Sigma\) in the language of \(\mathsf{Q}\); \(\Sigma\) is _unifiable_ in \(\mathsf{Q}\) if there is a substitution \(\sigma\) such that \(\mathsf{Q}\models\sigma(\Sigma)\), i.e.
\[\mathsf{Q}\models p(\sigma(x_{1}),\ldots,\sigma(x_{n}))\approx q(\sigma(x_{1} ),\ldots,\sigma(x_{n}))\]
for all \(p\approx q\in\Sigma\). The substitution \(\sigma\) is called a _unifier_ for \(\Sigma\).
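For example, in the variety of Boolean algebras the set \(\Sigma=\{x\lor y\approx 1\}\) is unifiable: the substitution \(\sigma(x)=1\), \(\sigma(y)=y\) is a unifier, since \(1\lor y\approx 1\) is valid in every Boolean algebra. On the other hand \(\{x\wedge\neg x\approx 1\}\) is not unifiable, because \(t\wedge\neg t\approx 0\) holds in all Boolean algebras for every term \(t\), and \(0\approx 1\) fails in any nontrivial Boolean algebra.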
Observe that \(\Sigma\) is _unifiable_ in \(\mathsf{Q}\) if and only if it is unifiable in \(\mathbf{V}(\mathsf{Q})\). Let us now present the algebraic approach, where a unification problem can be represented by a finitely presented algebra in \(\mathsf{Q}\).
**Definition 2.17**.: If \(\mathbf{A}\) is in \(\mathsf{Q}\), a _unifier_ for \(\mathbf{A}\) is a homomorphism \(u:\mathbf{A}\longrightarrow\mathbf{P}\) where \(\mathbf{P}\) is a projective algebra in \(\mathsf{Q}\); we say that an algebra is _unifiable in \(\mathsf{Q}\)_ if at least one such homomorphism exists. A quasivariety \(\mathsf{Q}\) is _unifiable_ if every finitely presented algebra in \(\mathsf{Q}\) is unifiable.
**Notation 2**.: When we write \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\), \(\theta_{\mathsf{Q}}(\Sigma)\) is the relative congruence generated in \(\mathbf{F}_{\mathsf{Q}}(X)\) by the set \(\{(p,q):p\approx q\in\Sigma\}\).
The following summarizes the needed results of [47] applied to quasivarieties.
**Theorem 2.18**.: _Let \(\mathsf{Q}\) be a quasivariety, and let \(\Sigma\) be a finite set of equations in the language of \(\mathsf{Q}\) with variables in a (finite) set \(X\); then:_
1. _if_ \(\Sigma\) _is unifiable via_ \(\sigma:\mathbf{F}_{\mathsf{Q}}(X)\to\mathbf{F}_{\mathsf{Q}}(Y)\) _then_ \(u_{\sigma}:\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\to\mathbf{F} _{\mathsf{Q}}(Y)\) _defined by_ \[u_{\sigma}(t/\theta_{\mathsf{Q}}(\Sigma))=\sigma(t)\] _is a unifier for_ \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\)_;_
2. _conversely let_ \(\mathbf{A}=\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\)_. If there is a unifier_ \(u:\mathbf{A}\to\mathbf{P}\)_, where_ \(\mathbf{P}\) _is projective and a retract of_ \(\mathbf{F}_{\mathsf{Q}}(Y)\) _witnessed by an embedding_ \(i:\mathbf{P}\to\mathbf{F}_{\mathsf{Q}}(Y)\)_, the substitution_ \[\sigma_{u}:x\longmapsto i(u(x/\theta_{\mathsf{Q}}(\Sigma)))\] _is a unifier for_ \(\Sigma\) _in_ \(\mathsf{Q}\)_._
Proof.: For the first claim, consider \(\sigma:\mathbf{F}_{\mathsf{Q}}(X)\to\mathbf{F}_{\mathsf{Q}}(Y)\) and the natural epimorphism \(\pi_{\Sigma}:\mathbf{F}_{\mathsf{Q}}(X)\to\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\). Now \(\theta_{\mathsf{Q}}(\Sigma)\) is the least relative congruence of \(\mathbf{F}_{\mathsf{Q}}(X)\) containing the set of pairs \(S=\{(p,q):p\approx q\in\Sigma\}\), and \(\ker(\sigma)\) is a relative congruence containing \(S\) (since \(\sigma\) unifies \(\Sigma\) and \(\mathbf{F}_{\mathsf{Q}}(X)/\ker(\sigma)\) embeds in \(\mathbf{F}_{\mathsf{Q}}(Y)\in\mathsf{Q}\)); hence \(\theta_{\mathsf{Q}}(\Sigma)\subseteq\ker(\sigma)\), and by the Second Homomorphism Theorem there is a (unique) homomorphism \(u_{\sigma}:\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\to\mathbf{F}_{\mathsf{Q}}(Y)\) with \(u_{\sigma}\pi_{\Sigma}=\sigma\), which is the required unifier.
The second claim is easily seen: \(\sigma_{u}\) is defined by a composition of homomorphisms and, as above, the set of pairs \(S=\{(p,q):p\approx q\in\Sigma\}\) is contained in its kernel, which yields that \(\sigma_{u}\) is a unifier for \(\Sigma\) in \(\mathsf{Q}\).
**Corollary 2.19**.: _A finite set of identities \(\Sigma\) is unifiable in \(\mathsf{Q}\) if and only if the finitely presented algebra \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\) is unifiable in \(\mathsf{Q}\)._
The following observation shows how to characterize unifiability in quasivarieties.
**Definition 2.20**.: For a quasivariety \(\mathsf{Q}\), we let \(\mathbf{F}_{\mathsf{Q}}\) be the _smallest free algebra_, i.e. \(\mathbf{F}_{\mathsf{Q}}(\emptyset)\) (if there are constant operations) or else \(\mathbf{F}_{\mathsf{Q}}(x)\).
We have the following:
**Lemma 2.21**.: _Let \(\mathsf{Q}\) be a quasivariety and let \(\mathbf{A}\in\mathsf{Q}\). Then the following are equivalent:_
1. \(\mathbf{A}\) _is unifiable in_ \(\mathsf{Q}\)_;_
2. _there is a homomorphism from_ \(\mathbf{A}\) _to_ \(\mathbf{F}_{\mathsf{Q}}\)_._
Proof.: Note that (2) trivially implies (1), since \(\mathbf{F}_{\mathsf{Q}}\) is projective. Vice versa, if \(\mathbf{A}\) is unifiable, there is a homomorphism from \(\mathbf{A}\) to some projective algebra \(\mathbf{P}\). Since \(\mathbf{P}\) is a retract of some free algebra in \(\mathsf{Q}\), and \(\mathbf{F}_{\mathsf{Q}}\) is a homomorphic image of every free algebra in \(\mathsf{Q}\), the claim follows.
The above lemma implies for instance that if \(\mathbf{F}_{\mathsf{Q}}\) is trivial, then \(\mathsf{Q}\) is unifiable, since every algebra admits a homomorphism onto a trivial algebra. Hence, examples of unifiable quasivarieties include those of lattices, groups, lattice-ordered abelian groups, and residuated lattices. On the other hand, a bounded lattice or a bounded residuated lattice (these classes are explored in Subsections 5.1 and 5.2 respectively) is unifiable if and only if it admits a homomorphism onto the algebra (over the appropriate signature) with two elements \(0\) and \(1\).
We observe in passing that if \(\mathbf{A}\cong\mathbf{F}_{\mathsf{Q}}(X)/\theta\) is a finitely presented unifiable algebra in \(\mathsf{Q}\), witnessed by a unifier \(u:\mathbf{A}\longrightarrow\mathbf{P}\), then \(u\) can be split into a homomorphism onto its image \(u(\mathbf{A})\), and an embedding from \(u(\mathbf{A})\) to \(\mathbf{P}\). By the Third Homomorphism Theorem there is a \(\theta^{\prime}\in\operatorname{Con}(\mathbf{F}_{\mathsf{Q}}(X))\) corresponding to the kernel of the onto homomorphism \(u:\mathbf{A}\longrightarrow u(\mathbf{A})\), \(\theta^{\prime}\geq\theta\), such that \(\mathbf{F}_{\mathsf{Q}}(X)/\theta^{\prime}\)
embeds in \(\mathbf{P}\); note that \(\theta^{\prime}\in\operatorname{Con}_{\mathsf{Q}}(\mathbf{F}_{\mathsf{Q}}(X))\), since \(\mathbf{P}\in\mathsf{Q}\). The diagram in Figure 1 shows that indeed \(\mathbf{F}_{\mathsf{Q}}(X)/\theta^{\prime}\) is exact.
Let us now introduce the usual notion of order among unifiers. Given two unifiers \(u_{1},u_{2}\) for \(\mathbf{A}\), we say that \(u_{1}\) is _less general than_ \(u_{2}\) (and we write \(u_{1}\preceq u_{2}\)) if there is a homomorphism \(h\) with \(u_{1}=h\,u_{2}\).
Clearly \(\preceq\) is a preordering and so the equivalence classes of the associated equivalence relation (i.e. the unifiers that are _equally general_) form a poset \(U_{\mathbf{A}}\); using the maximal elements of that poset it is possible to define a hierarchy of unification types (see [47]). In particular, the unification type is _unitary_ if \(U_{\mathbf{A}}\) has a maximum element, which is called _the most general unifier_ or _mgu_.
**Definition 2.22**.: We say that a quasivariety \(\mathsf{Q}\) has _projective unifiers_ if every finitely presented unifiable algebra in \(\mathsf{Q}\) is projective, and that it has _exact unifiers_ if every finitely presented unifiable algebra in \(\mathsf{Q}\) is exact.
If \(\mathsf{Q}\) has projective unifiers, then (from the algebraic perspective) the identity map is a unifier, and it is also the most general unifier. Next we have a lemma whose proof is straightforward (modulo Lemma 2.21).
**Lemma 2.23**.: _Let \(\mathsf{Q}\) be a quasivariety; then the following are equivalent:_
1. \(\mathsf{Q}\) _has projective (exact) unifiers;_
2. _for any finitely presented_ \(\mathbf{A}\in\mathsf{Q}\)_,_ \(\mathbf{A}\) _has_ \(\mathbf{F}_{\mathsf{Q}}\) _as a homomorphic image if and only if_ \(\mathbf{A}\) _is projective (exact)._
If \(\mathsf{Q}\) is locally finite, then we have a necessary and sufficient condition.
**Lemma 2.24**.: _Let \(\mathsf{Q}\) be a locally finite quasivariety of finite type, then the following are equivalent:_
1. \(\mathsf{Q}\) _has projective unifiers;_
2. _every finite unifiable algebra in_ \(\mathsf{Q}\) _is projective in the class of finite algebras in_ \(\mathsf{Q}\)_._
Proof.: (1) implies (2) is obvious. Assume (2), let \(\mathbf{A}\) be unifiable and finite, and let \(\mathbf{B}\in\mathsf{Q}\) be such that \(f:\mathbf{B}\longrightarrow\mathbf{A}\) is an onto homomorphism. Let \(a_{1},\ldots,a_{n}\) be generators of \(\mathbf{A}\) and let \(b_{1},\ldots,b_{n}\in B\) with \(f(b_{i})=a_{i}\) for \(i=1,\ldots,n\); if \(\mathbf{B}^{\prime}\) is the subalgebra generated by \(b_{1},\ldots,b_{n}\), then \(f\) restricted to \(\mathbf{B}^{\prime}\) is onto, and \(\mathbf{B}^{\prime}\) is finite since \(\mathsf{Q}\) is locally finite. Hence by hypothesis there exists a \(g:\mathbf{A}\longrightarrow\mathbf{B}^{\prime}\) such that \(fg\) is the identity on \(\mathbf{A}\). This shows that every surjection onto \(\mathbf{A}\) in \(\mathsf{Q}\) splits; in particular \(\mathbf{A}\) is a retract of a free algebra, hence projective in \(\mathsf{Q}\) by Theorem 2.8. Thus (1) holds.
Having exact unifiers is weaker than having projective unifiers:
**Example 2.25**.: The variety \(\mathsf{D}\) of distributive lattices is unifiable since it has no constants and it is idempotent; hence its least free algebra is trivial. But \(\mathsf{D}\) does not have projective unifiers: a finite distributive lattice is projective if and only if the meet of any two join irreducible elements is again join irreducible [11], so there are finite non projective distributive lattices. However every finitely presented (i.e. finite) distributive lattice is exact [22].
Figure 1.
**Example 2.26**.: A different example is the variety \(\mathsf{ST}\) of Stone algebras; a Stone algebra is a pseudocomplemented bounded distributive lattice in the signature \((\wedge,\vee,*,0,1)\) such that \(x^{*}\lor x^{**}\approx 1\) holds. A Stone algebra is unifiable if and only if it has a homomorphism into the two element Boolean algebra if and only if it is nontrivial. While there are nontrivial Stone algebras that are not projective, any nontrivial Stone algebra is exact ([22, Lemma 17]). Hence \(\mathsf{ST}\) has exact unifiers.
Moreover, there are examples of varieties having a most general unifier that do not have projective unifiers.
**Example 2.27**.: From the results in [48], the variety \(\mathsf{SH}\) of Stonean Heyting algebras (that is, Heyting algebras such that \(\neg x\vee\neg\neg x\approx 1\) holds) is such that every unifiable algebra \(\mathbf{A}\in\mathsf{SH}\) has a most general unifier. However, \(\mathsf{SH}\) does not have projective unifiers. The algebra \(\mathbf{F}_{\mathsf{SH}}(x,y,z)/\theta\), where \(\theta\) is the congruence generated by the pair \((\neg x\rightarrow(y\lor z),1)\), is unifiable but not projective. We observe that Ghilardi's argument relies heavily on some properties of Heyting algebras and uses Kripke models, making it difficult to generalize.
Trivial examples show that having projective or exact unifiers is not inherited in general by subvarieties (see for instance [35, Example 7.2]). The following lemma (that we extract from [35, Lemma 5.4]) gives a sufficient condition for having projective unifiers. We write a detailed proof for the reader's convenience.
**Lemma 2.28** ([35]).: _Let \(\mathsf{Q}\) be a quasivariety and let \(\mathsf{Q}^{\prime}\) be a subquasivariety of \(\mathsf{Q}\) such that if \(\mathbf{B}=\mathbf{F}_{\mathsf{Q}^{\prime}}(X)/\theta_{\mathsf{Q}^{\prime}}(\Sigma)\) is finitely presented and unifiable in \(\mathsf{Q}^{\prime}\), then \(\mathbf{A}=\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\) is unifiable in \(\mathsf{Q}\). If \(\mathsf{Q}\) has projective unifiers then \(\mathsf{Q}^{\prime}\) has projective unifiers._
Proof.: It is an easy exercise in general algebra to show that if \(\Theta=\bigcap\{\theta\in\operatorname{Con}(\mathbf{F}_{\mathsf{Q}}(X)): \mathbf{F}_{\mathsf{Q}}(X)/\theta\in\mathsf{Q}^{\prime}\}\) then
\[\mathbf{F}_{\mathsf{Q}^{\prime}}(X)/\theta_{\mathsf{Q}^{\prime}}(\Sigma) \cong\mathbf{F}_{\mathsf{Q}}(X)/(\theta_{\mathsf{Q}}(\Sigma)\vee\Theta).\]
It follows that \(\mathbf{B}\) is a homomorphic image of \(\mathbf{A}\) via the natural surjection
\[p:a/\theta_{\mathsf{Q}}(\Sigma)\longmapsto a/(\theta_{\mathsf{Q}}(\Sigma) \vee\Theta)\]
composed with the isomorphism. Moreover if \(f:\mathbf{A}\longrightarrow\mathbf{C}\) is a homomorphism and \(\mathbf{C}\in\mathsf{Q}^{\prime}\), then \(\ker(p)\leqslant\ker(f)\) and by the Second Homomorphism Theorem there is a \(f^{\prime}:\mathbf{B}\longrightarrow\mathbf{C}\) with \(f^{\prime}p=f\).
Now let \(\mathbf{B}=\mathbf{F}_{\mathsf{Q}^{\prime}}/\theta_{\mathsf{Q}^{\prime}}(\Sigma)\) be finitely presented and unifiable and let \(\mathbf{A}=\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\); then \(\mathbf{A}\) is finitely presented and unifiable as well, so, since \(\mathsf{Q}\) has projective unifiers, \(\mathbf{A}\) is projective in \(\mathsf{Q}\). We now show that \(\mathbf{B}\) is projective. Suppose there are algebras \(\mathbf{C},\mathbf{D}\in\mathsf{Q}^{\prime}\subseteq\mathsf{Q}\) and homomorphisms \(h:\mathbf{B}\rightarrow\mathbf{D},g:\mathbf{C}\rightarrow\mathbf{D}\) with \(g\) surjective. Then, there is a homomorphism \(hp:\mathbf{A}\rightarrow\mathbf{D}\), and since \(\mathbf{A}\) is projective by the definition of projectivity there is a homomorphism \(f:\mathbf{A}\rightarrow\mathbf{C}\) such that \(gf=hp\). Factoring \(f\) as above, there is \(f^{\prime}\) such that \(f^{\prime}p=f\). Therefore since \(gf^{\prime}p=gf=hp\) and \(p\) is surjective, we get that \(gf^{\prime}=h\) which means that \(\mathbf{B}\) is projective in \(\mathsf{Q}^{\prime}\).
We will see later in Section 3.2 (Example 3.29) that Lemma 2.28 does not hold with "projective unifiers" replaced by "exact unifiers". We can build on the previous lemma and obtain the following.
**Lemma 2.29**.: _Suppose that \(\mathsf{Q}\) is a quasivariety such that \(\mathbf{F}_{\mathsf{Q}}=\mathbf{F}_{\mathsf{Q}^{\prime}}\) for all \(\mathsf{Q}^{\prime}\subseteq\mathsf{Q}\). If \(\mathsf{Q}\) has projective unifiers, then every subquasivariety \(\mathsf{Q}^{\prime}\) has projective unifiers._
Proof.: Let \(\mathsf{Q}^{\prime}\) be a subquasivariety of \(\mathsf{Q}\), let \(\mathbf{B}=\mathbf{F}_{\mathsf{Q}^{\prime}}(X)/\theta_{\mathsf{Q}^{\prime}}(\Sigma)\) be finitely presented and unifiable in \(\mathsf{Q}^{\prime}\) and let \(\mathbf{A}=\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\). Then \(\mathbf{B}\) is a homomorphic image of \(\mathbf{A}\) and, since \(\mathbf{B}\) is unifiable, there is a homomorphism from \(\mathbf{B}\) to \(\mathbf{F}_{\mathsf{Q}^{\prime}}=\mathbf{F}_{\mathsf{Q}}\). Hence \(\mathbf{A}\) is unifiable as well; thus the hypotheses of Lemma 2.28 are satisfied, and so \(\mathsf{Q}^{\prime}\) has projective unifiers.
We close this subsection with a corollary, appearing also in [35], that is useful for some examples we will explore in what follows. We reproduce the easy proof for the reader's convenience.
**Corollary 2.30**.: _Let \(\mathsf{Q}\) be a quasivariety and let \(\mathbf{V}(\mathsf{Q})=\mathsf{V}\); if \(\mathsf{V}\) has exact (projective) unifiers, then so does \(\mathsf{Q}\)._
Proof.: First recall that \(\mathsf{Q}\) and \(\mathsf{V}\) have the same free algebras. Let \(\mathbf{A}=\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{V}}(\Sigma)\) and \(\mathbf{B}=\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\); if \(\mathbf{B}\) is unifiable then, as \(\mathbf{B}\) is a homomorphic image of \(\mathbf{A}\) via the epimorphism \(p\) described in the proof of Lemma 2.28, \(\mathbf{A}\) is unifiable as well, hence it is exact. Therefore there is an embedding \(u:\mathbf{A}\longrightarrow\mathbf{F}_{\mathsf{Q}}(\omega)\); then by (the proof of) Lemma 2.28 there is a \(g:\mathbf{B}\longrightarrow\mathbf{F}_{\mathsf{Q}}(\omega)\) with \(gp=u\). Since \(u=gp\) is injective, so is \(p\), and hence \(\mathbf{A}\) and \(\mathbf{B}\) are isomorphic. This proves the thesis.
### Structural and universal completeness
We now introduce the main notions of interest of this work, that is, structural and universal completeness.
Let \(\mathcal{L}\) be a logic with consequence relation \(\vdash\). We call _clause_ of \(\mathcal{L}\) an ordered pair \((\Sigma,\Gamma)\) where \(\Sigma,\Gamma\) are finite sets of formulas. We usually write a clause as \(\Sigma\Rightarrow\Gamma\). A _rule_ is a clause \(\Sigma\Rightarrow\Gamma\) where \(\Gamma=\{\gamma\}\). A rule is _admissible_ in a logic if, when added to its calculus, it does not produce new theorems. More precisely:
**Definition 2.31**.: A clause \(\Sigma\Rightarrow\Delta\) is _admissible_ in a logic \(\mathcal{L}\) if every substitution making all the formulas in \(\Sigma\) a theorem, also makes at least one of the formulas in \(\Delta\) a theorem.
Moreover, we say that a clause \(\Sigma\Rightarrow\Delta\) is _derivable_ in a logic \(\mathcal{L}\) if \(\Sigma\vdash\delta\) for some \(\delta\in\Delta\). An admissible clause is not necessarily derivable; a popular example is Harrop's rule for intuitionistic logic
\[\{\neg p\rightarrow(q\lor r)\}\ \Rightarrow\ \{(\neg p\to q)\lor(\neg p \to r)\}\]
which is admissible but not derivable.
**Definition 2.32**.: Let \(\mathcal{L}\) be a logic. A clause \(\Sigma\Rightarrow\Gamma\) is _passive_ in \(\mathsf{Q}\) if there is no substitution making the premises \(\Sigma\) a theorem of \(\mathcal{L}\); a clause is _active_ otherwise. Finally, a clause \(\Sigma\Rightarrow\Delta\) is _negative_ if \(\Delta=\emptyset\).
We observe that every admissible negative clause is necessarily passive.
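For instance, in classical propositional logic the negative clause \(\{p\wedge\neg p\}\Rightarrow\emptyset\) is passive, since no substitution makes \(p\wedge\neg p\) a theorem, and hence it is vacuously admissible; on the other hand, no clause with empty conclusion is derivable, so it witnesses a failure of universal completeness.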
**Definition 2.33**.: A logic is said to be
* _universally complete_ if every admissible clause is derivable;
* _structurally complete_ if every admissible rule is derivable;
* _actively universally complete_ if every active admissible clause is derivable;
* _actively structurally complete_ if every active admissible rule is derivable1 Footnote 1: Logics with this property have been more often called _almost structurally complete_ but here we follow A. Citkin’s advice (see [35, footnote 2, page 8]).
* _passively universally complete_ if every passive admissible clause is derivable;
* _passively structurally complete_ if every passive admissible rule is derivable;
* _non-negative universally complete_ if every non-negative admissible clause is derivable.
Modulo algebraizability, one obtains the corresponding notions for a quasivariety. In particular, we can express admissibility and derivability of clauses in \(\mathcal{L}_{\mathsf{Q}}\) using the (quasi)equational logic of \(\mathsf{Q}\); this is because the Blok-Pigozzi Galois connection transforms (sets of) formulas in \(\mathcal{L}_{\mathsf{Q}}\) into (sets of) equations in \(\mathsf{Q}\) in a uniform way. The obtained notions make sense for quasivarieties that do not necessarily correspond to a logic.
**Definition 2.34**.: Let \(\mathsf{Q}\) be a quasivariety. A universal sentence \(\Sigma\Rightarrow\Delta\) is _admissible_ in \(\mathsf{Q}\) if every substitution unifying all the identities in \(\Sigma\) also unifies at least one of the identities in \(\Delta\). A universal sentence is _passive_ if there is no substitution unifying its premises, _active_ otherwise. \(\mathsf{Q}\) is _(active/passive) universally/structurally complete_ if every (active/passive) admissible universal sentence/quasiequation is valid in \(\mathsf{Q}\).
If \(P\) is one of those properties, then we say that a logic (or a quasivariety) is _hereditarily_ \(P\) if the logic (or the quasivariety) and all its extensions have the property \(P\). Some of these properties are well-known to be distinct: for instance classical logic is non-negative universally complete but not universally complete, while intuitionistic logic is not structurally complete (thanks to Harrop's example) but it is passively structurally complete (as reported by Wroński in 2005, see [29]). The following is a consequence of algebraizability.
**Theorem 2.35**.: _Let \(\mathsf{Q}\) be a quasivariety of logic, \(\Sigma,\Delta\) sets of equations in the language of \(\mathsf{Q}\) and \(\Sigma^{\prime},\Delta^{\prime}\) the corresponding sets of formulas in \(\mathcal{L}_{\mathsf{Q}}\). Then:_
1. \(\Sigma^{\prime}\Rightarrow\Delta^{\prime}\) _is admissible in_ \(\mathcal{L}_{\mathsf{Q}}\) _if and only if_ \(\Sigma\Rightarrow\Delta\) _is admissible in_ \(\mathsf{Q}\)_;_
2. \(\Sigma^{\prime}\Rightarrow\Delta^{\prime}\) _is derivable in_ \(\mathcal{L}_{\mathsf{Q}}\) _if and only if_ \(\mathsf{Q}\models\Sigma\Rightarrow\Delta\)_._
Moreover, by Corollary 2.19 we get the following.
**Proposition 2.36**.: _Let \(\mathsf{Q}\) be a quasivariety of logic, \(\Sigma,\Delta\) sets of equations in the language of \(\mathsf{Q}\) and \(\Sigma^{\prime},\Delta^{\prime}\) the corresponding sets of formulas in \(\mathcal{L}_{\mathsf{Q}}\). Then:_
1. \(\Sigma^{\prime}\Rightarrow\Delta^{\prime}\) _is active in_ \(\mathcal{L}_{\mathsf{Q}}\) _if and only if_ \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\) _is unifiable in_ \(\mathsf{Q}\)_;_
2. \(\Sigma^{\prime}\Rightarrow\Delta^{\prime}\) _is passive in_ \(\mathcal{L}_{\mathsf{Q}}\) _if and only if_ \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\) _is not unifiable in_ \(\mathsf{Q}\)_._
The next lemma (also derivable from [22, Theorem 2]) characterizes admissibility of universal sentences.
**Lemma 2.37**.: _Let \(\mathsf{Q}\) be any quasivariety, let \(\Sigma\Rightarrow\Delta\) be a clause in the language of \(\mathsf{Q}\) and let \(\mathsf{U}_{\Sigma\Rightarrow\Delta}=\{\mathbf{A}\in\mathsf{Q}:\mathbf{A} \models\Sigma\Rightarrow\Delta\}\). Then the following are equivalent:_
1. \(\Sigma\Rightarrow\Delta\) _is admissible in_ \(\mathsf{Q}\)_;_
2. \(\mathbf{F}_{\mathsf{Q}}(\omega)\models\Sigma\Rightarrow\Delta\)_;_
3. \(\mathbf{H}(\mathsf{Q})=\mathbf{H}(\mathsf{U}_{\Sigma\Rightarrow\Delta})\)_._
Proof.: The equivalence between (1) and (2) follows directly from the definition of admissibility. Assume now \(\mathbf{F}_{\mathsf{Q}}(\omega)\models\Sigma\Rightarrow\Delta\), then \(\mathbf{F}_{\mathsf{Q}}(\omega)\in\mathsf{U}_{\Sigma\Rightarrow\Delta}\). Clearly \(\mathbf{HSP}_{u}(\mathbf{F}_{\mathsf{Q}}(\omega))\subseteq\mathbf{H}( \mathsf{U}_{\Sigma\Rightarrow\Delta})\subseteq\mathbf{H}(\mathsf{Q})\). Now every algebra is embeddable in an ultraproduct of its finitely generated subalgebras and every finitely generated algebra is a homomorphic image of \(\mathbf{F}_{\mathsf{Q}}(\omega)\). Therefore if \(\mathbf{A}\in\mathsf{Q}\), then \(\mathbf{A}\in\mathbf{SP}_{u}\mathbf{H}(\mathbf{F}_{\mathsf{Q}}(\omega))\subseteq \mathbf{HSP}_{u}(\mathbf{F}_{\mathsf{Q}}(\omega))\). So \(\mathbf{HSP}_{u}(\mathbf{F}_{\mathsf{Q}}(\omega))=\mathbf{H}(\mathsf{Q})\) and thus (3) holds.
Conversely assume (3). Since \(\mathbf{F}_{\mathsf{Q}}(\omega)\in\mathbf{H}(\mathsf{Q})=\mathbf{H}(\mathsf{U} _{\Sigma\Rightarrow\Delta})\), there is \(\mathbf{A}\in\mathsf{U}_{\Sigma\Rightarrow\Delta}\) such that \(\mathbf{F}_{\mathsf{Q}}(\omega)\in\mathbf{H}(\mathbf{A})\). Since \(\mathbf{F}_{\mathsf{Q}}(\omega)\) is projective in \(\mathsf{Q}\), it follows that \(\mathbf{F}_{\mathsf{Q}}(\omega)\in\mathbf{S}(\mathbf{A})\subseteq\mathbf{S}( \mathsf{U}_{\Sigma\Rightarrow\Delta})\subseteq\mathsf{U}_{\Sigma\Rightarrow\Delta}\). Therefore, \(\mathbf{F}_{\mathsf{Q}}(\omega)\models\Sigma\Rightarrow\Delta\) and (2) holds.
To conclude the preliminaries, we present the following lemma which will be particularly useful in our proofs.
**Lemma 2.38**.: _Let \(\mathsf{Q}\) be a quasivariety, and \(\Sigma,\Delta\) be finite sets of equations over variables in a finite set \(X\). The following are equivalent:_
1. \(\mathsf{Q}\models\Sigma\Rightarrow\Delta\)_;_
2. \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\models\Sigma\Rightarrow\Delta\)_;_
3. _there is_ \(p\approx q\in\Delta\) _such that_ \(p/\theta_{\mathsf{Q}}(\Sigma)=q/\theta_{\mathsf{Q}}(\Sigma)\) _in_ \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\)_._
Proof.: It is clear that (1) implies (2) and (2) implies (3). We now show that (3) implies (1).
Let \(\mathbf{A}\in\mathsf{Q}\). If there is no assignment of the variables in \(X\) to \(\mathbf{A}\) that models \(\Sigma\), then \(\mathbf{A}\models\Sigma\Rightarrow\Delta\). Otherwise, suppose there is an assignment \(h\) such that \(\mathbf{A},h\models\Sigma\). Then, since \(\theta_{\mathsf{Q}}(\Sigma)\) is the smallest relative congruence of \(\mathbf{F}_{\mathsf{Q}}(X)\) containing the set of pairs \(S=\{(p,q):p\approx q\in\Sigma\}\), by the Second Homomorphism Theorem we can close the following diagram:
That is, there is a homomorphism \(f:\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\rightarrow\mathbf{A}\) such that \(h=f\pi_{\Sigma}\), where \(\pi_{\Sigma}\) is the natural epimorphism from \(\mathbf{F}_{\mathsf{Q}}(X)\) to \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\). Now by (3) there is at least one identity \(p\approx q\in\Delta\) such that \((p,q)\in\ker(\pi_{\Sigma})\). Since \(h=f\pi_{\Sigma}\), \((p,q)\in\ker(h)\), which means that \(\mathbf{A},h\models p\approx q\) and therefore \(\mathbf{A}\models\Sigma\Rightarrow\Delta\). Since \(\mathbf{A}\) is an arbitrary algebra of \(\mathsf{Q}\), this shows that \(\mathsf{Q}\models\Sigma\Rightarrow\Delta\).
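Lemma 2.38 also yields a simple mechanical test: validity of a universal sentence with finitely many variables in a finite algebra can be decided by enumerating all assignments. The following Python sketch (the function names and the term encoding are our own illustrative choices) implements this brute-force check and, as a small sanity test, verifies the negative sentence \(\{x\approx\neg x\}\Rightarrow\emptyset\) in the two-element Boolean algebra, where it holds vacuously.

```python
from itertools import product

def holds(universe, interpret, premises, conclusions, variables):
    """Brute-force check of a universal sentence  premises => conclusions
    in a finite algebra: every assignment satisfying all premises must
    satisfy at least one conclusion (vacuously true when no assignment
    satisfies the premises)."""
    for values in product(universe, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(interpret(s, assignment) == interpret(t, assignment)
               for s, t in premises):
            if not any(interpret(s, assignment) == interpret(t, assignment)
                       for s, t in conclusions):
                return False
    return True

# Terms encoded as nested tuples; only variables and negation are needed here.
def interpret(term, assignment):
    op, arg = term
    if op == "var":
        return assignment[arg]
    if op == "neg":
        return 1 - interpret(arg, assignment)
    raise ValueError(op)

# {x = ~x} => {} in the two-element Boolean algebra {0, 1}:
# no assignment satisfies the premise, so the sentence holds vacuously.
premise = (("neg", ("var", "x")), ("var", "x"))
print(holds([0, 1], interpret, [premise], [], ["x"]))  # True
```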
## 3. Universal completeness
In this section we study from the algebraic perspective the notion of universal completeness and its variations: active, non-negative, and passive universal completeness, together with their hereditary versions. That is, we shall see which algebraic properties correspond to the notions coming from the logical perspective (detailed in the preliminaries, Subsection 2.4). For each notion, we will present a characterization theorem and some examples. While the characterizations of active and passive universal completeness are (to the best of our knowledge) fully original, for the other notions we build on existing ones, adding some new results and a coherent presentation within our framework.
### Universal quasivarieties
We start with universal completeness. The following expands [22, Proposition 6].
**Theorem 3.1**.: _For any quasivariety \(\mathsf{Q}\) the following are equivalent:_
1. \(\mathsf{Q}\) _is universally complete;_
2. _for every universal class_ \(\mathsf{U}\subseteq\mathsf{Q}\)_,_ \(\mathbf{H}(\mathsf{U})=\mathbf{H}(\mathsf{Q})\) _implies_ \(\mathsf{U}=\mathsf{Q}\)_._
3. \(\mathsf{Q}=\mathbf{ISP}_{u}(\mathbf{F}_{\mathsf{Q}}(\omega))\)_;_
4. _every finitely presented algebra in_ \(\mathsf{Q}\) _is in_ \(\mathbf{ISP}_{u}(\mathbf{F}_{\mathsf{Q}}(\omega))\)_._
Proof.: (2) implies (1) via Lemma 2.37. We show that (1) implies (2). Let \(\mathsf{U}\subseteq\mathsf{Q}\) be a universal class such that \(\mathbf{H}(\mathsf{U})=\mathbf{H}(\mathsf{Q})\) and suppose that \(\mathsf{U}\models\Sigma\Rightarrow\Delta\); then
\[\mathbf{H}(\mathsf{Q})=\mathbf{H}(\mathsf{U})\subseteq\mathbf{H}(\mathsf{U}_{ \Sigma\Rightarrow\Delta})\subseteq\mathbf{H}(\mathsf{Q}).\]
So \(\mathbf{H}(\mathsf{U}_{\Sigma\Rightarrow\Delta})=\mathbf{H}(\mathsf{Q})\) and by Lemma 2.37, \(\Sigma\Rightarrow\Delta\) is admissible in \(\mathsf{Q}\). By (1), \(\mathsf{Q}\models\Sigma\Rightarrow\Delta\); therefore \(\mathsf{U}\) and \(\mathsf{Q}\) are two universal classes in which exactly the same clauses are valid, thus they are equal. Hence (2) holds, and thus (2) and (1) are equivalent.
(1) implies (3) follows by Lemma 2.37. Moreover, (3) clearly implies (4). We now show that (4) implies (1), which completes the proof. Consider a universal sentence \(\Sigma\Rightarrow\Delta\) that is admissible in \(\mathsf{Q}\), or equivalently (by Lemma 2.37), such that \(\mathbf{F}_{\mathsf{Q}}(\omega)\models\Sigma\Rightarrow\Delta\). The finitely presented algebra \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\in\mathbf{ISP}_{u}(\mathbf{F}_{\mathsf{Q}}(\omega))\) by (4), and thus \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\models\Sigma\Rightarrow\Delta\). By Lemma 2.38, \(\mathsf{Q}\models\Sigma\Rightarrow\Delta\) and thus \(\mathsf{Q}\) is universally complete.
By algebraizability, and since the property of being universal (see the discussion in Subsection 2.1) is preserved by categorical equivalence, we get at once:
**Corollary 3.2**.: _For a quasivariety \(\mathsf{Q}\) the following are equivalent:_
1. \(\mathsf{Q}\) _is universally complete;_
2. \(\mathcal{L}_{\mathsf{Q}}\) _is universally complete._
The following two theorems give, respectively, a sufficient and a necessary condition for a quasivariety to be universally complete.
**Theorem 3.3**.: _If every finitely presented algebra in \(\mathsf{Q}\) is exact then \(\mathsf{Q}\) is universally complete._
Proof.: If every finitely presented algebra in \(\mathsf{Q}\) is exact, it is in \(\mathbf{IS}(\mathbf{F}_{\mathsf{Q}}(\omega))\), and thus also in \(\mathbf{ISP}_{u}(\mathbf{F}_{\mathsf{Q}}(\omega))\). The claim then follows from Theorem 3.1.
**Theorem 3.4**.: _If \(\mathsf{Q}\) is universally complete, then \(\mathsf{Q}\) is unifiable._
Proof.: Suppose, by contraposition, that there is a finite set of identities \(\Sigma\) that is not unifiable in \(\mathsf{Q}\). Then \(\Sigma\Rightarrow\emptyset\) is (passively) admissible but not valid in \(\mathsf{Q}\); indeed, it fails in the trivial algebra. This implies that \(\mathsf{Q}\) is not universally complete, and the claim is proved.
Since projectivity implies exactness, we observe the following immediate consequence of Theorem 3.3.
**Corollary 3.5**.: _If every finitely presented algebra in \(\mathsf{Q}\) is projective then \(\mathsf{Q}\) is universally complete._
For locally finite varieties there is a stronger result, observed in [22].
**Lemma 3.6**.: _[_22_]_ _Let \(\mathsf{Q}\) be a locally finite quasivariety; then \(\mathbf{A}\in\mathbf{ISP}_{u}(\mathbf{F}_{\mathsf{Q}}(\omega))\) if and only if every finite subalgebra \(\mathbf{B}\) of \(\mathbf{A}\) is in \(\mathbf{IS}(\mathbf{F}_{\mathsf{Q}}(\omega))\)._
**Theorem 3.7** ([22]).: _Let \(\mathsf{Q}\) be a locally finite variety of finite type. Then \(\mathsf{Q}\) is universally complete if and only if \(\mathsf{Q}\) is unifiable and has exact unifiers._
Proof.: Suppose that \(\mathsf{Q}\) is universally complete; then, by Theorem 3.4, \(\mathsf{Q}\) is unifiable. Since it is universally complete, every finite algebra in \(\mathsf{Q}\) is in \(\mathbf{ISP}_{u}(\mathbf{F}_{\mathsf{Q}}(\omega))\), hence in \(\mathbf{IS}(\mathbf{F}_{\mathsf{Q}}(\omega))\) (by Lemma 3.6). Thus every finite unifiable algebra in \(\mathsf{Q}\) is exact and \(\mathsf{Q}\) has exact unifiers. The converse claim follows from Theorem 3.3.
**Remark 3.8**.: We observe that Theorem 3.4 limits greatly the examples of universally complete quasivarieties. In particular, in quasivarieties with finite type the trivial algebra is finitely presented, and thus if \(\mathsf{Q}\) is universally complete, it must be unifiable. This means that a quasivariety with more than one constant in its finite type cannot be universally complete if there are nontrivial models where the constants are distinct; similarly if there is only one constant, then it must generate the trivial algebra in nontrivial models, or equivalently, in \(\mathbf{F}_{\mathsf{Q}}\). If there are no constants, then \(\mathbf{F}_{\mathsf{Q}}=\mathbf{F}_{\mathsf{Q}}(x)\) and, in order to be able to embed the trivial algebra, there has to be an idempotent term.
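For instance, in the variety of bounded distributive lattices the negative clause \(\{0\approx 1\}\Rightarrow\emptyset\) is admissible, since \(0\) and \(1\) are distinct in every free algebra, but it fails in the trivial algebra; so that variety is not universally complete.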
Let us now discuss some different examples of universally complete (quasi)varieties.
**Example 3.9**.: Let us consider _lattice-ordered abelian groups_ (or abelian \(\ell\)-groups for short). These are algebras \(\mathbf{G}=(G,\wedge,\vee\,,\cdot^{-1}\,,1)\) where \((G,\cdot,^{-1}\,,1)\) is an abelian group, \((G,\wedge,\vee)\) is a lattice, and the group operation distributes over the lattice operations. Every finitely presented abelian \(\ell\)-groups is projective [15]; thus, the variety of abelian \(\ell\)-groups is universally complete by Corollary 3.5.
The same holds for the variety of _negative cones_ of abelian \(\ell\)-groups. Given an \(\ell\)-group \(\mathbf{G}\), the set of elements \(G^{-}=\{x\in G:x\leq 1\}\) can be seen as a _residuated lattice_ (see Section 5.2) \(\mathbf{G}^{-}=(G^{-},\cdot,\rightarrow,\wedge,\vee,1)\) where \((\cdot,\wedge,\vee,1)\) are inherited from the group and \(x\to y=x^{-1}\cdot y\wedge 1\). The algebraic category of negative cones of abelian \(\ell\)-groups is equivalent to the one of abelian \(\ell\)-groups [46], thus every finitely presented algebra is projective and the variety of negative cones of \(\ell\)-groups \(\mathsf{LG}^{-}\) is universally complete. Observe that in all these cases the unique constant \(1\) is absorbing w.r.t. any basic operation, and it generates the trivial algebra.
**Example 3.10**.: _Hoops_ are a particular variety of residuated monoids related to logic which were defined in an unpublished manuscript by Büchi and Owens, inspired by the work of Bosbach on partially ordered monoids (see [17] for details on the theory of hoops). Hoops have a constant which is absorbing w.r.t. any basic operation; hence the least free algebra is trivial in any variety of hoops and any variety of hoops is unifiable. In [8] it was shown that every finite hoop is projective in the class of finite hoops, which via Lemma 2.24 entails that every locally finite variety of hoops has projective unifiers. Since any locally finite quasivariety is contained in a locally finite variety, every locally finite quasivariety of hoops is universally complete. The same holds in the variety of \(\rightarrow\)-subreducts of hoops, usually denoted by \(\mathsf{HBCK}\); again locally finite varieties of \(\mathsf{HBCK}\)-algebras have projective unifiers [8] and hence they are universally complete. For a non-locally finite example, we say that a hoop is _cancellative_ if the underlying monoid is cancellative; cancellative hoops form a variety \(\mathsf{C}\) that is categorically equivalent to the one of abelian \(\ell\)-groups [17]. Hence \(\mathsf{C}\) is a non locally finite variety of hoops which is universally complete.
The classes of algebras in the above examples all have projective unifiers. However:
**Example 3.11**.: In lattices there are no constants but any variety of lattices is idempotent; hence the least free algebra is trivial and every lattice is unifiable. Every finite distributive lattice is exact [22] and distributive lattices are locally finite, so distributive lattices are universally complete by Theorem 3.7. Moreover, as we have already observed in Example 2.25, distributive lattices do not have projective unifiers.
We now consider the hereditary version of universal completeness.
**Definition 3.12**.: A quasivariety \(\mathsf{Q}\) is **primitive universal** if all its subquasivarieties are universally complete.
All the above examples of universally complete varieties are primitive universal and this is not entirely coincidental. Distributive lattices are trivially primitive universal, since they do not have any proper nontrivial subquasivariety. For all the other examples, we have a general result.
**Theorem 3.13**.: _Let \(\mathsf{Q}\) be a quasivariety with projective unifiers and such that \(\mathbf{F}_{\mathsf{Q}}\) is trivial; then \(\mathsf{Q}\) is primitive universal._
Proof.: Observe that for any subquasivariety \(\mathsf{Q}^{\prime}\subseteq\mathsf{Q}\), \(\mathbf{F}_{\mathsf{Q}^{\prime}}\) is trivial as well. Hence every algebra in \(\mathsf{Q}\) is unifiable in any subquasivariety to which it belongs. Let \(\mathbf{B}=\mathbf{F}_{\mathsf{Q}^{\prime}}(X)/\theta_{\mathsf{Q}^{\prime}}(\Sigma)\) be finitely presented in \(\mathsf{Q}^{\prime}\); then \(\mathbf{A}=\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\) is finitely presented in \(\mathsf{Q}\) and thus it is projective in \(\mathsf{Q}\). But then Lemma 2.28 applies and \(\mathbf{B}\) is projective; hence every finitely presented algebra in \(\mathsf{Q}^{\prime}\) is projective, and \(\mathsf{Q}^{\prime}\) is universally complete by Corollary 3.5.
Is the same conclusion true if we replace "projective unifiers" with "unifiable, locally finite with exact unifiers"? We do not know, but we know that we cannot use an improved version of Lemma 2.28 since it cannot be improved to account for exact unifiers (see Example 3.29).
### Non-negative and active universal quasivarieties
The situation in which universal completeness fails due _only_ to the trivial algebras was first investigated in [22]; the following expands [22, Proposition 8].
**Theorem 3.14**.: _For a quasivariety \(\mathsf{Q}\) the following are equivalent:_
1. \(\mathsf{Q}\) _is non-negative universally complete;_
2. _every admissible universal sentence is valid in_ \(\mathsf{Q}^{+}\)_;_
3. _every nontrivial algebra is in_ \(\mathbf{ISP}_{u}(\mathbf{F}_{\mathsf{Q}}(\omega))\)_._
4. _every nontrivial finitely presented algebra is in_ \(\mathbf{ISP}_{u}(\mathbf{F}_{\mathsf{Q}}(\omega))\)_._
Proof.: The equivalence of the first three points is in [22, Proposition 8], and (3) clearly implies (4). Assume now that (4) holds; we show (1). Let \(\Sigma\Rightarrow\Delta\) be a non-negative admissible universal sentence with variables in a finite set \(X\); we show that \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\models\Sigma\Rightarrow\Delta\). If \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\) is trivial, then it models \(\Sigma\Rightarrow\Delta\) (given that \(\Delta\) is not \(\emptyset\)). Suppose now that \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\) is nontrivial; then it is in \(\mathbf{ISP}_{u}(\mathbf{F}_{\mathsf{Q}}(\omega))\) by hypothesis, and since \(\Sigma\Rightarrow\Delta\) is admissible we have \(\mathbf{F}_{\mathsf{Q}}(\omega)\models\Sigma\Rightarrow\Delta\) by Lemma 2.37, so \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\models\Sigma\Rightarrow\Delta\) as well. By Lemma 2.38, \(\mathsf{Q}\) models \(\Sigma\Rightarrow\Delta\) and (1) holds.
Moreover:
**Theorem 3.15**.: _For a quasivariety \(\mathsf{Q}\) the following are equivalent:_
1. \(\mathsf{Q}\) _is non-negative universally complete;_
2. \(\mathcal{L}_{\mathsf{Q}}\) _is non-negative universally complete._
Proof.: In a categorical equivalence between quasivarieties trivial algebras are mapped to trivial algebras, since the latter can be characterized as the algebras that are a homomorphic image of every algebra in a quasivariety. Thus nontrivial finitely presented algebras are mapped to nontrivial finitely presented algebras, and the result follows from the usual arguments.
We can also obtain an analogue of Theorem 3.3.
**Theorem 3.16**.: _If every nontrivial finitely presented algebra in \(\mathsf{Q}\) is exact (or projective), then \(\mathsf{Q}\) is non-negative universally complete._
Proof.: If every nontrivial finitely presented algebra is exact (or projective), then it is in \(\mathbf{IS}(\mathbf{F}_{\mathsf{Q}}(\omega))\), and therefore in \(\mathbf{ISP}_{u}(\mathbf{F}_{\mathsf{Q}}(\omega))\). The claim then follows from Theorem 3.14.
Analogously to the case of universal completeness, we get a stronger result for locally finite quasi-varieties.
**Theorem 3.17**.: _Let \(\mathsf{Q}\) be a locally finite quasivariety. Then \(\mathsf{Q}\) is non-negative universally complete if and only if every nontrivial finitely presented algebra is exact._
Proof.: Suppose that \(\mathsf{Q}\) is locally finite and there is a finite nontrivial algebra \(\mathbf{A}\in\mathsf{Q}\) that is not exact. Then \(\mathbf{A}\notin\mathbf{IS}(\mathbf{F}_{\mathsf{Q}}(\omega))\) and thus, by Lemma 3.6, \(\mathbf{A}\notin\mathbf{ISP}_{u}(\mathbf{F}_{\mathsf{Q}}(\omega))\). Therefore \(\mathsf{Q}\) cannot be non-negative universally complete by Theorem 3.14. The other direction follows from Theorem 3.16.
**Example 3.18**.: Boolean algebras are an example of a non-negative universally complete variety that is not universally complete. It is easily seen that every nontrivial finite Boolean algebra is exact (indeed, projective), which shows that Boolean algebras are non-negative universally complete by Theorem 3.17. However, there are negative admissible clauses: e.g., the ones with premises given by the presentation of the trivial algebra, which is finitely presented but not unifiable. Thus Boolean algebras are not universally complete.
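Concretely, the negative clause \(\{x\approx\neg x\}\Rightarrow\emptyset\) presents the trivial algebra (from \(x\approx\neg x\) one gets \(0\approx x\wedge\neg x\approx x\approx x\vee\neg x\approx 1\)); it is admissible, since \(\neg\) has no fixpoint in any free Boolean algebra, but it fails in the trivial algebra.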
**Example 3.19**.: Stone algebras are a different example; in [22] the authors proved, using the duality between Stone algebras and particular Priestley spaces, that every finite nontrivial Stone algebra is exact; hence Stone algebras are non-negative universally complete.
We now move on to describe active universal completeness from the algebraic perspective.
**Theorem 3.20**.: _Let \(\mathsf{Q}\) be a quasivariety. The following are equivalent:_
1. \(\mathsf{Q}\) _is active universally complete;_
2. _every unifiable algebra in_ \(\mathsf{Q}\) _is in_ \(\mathbf{ISP}_{u}(\mathbf{F}_{\mathsf{Q}}(\omega))\)_;_
3. _every finitely presented and unifiable algebra in_ \(\mathsf{Q}\) _is in_ \(\mathbf{ISP}_{u}(\mathbf{F}_{\mathsf{Q}}(\omega))\)_;_
4. _every universal sentence admissible in_ \(\mathsf{Q}\) _is satisfied by all finitely presented unifiable algebras in_ \(\mathsf{Q}\)_;_
5. _for every_ \(\mathbf{A}\in\mathsf{Q}\)_,_ \(\mathbf{A}\times\mathbf{F}_{\mathsf{Q}}\in\mathbf{ISP}_{u}(\mathbf{F}_{\mathsf{ Q}}(\omega))\)_._
Proof.: We start by showing that (1) implies (2). Assume (1), and let \(\Sigma\Rightarrow\Delta\) be such that \(\mathbf{F}_{\mathsf{Q}}(\omega)\models\Sigma\Rightarrow\Delta\); equivalently, by Lemma 2.37, \(\Sigma\Rightarrow\Delta\) is an admissible universal sentence in \(\mathsf{Q}\). If \(\Sigma\) is unifiable, by hypothesis \(\Sigma\Rightarrow\Delta\) is valid in \(\mathsf{Q}\). Suppose now that \(\Sigma\) has variables in a finite set \(X\) and it is not unifiable, that is, via Corollary 2.19 there is no homomorphism from \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\) to \(\mathbf{F}_{\mathsf{Q}}\). Let \(\mathbf{A}\) be a unifiable algebra in \(\mathsf{Q}\); we argue that there is no assignment of the variables in \(\Sigma\) that validates \(\Sigma\) in \(\mathbf{A}\). Indeed otherwise the following diagram would commute and \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\) would be unifiable, yielding a contradiction.
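\[\begin{array}{ccc}\mathbf{F}_{\mathsf{Q}}(X)&\overset{\pi_{\Sigma}}{\longrightarrow}&\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\\ &\searrow&\big\downarrow\\ &&\mathbf{A}\ \longrightarrow\ \mathbf{F}_{\mathsf{Q}}\end{array}\]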
Therefore, \(\Sigma\Rightarrow\Delta\) is vacuously satisfied in \(\mathbf{A}\), which is any unifiable algebra in \(\mathsf{Q}\), thus (2) holds. Now, clearly (2) implies (3), and (3) and (4) are equivalent by the definitions.
Let us show that (4) implies (1). Let \(\Sigma\Rightarrow\Delta\) be an active admissible universal sentence in \(\mathsf{Q}\) with variables in a finite set \(X\); we want to show that it is also valid in \(\mathsf{Q}\). Since by hypothesis \(\Sigma\Rightarrow\Delta\) is active admissible, \(\Sigma\) is unifiable, and therefore so is \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\) by Corollary 2.19. Then by (4), \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\models\Sigma\Rightarrow\Delta\), which implies that \(\mathsf{Q}\models\Sigma\Rightarrow\Delta\) by Lemma 2.38. Therefore the first four points are equivalent.
Finally, we show that (1) implies (5) and (5) implies (2), which completes the proof. We start with \((1)\Rightarrow(5)\). Let \(\mathbf{A}\in\mathsf{Q}\), and consider a clause \(\Sigma\Rightarrow\Delta\) valid in \(\mathbf{F}_{\mathsf{Q}}(\omega)\). We show that \(\mathbf{A}\times\mathbf{F}_{\mathsf{Q}}\models\Sigma\Rightarrow\Delta\). Now, if \(\mathsf{Q}\models\Sigma\Rightarrow\Delta\), in particular \(\mathbf{A}\times\mathbf{F}_{\mathsf{Q}}\models\Sigma\Rightarrow\Delta\). Suppose that \(\mathsf{Q}\not\models\Sigma\Rightarrow\Delta\). Since \(\mathsf{Q}\) is active universally complete, \(\Sigma\Rightarrow\Delta\) must be a passive clause, thus \(\Sigma\) is not unifiable. Equivalently, there is no assignment \(h\) of the variables in \(\Sigma\) such that \(\mathbf{F}_{\mathsf{Q}},h\models\Sigma\). Thus, there is also no assignment \(h^{\prime}\) of the variables in \(\Sigma\) such that \(\mathbf{A}\times\mathbf{F}_{\mathsf{Q}},h^{\prime}\models\Sigma\), thus \(\mathbf{A}\times\mathbf{F}_{\mathsf{Q}}\models\Sigma\Rightarrow\Delta\).
It is left to prove \((5)\Rightarrow(2)\). Let \(\mathbf{A}\) be a unifiable algebra in \(\mathsf{Q}\); then there is a homomorphism \(h:\mathbf{A}\to\mathbf{F}_{\mathsf{Q}}\) (Lemma 2.21). Consider the map \(h^{\prime}:\mathbf{A}\to\mathbf{A}\times\mathbf{F}_{\mathsf{Q}}\) defined as \(h^{\prime}(a)=(a,h(a))\). Clearly, \(h^{\prime}\) is an embedding of \(\mathbf{A}\) into \(\mathbf{A}\times\mathbf{F}_{\mathsf{Q}}\in\mathbf{ISP}_{u}(\mathbf{F}_{\mathsf{Q}}(\omega))\) (by (5)). Thus also \(\mathbf{A}\in\mathbf{ISP}_{u}(\mathbf{F}_{\mathsf{Q}}(\omega))\), which completes the proof.
We observe that the previous characterization extends to universal sentences some of the results in [35] about active structural completeness. We also get the usual result.
**Theorem 3.21**.: _For a quasivariety \(\mathsf{Q}\) the following are equivalent:_
1. \(\mathsf{Q}\) _is active universally complete;_
2. \(\mathcal{L}_{\mathsf{Q}}\) _is active universally complete._
Proof.: The result follows from the fact that embeddings, ultraproducts, being finitely presented and unifiable, are all categorical notions and thus preserved by categorical equivalence.
Moreover, we have the following theorem, whose proof is the same as those of Theorems 3.3 and 3.7.
**Theorem 3.22**.: _If \(\mathsf{Q}\) has exact (or projective) unifiers, then \(\mathsf{Q}\) is active universally complete. If \(\mathsf{Q}\) is also locally finite then it is active universally complete if and only if it has exact unifiers._
**Example 3.23**.: A _discriminator_ on a set \(A\) is a ternary operation \(t\) on \(A\) defined by
\[t(a,b,c)=\left\{\begin{array}{ll}a,&\mbox{if }a\neq b;\\ c,&\mbox{if }a=b.\end{array}\right.\]
A variety \(\mathsf{V}\) is a _discriminator variety_[70] if there is a ternary term that is the discriminator on all the subdirectly irreducible members of \(\mathsf{V}\). Discriminator varieties have many strong properties: for instance they are congruence permutable and congruence distributive.
In [20, Theorem 3.1] it has been essentially shown that discriminator varieties have projective unifiers, and therefore they are all active universally complete by Theorem 3.22.
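As a concrete toy illustration of the definition (not part of the results in [20]), the two-element Boolean algebra carries a discriminator term; one standard choice is \(t(x,y,z)=(x\wedge\neg(x\leftrightarrow y))\vee(z\wedge(x\leftrightarrow y))\), and the following Python sketch checks it by brute force.

```python
from itertools import product

# Operations of the two-element Boolean algebra on {0, 1}.
AND = lambda a, b: a & b
OR  = lambda a, b: a | b
NOT = lambda a: 1 - a
IFF = lambda a, b: 1 - (a ^ b)  # a <-> b

def t(x, y, z):
    # candidate discriminator term
    return OR(AND(x, NOT(IFF(x, y))), AND(z, IFF(x, y)))

# A discriminator returns x when x != y and z when x == y.
assert all(t(x, y, z) == (x if x != y else z)
           for x, y, z in product((0, 1), repeat=3))
print("t is a discriminator on the two-element Boolean algebra")
```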
**Example 3.24**.: We now see some examples within the algebraic semantics of many-valued logics; in [8] it has been shown that in any locally finite variety of bounded hoops or \(\mathsf{BL}\)-algebras (the equivalent algebraic semantics of Hájek's Basic Logic [51]), the finite unifiable algebras are exactly the finite projective algebras. It follows that any such variety has projective unifiers and hence it is active universally complete. This holds also for any locally finite quasivariety of bounded hoops or \(\mathsf{BL}\)-algebras, or their reducts, i.e., bounded \(\mathsf{HBCK}\)-algebras.
In contrast with the case of (unbounded) hoops, not all of them are non-negative universally complete, as we will now discuss. Let us call _chain_ a totally ordered algebra. Every finite \(\mathsf{BL}\)-chain is an ordinal sum of finite Wajsberg hoops, the first of which is an MV-algebra [4]. No finite MV-chain different from the \(2\)-element Boolean algebra \(\mathbf{2}\) is unifiable (they are all simple and the least free algebra is \(\mathbf{2}\)), and thus not exact. It follows by basic facts about ordinal sums that if a locally finite quasivariety \(\mathsf{Q}\) of \(\mathsf{BL}\)-algebras contains a chain whose first component is different from \(\mathbf{2}\), \(\mathsf{Q}\) is not non-negative universally complete. The same holds, mutatis mutandis, for bounded hoops and bounded \(\mathsf{HBCK}\)-algebras. In Section 5.2 we shall see a different class of (discriminator) varieties coming from many-valued logics that are active universally complete.
**Definition 3.25**.: We call a quasivariety \(\mathsf{Q}\)_active primitive universal_ if every subquasivariety of \(\mathsf{Q}\) is active universally complete.
It is evident from the characterization theorem of active universally complete quasivarieties that a quasivariety \(\mathsf{Q}\) is active primitive universal if and only if \(\mathcal{L}_{\mathsf{Q}}\) is hereditarily active universally complete. We have the following fact:
**Theorem 3.26**.: _Suppose that \(\mathsf{Q}\) is a quasivariety such that \(\mathbf{F}_{\mathsf{Q}}=\mathbf{F}_{\mathsf{Q}^{\prime}}\) for all \(\mathsf{Q}^{\prime}\subseteq\mathsf{Q}\). If \(\mathsf{Q}\) has projective unifiers then it is active primitive universal._
Proof.: The proof follows from Theorem 3.22 and Lemma 2.29.
All varieties in Example 3.24 satisfy the hypotheses of Theorem 3.26 (as the reader can easily check). As for discriminator varieties, all the examples of lattice-based varieties in Section 5.2 of this paper (but see also [20] or [31] for more examples) have the same property; hence they are all active primitive universal.
Now, a variety is _q-minimal_ if it does not have any proper nontrivial subquasivariety; so a q-minimal variety is necessarily equationally complete. We have this result by Bergman and McKenzie:
**Theorem 3.27**.: _[_14_]_ _A locally finite equationally complete variety is q-minimal if and only if it has exactly one subdirectly irreducible algebra that is embeddable in any nontrivial member of the variety. Moreover, this is always the case if the variety is congruence modular._
It follows immediately that every active universally complete q-minimal variety is active primitive universal.
**Example 3.28**.: Discriminator varieties are active universally complete as seen in Example 3.23. Now, given a finitely generated discriminator variety \(\mathsf{V}\), it is generated by a finite algebra \(\mathbf{A}\) having a discriminator term on it, also called a _quasi-primal_ algebra. By [74], \(\mathsf{V}\) is equationally complete and, since it is congruence modular, it is q-minimal; hence \(\mathsf{V}\) is active primitive universal.
Finally, we observe that Lemma 2.29 cannot be improved to "having exact unifiers" and the counterexample is given by _De Morgan lattices_; we will see below that they form an active universally complete variety that is not active primitive universal.
**Example 3.29**.: A De Morgan lattice is a distributive lattice with a unary operation \(\neg\) which is involutive and satisfies the De Morgan Laws. It is well-known that the variety \(\mathsf{DM}\) of De Morgan lattices is locally finite and has exactly two proper non trivial subvarieties, i.e. the variety \(\mathsf{BLa}\) of Boolean lattices (axiomatized by \(x\leqslant y\vee\neg y\)) and the variety \(\mathsf{KL}\) of Kleene lattices (axiomatized by \(x\wedge\neg x\leqslant y\vee\neg y\)). It is easily seen that all these nontrivial varieties have the same one-generated free algebra whose universe is \(\{x,\neg x,x\vee\neg x,x\wedge\neg x\}\). It follows that all the subquasivarieties of De Morgan lattices have the same least free algebra and \(\mathsf{DM}\) satisfies the hypotheses of Theorem 3.26. Admissibility in De Morgan lattices has been investigated in [64] and [22]. Now for a finite algebra \(\mathbf{A}\in\mathsf{DM}\) the following are equivalent:
1. \(\mathbf{A}\) is unifiable;
2. the universal sentence \(\{x\approx\neg x\}\Rightarrow\emptyset\) is valid in \(\mathbf{A}\);
3. \(\mathbf{A}\in\mathbf{IS}(\mathbf{F}_{\mathsf{DM}}(\omega))\).
The equivalence of (2) and (3) has been proved in [22, Lemma 28], while (3) implies (1) trivially. If we assume that (2) does not hold for \(\mathbf{A}\), then there is an \(a\in A\) with \(\neg a=a\); so if \(f:\mathbf{A}\longrightarrow\mathbf{F}_{\mathsf{DM}}(x)\) is a homomorphism and \(f(a)=\varphi\), then \(\varphi=\neg\varphi\). But there is no element in \(\mathbf{F}_{\mathsf{DM}}(x)\) with that property, so \(\mathbf{A}\) cannot be unifiable. This concludes the proof of the equivalence of the three statements.
Therefore \(\mathsf{DM}\) has exact unifiers and thus it is active universally complete by Theorem 3.22. Now consider the subvariety of \(\mathsf{DM}\) of Kleene lattices. In [22] it is shown that the universal sentence
\[\Phi:=\{x\leqslant\neg x,x\wedge\neg y\leqslant\neg x\lor y\}\Rightarrow \neg y\leqslant y\]
is admissible in \(\mathsf{KL}\). It is also active, as the reader can easily check that the substitution \(x\longmapsto z\wedge\neg z\), \(y\longmapsto\neg z\) unifies the premises of \(\Phi\). However it fails in the three element Kleene lattice \(\mathbf{K}_{3}\) in Figure 2: assigning to \(x\) the fixpoint of \(\neg\) and to \(y\) the bottom element, both premises hold while \(\neg y\leqslant y\) fails; hence \(\mathsf{KL}\) is not active universally complete. So \(\mathsf{DM}\) is a variety that is active universally complete but not active primitive universal.
Note that in \(\mathsf{KL}\) there must be a finite unifiable algebra that is not exact (since \(\mathsf{KL}\) cannot have exact unifiers). Now a finite Kleene lattice \(\mathbf{A}\) is exact if and only if both \(\{x\approx\neg x\}\Rightarrow\emptyset\) and \(\Phi\) are valid in \(\mathbf{A}\)[22, Lemma 38]. Let \(\mathbf{A}=\mathbf{K}_{3}\times\mathbf{2}\); the reader can easily check that \(\mathbf{A}\) is unifiable in \(\mathsf{KL}\) (since it satisfies \(\{x\approx\neg x\}\Rightarrow\emptyset\) and hence it is unifiable in \(\mathsf{DM}\)) but does not satisfy \(\Phi\). This shows (as promised) that Lemma 2.28 cannot be improved.
### Passive universal quasivarieties
We will now see that passive universal completeness in a quasivariety corresponds to an algebraic notion we have already introduced: unifiability. Moreover, we shall see that it corresponds to the apparently weaker notion of negative universal completeness, that is, every (passive) admissible negative universal sentence is derivable. We recall that a quasivariety \(\mathsf{Q}\) is unifiable if every finitely presented algebra in \(\mathsf{Q}\) is unifiable.
**Theorem 3.30**.: _For every quasivariety \(\mathsf{Q}\) the following are equivalent:_
1. \(\mathsf{Q}\) _is passive universally complete;_
2. \(\mathsf{Q}\) _is negative universally complete;_
3. \(\mathsf{Q}\) _is unifiable._
Proof.: Assume (1) and let \(\Sigma\Rightarrow\emptyset\) be a negative admissible universal sentence; then it is necessarily passive, since there is no substitution that unifies \(\emptyset\). Thus, by (1), \(\Sigma\Rightarrow\emptyset\) is valid in \(\mathsf{Q}\).
Assume now (2); we prove that it implies (3) by contraposition. Suppose that \(\mathsf{Q}\) is not unifiable, that is, there exists a finite set of identities \(\Sigma\) that is not unifiable. Then the negative universal sentence \(\Sigma\Rightarrow\emptyset\) is (passively) admissible, but it is not valid in \(\mathsf{Q}\) (in particular, it fails in the trivial algebra).
Finally, if (3) holds, then (1) trivially holds, since if every set of identities is unifiable there is no passive admissible clause.
In some cases, we can improve the previous result.
**Lemma 3.31**.: _Let \(\mathsf{Q}\) be a quasivariety such that \(\mathbf{I}(\mathbf{F}_{\mathsf{Q}})=\mathbf{I}\mathbf{P}_{u}(\mathbf{F}_{ \mathsf{Q}})\), then the following are equivalent._
1. \(\mathsf{Q}\) _is unifiable;_
2. _every algebra in_ \(\mathsf{Q}\) _is unifiable._
Proof.: We prove the nontrivial direction by contraposition. Consider an arbitrary algebra \(\mathbf{A}\in\mathsf{Q}\) and assume that it is not unifiable; without loss of generality we let \(\mathbf{A}=\mathbf{F}_{\mathsf{Q}}(X)/\theta\) for some set \(X\) and some relative congruence \(\theta\). Since \(\mathbf{A}\) is not unifiable, there is no assignment \(h:\mathbf{F}_{\mathsf{Q}}(X)\to\mathbf{F}_{\mathsf{Q}}\) such that \(\mathbf{F}_{\mathsf{Q}},h\models\Sigma_{\theta}\), where \(\Sigma_{\theta}=\{t\approx u:(t,u)\in\theta\}\). Equivalently, \(\mathbf{F}_{\mathsf{Q}}\models\Sigma_{\theta}\Rightarrow\emptyset\). Now, the equational consequence relation relative to a class of algebras \(\mathsf{K}\) is finitary if and only if \(\mathsf{K}\) is closed under ultraproducts (see for instance [71]); thus by hypothesis the equational consequence relation relative to \(\mathbf{F}_{\mathsf{Q}}\) is finitary, and we obtain that \(\mathbf{F}_{\mathsf{Q}}\models\Sigma^{\prime}_{\theta}\Rightarrow\emptyset\), for \(\Sigma^{\prime}_{\theta}\) some finite subset of \(\Sigma_{\theta}\). That is, \(\Sigma^{\prime}_{\theta}\) is finite and not unifiable, thus \(\mathsf{Q}\) is not unifiable and the proof is complete.
Observe that if a quasivariety \(\mathsf{Q}\) is such that \(\mathbf{F}_{\mathsf{Q}}\) is finite, it satisfies the hypothesis of the previous lemma.
**Corollary 3.32**.: _Let \(\mathsf{Q}\) be a quasivariety such that \(\mathbf{I}(\mathbf{F}_{\mathsf{Q}})=\mathbf{I}\mathbf{P}_{u}(\mathbf{F}_{ \mathsf{Q}})\), then the following are equivalent._
1. \(\mathsf{Q}\) _is passive universally complete;_
2. \(\mathsf{Q}\) _is negative universally complete;_
3. \(\mathsf{Q}\) _is unifiable;_
4. _every algebra in_ \(\mathsf{Q}\) _is unifiable._
Since unifiability is preserved by categorical equivalence, we get the following.
**Corollary 3.33**.: _A quasivariety \(\mathsf{Q}\) is passive universally complete if and only if \(\mathcal{L}_{\mathsf{Q}}\) is passive universally complete._
Figure 2. The lattice \(\mathbf{K}_{3}\)
## 4. Structural completeness
In this section we investigate the algebraic counterparts of structural completeness and its variations. The main new results are about the characterization of passive structurally complete quasivarieties; moreover, we also give a characterization of primitive quasivarieties building on the results in [50].
### Structural quasivarieties
The bridge theorems for structural completeness were first established by Bergman [13]. We present the proof for the reader's convenience, adding point (6).
**Theorem 4.1** ([13]).: _For a quasivariety \(\mathsf{Q}\) the following are equivalent:_
1. \(\mathsf{Q}\) _is structurally complete;_
2. \(\mathsf{Q}=\mathbf{Q}(\mathbf{F}_{\mathsf{Q}}(\omega))\)_;_
3. _every proper subquasivariety of_ \(\mathsf{Q}\) _generates a proper subvariety of_ \(\mathbf{H}(\mathsf{Q})\)_;_
4. _for all_ \(\mathsf{Q}^{\prime}\subseteq\mathsf{Q}\) _if_ \(\mathbf{H}(\mathsf{Q}^{\prime})=\mathbf{H}(\mathsf{Q})\)_, then_ \(\mathsf{Q}=\mathsf{Q}^{\prime}\)_;_
5. _for all_ \(\mathbf{A}\in\mathsf{Q}\) _if_ \(\mathbf{V}(\mathbf{A})=\mathbf{H}(\mathsf{Q})\)_, then_ \(\mathbf{Q}(\mathbf{A})=\mathsf{Q}\)_;_
6. _every finitely presented algebra in_ \(\mathsf{Q}\) _is in_ \(\mathbf{Q}(\mathbf{F}_{\mathsf{Q}}(\omega))\)_._
Proof.: First, (1) is equivalent to (2) via Lemma 2.37. The implications (3) \(\Leftrightarrow\) (4) \(\Rightarrow\) (5) \(\Rightarrow\) (2) are straightforward. (2) implies (4) since if \(\mathsf{Q}^{\prime}\subseteq\mathsf{Q}\) and \(\mathbf{H}(\mathsf{Q}^{\prime})=\mathbf{H}(\mathsf{Q})\), we get that \(\mathbf{F}_{\mathsf{Q}^{\prime}}(\omega)=\mathbf{F}_{\mathbf{H}(\mathsf{Q}^{\prime})}(\omega)=\mathbf{F}_{\mathbf{H}(\mathsf{Q})}(\omega)=\mathbf{F}_{\mathsf{Q}}(\omega)\); thus \(\mathsf{Q}=\mathbf{Q}(\mathbf{F}_{\mathsf{Q}}(\omega))=\mathbf{Q}(\mathbf{F}_{\mathsf{Q}^{\prime}}(\omega))\subseteq\mathsf{Q}^{\prime}\) and then equality holds. Thus the first five points are equivalent. Finally, clearly (2) implies (6), and (6) implies (2) since a quasivariety is generated by its finitely presented algebras ([50, Proposition 2.1.18]).
**Corollary 4.2**.: _A variety \(\mathsf{V}\) is structurally complete if and only if every proper subquasivariety of \(\mathsf{V}\) generates a proper subvariety; therefore if \(\mathbf{A}\) is such that \(\mathbf{V}(\mathbf{A})\) is structurally complete, then \(\mathbf{V}(\mathbf{A})=\mathbf{Q}(\mathbf{A})\)._
Since the definition of structural completeness is invariant under categorical equivalence we get also:
**Corollary 4.3**.: _Let \(\mathsf{Q}\) be a quasivariety; then \(\mathsf{Q}\) is structurally complete if and only if \(\mathcal{L}_{\mathsf{Q}}\) is structurally complete._
Let us extract some sufficient conditions for structural completeness.
**Lemma 4.4**.: _Let \(\mathsf{Q}\) be a quasivariety; if_
1. _every_ \(\mathbf{A}\in\mathsf{K}\) _is exact in_ \(\mathsf{Q}=\mathbf{Q}(\mathsf{K})\)_, or_
2. _every finitely generated algebra in_ \(\mathsf{Q}\) _is exact, or_
3. _every finitely presented algebra in_ \(\mathsf{Q}\) _is exact, or_
4. _every finitely generated relative subdirectly irreducible in_ \(\mathsf{Q}\) _is exact,_
_then \(\mathsf{Q}\) is structurally complete. Moreover if every \(\mathbf{A}\in\mathsf{K}\) is exact in \(\mathbf{V}(\mathsf{K})\) and every subdirectly irreducible member of \(\mathbf{V}(\mathsf{K})\) is in \(\mathbf{IS}(\mathsf{K})\), then \(\mathbf{V}(\mathsf{K})\) is structurally complete._
Proof.: If each algebra in \(\mathsf{K}\) is exact in \(\mathsf{Q}=\mathbf{Q}(\mathsf{K})\), then \(\mathsf{K}\subseteq\mathbf{IS}(\mathbf{F}_{\mathsf{Q}}(\omega))\); therefore \(\mathsf{Q}=\mathbf{Q}(\mathsf{K})\subseteq\mathbf{Q}(\mathbf{F}_{\mathsf{Q}} (\omega))\) and thus equality holds. Hence \(\mathsf{Q}\) is structurally complete by the characterization theorem. The other points follow.
For the last claim, every subdirectly irreducible member of \(\mathbf{V}(\mathsf{K})\) lies in \(\mathbf{IS}(\mathsf{K})\) and thus is exact in \(\mathbf{V}(\mathsf{K})\). Since any variety is generated as a quasivariety by its subdirectly irreducible members, \(\mathbf{V}(\mathsf{K})\) is structurally complete.
We observe that none of the previous conditions is necessary. For locally finite quasivarieties we have a necessary and sufficient condition for structural completeness because of the following:
**Lemma 4.5** ([22]).: _Let \(\mathsf{Q}\) be a locally finite quasivariety and \(\mathbf{A}\) a finite algebra in \(\mathsf{Q}\). Then \(\mathbf{A}\in\mathbf{Q}(\mathbf{F}_{\mathsf{Q}}(\omega))\) if and only if \(\mathbf{A}\in\mathbf{ISP}(\mathbf{F}_{\mathsf{Q}}(\omega))\)._
The following theorem improves [22, Corollary 11].
**Theorem 4.6**.: _For a locally finite quasivariety \(\mathsf{Q}\) of finite type the following are equivalent:_
1. \(\mathsf{Q}\) _is structurally complete;_
2. _each finite algebra in_ \(\mathsf{Q}\) _is in_ \(\mathbf{ISP}(\mathbf{F}_{\mathsf{Q}}(\omega))\)_;_
3. _every finite relative subdirectly irreducible in_ \(\mathsf{Q}\) _is exact._
Proof.: Assume (1); then each finite algebra in \(\mathsf{Q}\) is in \(\mathbf{Q}(\mathbf{F}_{\mathsf{Q}}(\omega))\) and thus, by Lemma 4.5, is in \(\mathbf{ISP}(\mathbf{F}_{\mathsf{Q}}(\omega))\) and (2) holds. If (2) holds and \(\mathbf{A}\) is finite relative subdirectly irreducible, then it is in \(\mathbf{IS}(\mathbf{F}_{\mathsf{Q}}(\omega))\), i.e. it is exact. Finally if (3) holds, then \(\mathsf{Q}\) is structurally complete by Lemma 4.4.
### Primitive quasivarieties
We now consider the hereditary notion of structural completeness.
**Definition 4.7**.: A class of algebras \(\mathsf{K}\) in a quasivariety \(\mathsf{Q}\) is _equational relative to \(\mathsf{Q}\)_ if \(\mathsf{K}=\mathbf{V}(\mathsf{K})\cap\mathsf{Q}\). In particular, a subquasivariety \(\mathsf{Q}^{\prime}\) of \(\mathsf{Q}\) is _equational relative to \(\mathsf{Q}\)_ if \(\mathsf{Q}^{\prime}=\mathbf{H}(\mathsf{Q}^{\prime})\cap\mathsf{Q}\); a quasivariety \(\mathsf{Q}\) is _primitive_ if every subquasivariety of \(\mathsf{Q}\) is equational relative to \(\mathsf{Q}\).
Clearly primitivity is downward hereditary and a variety \(\mathsf{V}\) is primitive if and only if every subquasivariety of \(\mathsf{V}\) is a variety. We can show the following.
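For instance, torsion-free abelian groups, axiomatized relative to abelian groups by the quasiequations \(nx\approx 0\Rightarrow x\approx 0\) for \(n\geq 2\) (where \(nx\) abbreviates \(x+\dots+x\)), form a subquasivariety of the variety of abelian groups that is not a variety; hence the variety of abelian groups is not primitive.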
**Theorem 4.8**.: _For a quasivariety \(\mathsf{Q}\) the following are equivalent:_
1. \(\mathsf{Q}\) _is primitive;_
2. _every subquasivariety of_ \(\mathsf{Q}\) _is structurally complete;_
3. _for all subdirectly irreducible_ \(\mathbf{A}\in\mathbf{H}(\mathsf{Q})\) _and for any_ \(\mathbf{B}\in\mathsf{Q}\)_, if_ \(\mathbf{A}\in\mathbf{H}(\mathbf{B})\)_, then_ \(\mathbf{A}\in\mathbf{ISP}_{u}(\mathbf{B})\)_._
Proof.: We first show the equivalence between (1) and (2). Suppose that \(\mathsf{Q}\) is primitive and let \(\mathsf{Q}^{\prime}\subseteq\mathsf{Q}\); if \(\mathsf{Q}^{\prime\prime}\subseteq\mathsf{Q}^{\prime}\) and \(\mathbf{H}(\mathsf{Q}^{\prime\prime})=\mathbf{H}(\mathsf{Q}^{\prime})\) then
\[\mathsf{Q}^{\prime}=\mathbf{H}(\mathsf{Q}^{\prime})\cap\mathsf{Q}=\mathbf{H}( \mathsf{Q}^{\prime\prime})\cap\mathsf{Q}=\mathsf{Q}^{\prime\prime}\]
so \(\mathsf{Q}^{\prime}\) is structurally complete by Theorem 4.1.
Conversely assume (2), let \(\mathsf{Q}^{\prime}\subseteq\mathsf{Q}\) and let \(\mathsf{Q}^{\prime\prime}=\mathbf{H}(\mathsf{Q}^{\prime})\cap\mathsf{Q}\) (it is clearly a quasivariety); then \(\mathbf{H}(\mathsf{Q}^{\prime\prime})=\mathbf{H}(\mathsf{Q}^{\prime})\) and thus \(\mathsf{Q}^{\prime\prime}=\mathsf{Q}^{\prime}\), again using the characterization of Theorem 4.1. So \(\mathsf{Q}^{\prime}\) is equational in \(\mathsf{Q}\) and \(\mathsf{Q}\) is primitive.
Assume (1) again, and let \(\mathbf{A},\mathbf{B}\in\mathsf{Q}\) with \(\mathbf{A}\) subdirectly irreducible and \(\mathbf{A}\in\mathbf{H}(\mathbf{B})\). Since \(\mathsf{Q}\) is primitive we have
\[\mathbf{Q}(\mathbf{B})=\mathbf{H}(\mathbf{Q}(\mathbf{B}))\cap\mathsf{Q}\]
and hence \(\mathbf{A}\in\mathbf{Q}(\mathbf{B})\). Since \(\mathbf{A}\) is subdirectly irreducible, \(\mathbf{A}\in\mathbf{ISP}_{u}(\mathbf{B})\) by Theorem 2.3 and (3) holds.
Conversely, assume (3) and let \(\mathsf{Q}^{\prime}\) be a subquasivariety of \(\mathsf{Q}\); if \(\mathbf{B}\in\mathbf{H}(\mathsf{Q}^{\prime})\cap\mathsf{Q}\), observe that \(\mathbf{B}\in\mathbf{H}(\mathsf{Q})\) and hence \(\mathbf{B}\leqslant_{sd}\prod\mathbf{A}_{i}\) where the \(\mathbf{A}_{i}\) are subdirectly irreducible in \(\mathbf{H}(\mathsf{Q})\cap\mathbf{H}(\mathsf{Q}^{\prime})\). Then for all \(i\) there is \(\mathbf{B}_{i}\in\mathsf{Q}^{\prime}\) such that \(\mathbf{A}_{i}\in\mathbf{H}(\mathbf{B}_{i})\) and hence by hypothesis \(\mathbf{A}_{i}\in\mathbf{SP}_{u}(\mathbf{B}_{i})\) and so \(\mathbf{A}_{i}\in\mathsf{Q}^{\prime}\) for all \(i\). Therefore \(\mathbf{B}\in\mathsf{Q}^{\prime}\), so \(\mathbf{H}(\mathsf{Q}^{\prime})\cap\mathsf{Q}=\mathsf{Q}^{\prime}\) and \(\mathsf{Q}^{\prime}\) is equational in \(\mathsf{Q}\). Therefore \(\mathsf{Q}\) is primitive and (1) holds.
As commented in the preliminary section (Subsection 2.1), primitivity is preserved under categorical equivalence, and therefore:
**Corollary 4.9**.: _A quasivariety is primitive if and only if \(\mathcal{L}_{\mathsf{Q}}\) is hereditarily structurally complete._
We will see how Theorem 4.8 can be improved in the locally finite case. Let \(\mathsf{Q}\) be a quasivariety and let \(\mathbf{A}\in\mathsf{Q}\); we define
\[[\mathsf{Q}:\mathbf{A}]=\{\mathbf{B}\in\mathsf{Q}:\mathbf{A}\notin\mathbf{IS}( \mathbf{B})\}.\]
The following lemma describes some properties of \([\mathsf{Q}:\mathbf{A}]\); the proofs are quite standard with the exception of point (3). As a matter of fact a proof of the forward implication of (3) appears in [50, Corollary 2.1.17]. However the proof is somewhat buried in generality and it is not easy to follow; so we felt that a suitable translation would make it easier for the reader.
**Lemma 4.10**.: _Let \(\mathsf{Q}\) be a quasivariety; then_
1. _if_ \(\mathbf{A}\in\mathsf{Q}\) _is finite and_ \(\mathsf{Q}\) _has finite type, then_ \([\mathsf{Q}:\mathbf{A}]\) _is a universal class;_
2. _if_ \(\mathbf{A}\) _is relative subdirectly irreducible and finitely presented, then_ \([\mathsf{Q}:\mathbf{A}]\) _is a quasivariety;_
3. \(\mathbf{A}\) _is weakly projective in_ \(\mathsf{Q}\) _if and only if_ \([\mathsf{Q}:\mathbf{A}]\) _is closed under_ \(\mathbf{H}\) _if and only if_ \([\mathsf{Q}:\mathbf{A}]\) _is equational relative to_ \(\mathsf{Q}\)_;_
4. _if_ \(\mathbf{A}\) _is relative subdirectly irreducible, finitely presented and weakly projective in_ \(\mathsf{Q}\)_, then_ \([\mathsf{Q}:\mathbf{A}]\) _is a variety._
_Moreover if \(\mathsf{Q}\) is locally finite of finite type, the converse implications in (1),(2) and (4) hold._
Proof.: For (1), if \(\mathbf{A}\) is finite, then there is a first order sentence \(\Psi\) such that, for all \(\mathbf{B}\in\mathsf{Q}\), \(\mathbf{B}\models\Psi\) if and only if \(\mathbf{A}\in\mathbf{IS}(\mathbf{B})\). In more detail, if \(|A|=n\),
\[\Psi:=\exists x_{1}\ldots\exists x_{n}(\&\{x_{i}\neq x_{j}:i,j\leq n,i\neq j \}\ \&\ \mathbf{D}(\mathbf{A})),\]
where \(\mathbf{D}(\mathbf{A})\) is the diagram of \(\mathbf{A}\), that is, a conjunction of equations that describe the operation tables of \(\mathbf{A}\) (identifying each element of \(\mathbf{A}\) with a different \(x_{i}\)), and \(\&\) is first order logic conjunction.
Consider \(\mathbf{B}\in\mathbf{ISP}_{u}([\mathsf{Q}:\mathbf{A}])\); we show that \(\mathbf{A}\notin\mathbf{IS}(\mathbf{B})\). If \(\mathbf{A}\in\mathbf{IS}(\mathbf{B})\), then \(\mathbf{A}\in\mathbf{ISP}_{u}([\mathsf{Q}:\mathbf{A}])\). Hence there exists a family \((\mathbf{A}_{i})_{i\in I}\subseteq[\mathsf{Q}:\mathbf{A}]\) and an ultrafilter \(U\) on \(I\) such that \(\mathbf{C}=\Pi_{i\in I}\mathbf{A}_{i}/U\) and \(\mathbf{A}\in\mathbf{IS}(\mathbf{C})\). So \(\mathbf{C}\models\Psi\); but then by Łoś's Lemma there is a (necessarily nonempty) set of indexes \(I^{\prime}\in U\) such that \(\Psi\) is valid in each \(\mathbf{A}_{i}\) with \(i\in I^{\prime}\), which is clearly a contradiction, since each \(\mathbf{A}_{i}\in[\mathsf{Q}:\mathbf{A}]\). Thus \(\mathbf{A}\notin\mathbf{IS}(\mathbf{B})\) and \(\mathbf{B}\in[\mathsf{Q}:\mathbf{A}]\), and therefore \(\mathbf{ISP}_{u}([\mathsf{Q}:\mathbf{A}])=[\mathsf{Q}:\mathbf{A}]\), which is a universal class by Lemma 2.1.
Conversely let \(\mathsf{Q}\) be locally finite of finite type; every algebra in \(\mathsf{Q}\) is embeddable in an ultraproduct of its finitely generated (i.e. finite) subalgebras, say \(\mathbf{A}\in\mathbf{ISP}_{u}(\{\mathbf{B}_{i}:i\in I\})\). If \(\mathbf{A}\) is not finite, then \(\mathbf{A}\notin\mathbf{S}(\mathbf{B}_{i})\) for all \(i\), so \(\mathbf{B}_{i}\in[\mathsf{Q}:\mathbf{A}]\) for all \(i\). Since \([\mathsf{Q}:\mathbf{A}]\) is universal, we would have that \(\mathbf{A}\in[\mathsf{Q}:\mathbf{A}]\), a clear contradiction. So \(\mathbf{A}\in\mathbf{IS}(\mathbf{B}_{i})\) for some \(i\) and hence it is finite.
For (2), suppose that \(\mathbf{A}\) is relative subdirectly irreducible and finitely presented, i.e. \(\mathbf{A}\cong\mathbf{F}_{\mathsf{Q}}(\mathbf{x})/\theta(\Sigma)\) where \(\mathbf{x}=(x_{1},\ldots,x_{n})\) and \(\Sigma=\{p_{i}(\mathbf{x})\approx q_{i}(\mathbf{x}):i=1,\ldots,m\}\). We set \(a_{i}=x_{i}/\theta(\Sigma)\); since \(\mathbf{A}\) is relative subdirectly irreducible, it has a relative monolith \(\mu\), i.e. a minimal nontrivial relative congruence. Since \(\mu\) is minimal, there are \(c,d\in A\) such that \(\mu\) is the relative congruence generated by the pair \((c,d)\). Now let \(t_{c},t_{d}\) be terms in \(\mathbf{F}_{\mathsf{Q}}(\mathbf{x})\) such that \(t_{c}(a_{1},\ldots,a_{n})=c\) and \(t_{d}(a_{1},\ldots,a_{n})=d\) and let \(\Phi\) be the quasiequation
\[\bigwedge_{i=1}^{m}p_{i}(\mathbf{x})\approx q_{i}(\mathbf{x})\ \longrightarrow\ t_{c}( \mathbf{x})\approx t_{d}(\mathbf{x}).\]
Then \(\mathbf{A}\not\models\Phi\); moreover if \(\mathbf{C}\in\mathsf{Q}\) is a homomorphic image of \(\mathbf{A}\) which is not isomorphic with \(\mathbf{A}\), then \(\mathbf{C}\models\Phi\). We claim that \([\mathsf{Q}:\mathbf{A}]=\{\mathbf{B}\in\mathsf{Q}:\mathbf{B}\models\Phi\}\) and since \(\Phi\) is a quasiequation this implies that \([\mathsf{Q}:\mathbf{A}]\) is a quasivariety. Clearly if \(\mathbf{B}\models\Phi\), then \(\mathbf{A}\notin\mathbf{IS}(\mathbf{B})\); conversely assume that \(\mathbf{B}\not\models\Phi\). Then there are \(b_{1},\ldots,b_{n}\in B\) such that \(p_{i}(b_{1},\ldots,b_{n})=q_{i}(b_{1},\ldots,b_{n})\) but \(t_{c}(b_{1},\ldots,b_{n})\neq t_{d}(b_{1},\ldots,b_{n})\). Let \(g\) be the homomorphism extending the assignment \(x_{i}\longmapsto b_{i}\); then \(\theta(\Sigma)\subseteq\ker(g)\) so by the Second Homomorphism Theorem there is a homomorphism \(f:\mathbf{A}\longrightarrow\mathbf{B}\) such that \(f(a_{i})=b_{i}\). Observe that \(f(\mathbf{A})\in\mathsf{Q}\) (since it is a subalgebra of \(\mathbf{B}\in\mathsf{Q}\)) and \(f(\mathbf{A})\not\models\Phi\), so by what we said above \(f(\mathbf{A})\cong\mathbf{A}\); this clearly implies \(\mathbf{A}\in\mathbf{IS}(\mathbf{B})\), so \(\mathbf{B}\notin[\mathsf{Q}:\mathbf{A}]\) as wished.
For the converse, let \(\mathsf{Q}\) be locally finite of finite type; by (1) \(\mathbf{A}\) is finite. Suppose that \(\mathbf{A}\leq_{sd}\prod_{i\in I}\mathbf{B}_{i}\) where each \(\mathbf{B}_{i}\) is relative subdirectly irreducible in \(\mathsf{Q}\). Since \(\mathbf{A}\) is finite, each \(\mathbf{B}_{i}\) can be taken to be finite; if \(\mathbf{A}\notin\mathbf{IS}(\mathbf{B}_{i})\) for all \(i\), then \(\mathbf{B}_{i}\in[\mathsf{Q}:\mathbf{A}]\) for all \(i\) and hence, since \([\mathsf{Q}:\mathbf{A}]\) is a quasivariety, we have \(\mathbf{A}\in[\mathsf{Q}:\mathbf{A}]\), which is impossible. Hence there is an \(i\) such that \(\mathbf{A}\in\mathbf{IS}(\mathbf{B}_{i})\), so that \(|A|\leq|B_{i}|\); on the other hand \(\mathbf{B}_{i}\in\mathbf{H}(\mathbf{A})\), so \(|B_{i}|\leq|A|\). Since everything is finite we have \(\mathbf{A}\cong\mathbf{B}_{i}\) and then \(\mathbf{A}\) is relative subdirectly irreducible.
For the first forward direction of (3), suppose that \(\mathbf{B}\in\mathbf{H}([\mathsf{Q}:\mathbf{A}])\). If \(\mathbf{A}\in\mathbf{IS}(\mathbf{B})\), then \(\mathbf{A}\in\mathbf{SH}([\mathsf{Q}:\mathbf{A}])\subseteq\mathbf{HS}([\mathsf{Q}:\mathbf{A}])\). Now \([\mathsf{Q}:\mathbf{A}]\subseteq\mathsf{Q}\) and \(\mathbf{A}\) is weakly projective in \(\mathsf{Q}\); so \(\mathbf{A}\in\mathbf{S}([\mathsf{Q}:\mathbf{A}])\), which is impossible. It follows that \(\mathbf{A}\notin\mathbf{IS}(\mathbf{B})\) and \(\mathbf{B}\in[\mathsf{Q}:\mathbf{A}]\); thus \([\mathsf{Q}:\mathbf{A}]\) is closed under \(\mathbf{H}\). For the second forward direction, it is easy to see that if \([\mathsf{Q}:\mathbf{A}]\) is closed under \(\mathbf{H}\) then \([\mathsf{Q}:\mathbf{A}]\) is equational relative to \(\mathsf{Q}\). Assume now that \([\mathsf{Q}:\mathbf{A}]\) is closed under \(\mathbf{H}\); we show that \(\mathbf{A}\) is weakly projective in \(\mathsf{Q}\). Suppose that \(\mathbf{A}\in\mathbf{H}(\mathbf{B})\) for some \(\mathbf{B}\in\mathsf{Q}\); if \(\mathbf{A}\notin\mathbf{IS}(\mathbf{B})\), then \(\mathbf{B}\in[\mathsf{Q}:\mathbf{A}]\) and, since \([\mathsf{Q}:\mathbf{A}]\) is closed under \(\mathbf{H}\), \(\mathbf{A}\in[\mathsf{Q}:\mathbf{A}]\), again a contradiction. Hence \(\mathbf{A}\in\mathbf{IS}(\mathbf{B})\) and \(\mathbf{A}\) is weakly projective in \(\mathsf{Q}\). A completely
analogous proof shows that if \([\mathsf{Q}:\mathbf{A}]\) is equational relative to \(\mathsf{Q}\) then \(\mathbf{A}\) is weakly projective, which completes the proof of (3).
(4) follows directly from (1), (2) and (3).
Thus if \(\mathbf{A}\) is relative subdirectly irreducible and finitely presented, then \([\mathsf{Q}:\mathbf{A}]\) is a quasivariety; this is the key to proving the following result, which appears in [50, Proposition 5.1.24]. We present a self-contained proof for the sake of the reader.
**Theorem 4.11** ([50]).: _If \(\mathsf{Q}\) is a locally finite quasivariety of finite type, then the following are equivalent._
1. \(\mathsf{Q}\) _is primitive;_
2. _for all finite relative subdirectly irreducible_ \(\mathbf{A}\in\mathsf{Q}\)_,_ \([\mathsf{Q}:\mathbf{A}]\) _is equational relative to_ \(\mathsf{Q}\)_;_
3. _every finite relative subdirectly irreducible_ \(\mathbf{A}\in\mathsf{Q}\) _is weakly projective in_ \(\mathsf{Q}\)_;_
4. _every finite relative subdirectly irreducible_ \(\mathbf{A}\in\mathsf{Q}\) _is weakly projective in the class of finite algebras in_ \(\mathsf{Q}\)_._
Proof.: (2) and (3) are equivalent by Lemma 4.10, and (3) and (4) are equivalent in locally finite quasivarieties.
Now, (1) implies (2) by Lemma 4.10, since if \(\mathbf{A}\) is a finite relative subdirectly irreducible algebra then \([\mathsf{Q}:\mathbf{A}]\) is a quasivariety, and if \(\mathsf{Q}\) is primitive every subquasivariety is equational relative to \(\mathsf{Q}\) by definition.
Finally, assume (3) and let \(\mathsf{Q}^{\prime}\) be a subquasivariety of \(\mathsf{Q}\); consider a finite algebra \(\mathbf{B}\in\mathbf{H}(\mathsf{Q}^{\prime})\cap\mathsf{Q}\), then \(\mathbf{B}\) is a subdirect product of finite relative subdirectly irreducible algebras in \(\mathsf{Q}\), that is, \(\mathbf{B}\leq_{sd}\prod_{i\in I}\mathbf{A}_{i}\) where each \(\mathbf{A}_{i}\) is finite relative subdirectly irreducible in \(\mathsf{Q}\), and thus it is also weakly projective in \(\mathsf{Q}\) by hypothesis. Since \(\mathbf{B}\in\mathbf{H}(\mathsf{Q}^{\prime})\), there is \(\mathbf{A}\in\mathsf{Q}^{\prime}\) such that \(\mathbf{B}\in\mathbf{H}(\mathbf{A})\). But then for each \(i\in I\), \(\mathbf{A}_{i}\in\mathbf{H}(\mathbf{A})\); since each \(\mathbf{A}_{i}\) is weakly projective in \(\mathsf{Q}\), it is also isomorphic to a subalgebra of \(\mathbf{A}\). Thus, \(\mathbf{B}\in\mathbf{ISP}(\mathbf{A})\subseteq\mathsf{Q}^{\prime}\), and therefore \(\mathsf{Q}^{\prime}=\mathbf{H}(\mathsf{Q}^{\prime})\cap\mathsf{Q}\), which means that \(\mathsf{Q}\) is primitive and (1) holds.
Most results in the literature are about structurally complete and primitive _varieties_ of algebras and the reason is quite obvious; first the two concepts are easier to formulate for varieties. Secondly being subdirectly irreducible is an absolute concept (every subdirectly irreducible algebra is relative subdirectly irreducible in any quasivariety to which it belongs) while being relative subdirectly irreducible depends on the subquasivariety we are considering. Of course when a quasivariety is generated by a "simple" class (e.g. by finitely many finite algebras), then Theorem 2.3(2) gives a simple solution, but in general describing the relative subdirectly irreducible algebras in a quasivariety is not an easy task.
Now, it is clear that if \(\mathsf{Q}\) is non-negative universally complete, then it is structurally complete. Finding examples of (quasi)varieties that are structurally complete but not primitive is not easy; one idea is to find a finite algebra \(\mathbf{A}\) such that \(\mathbf{A}\) satisfies the hypotheses of Lemma 4.4, but \(\mathbf{V}(\mathbf{A})\) contains some strict (i.e. not a variety) subquasivariety. We will see an example of this in Section 5.1. Let us now show some different kinds of examples of primitive (quasi)varieties.
**Example 4.12**.: The variety of bounded distributive lattices is primitive (as we will discuss in Section 5.1), since it is equationally complete and congruence modular and so is q-minimal by Theorem 3.27.
It is well-known (and easy to check) that the variety of distributive lattices is a _dual discriminator variety_; a _dual discriminator_ on a set \(A\) is a ternary operation \(d\) on \(A\) defined by
\[d(a,b,c)=\left\{\begin{array}{ll}c,&\mbox{if $a\neq b$;}\\ a,&\mbox{if $a=b$.}\end{array}\right.\]
A variety \(\mathsf{V}\) is a dual discriminator variety [41] if there is a ternary term that is the dual discriminator on all the subdirectly irreducible members of \(\mathsf{V}\). Dual discriminator varieties, as opposed to discriminator varieties, do not necessarily have projective unifiers. However, recently in [24] the authors have extended the results in [14] (such as Theorem 3.27) in two directions: every minimal dual discriminator variety is q-minimal, hence primitive; and, if the variety is also idempotent, then minimality can be dropped and the variety is primitive. This last fact gives rise to different examples of primitive varieties.
**Example 4.13**.: A _weakly associative lattice_ is an algebra \(\langle A,\,\vee,\,\wedge\rangle\) where \(\vee\) and \(\wedge\) are idempotent, commutative and satisfy the absorption laws but (as the name reveals) only a weak form of associativity. In [41] the authors proved that there is a largest dual discriminator variety \(\mathsf{U}\) of weakly associative lattices; since weakly associative lattices are idempotent, \(\mathsf{U}\) is the largest primitive variety of weakly associative lattices.
**Example 4.14**.: The _pure dual discriminator variety_\(\mathsf{D}\) (see [41, Theorem 3.2]) is a variety with a single ternary operation \(d(x,y,z)\) satisfying
\[d(x,y,y) \approx y\] \[d(x,y,x) \approx x\] \[d(x,x,y) \approx x\] \[d(x,y,d(x,y,z)) \approx d(x,y,z)\] \[d(u,v,d(x,y,z)) \approx d(d(u,v,x),d(u,v,y),d(u,v,z))\]
which is enough to prove that \(\mathsf{D}\) is a dual discriminator variety. Since \(d\) is idempotent, \(\mathsf{D}\) is an idempotent dual discriminator variety and so it is primitive.
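As a quick computational sanity check (an illustration added here, not part of the original development), the Python sketch below brute-forces the five identities above, together with idempotence, for the dual discriminator on a small set; the choice of a three-element set is arbitrary.

```python
from itertools import product

def d(a, b, c):
    # dual discriminator: d(a,b,c) = a if a == b, and c otherwise
    return a if a == b else c

A = range(3)  # any finite set will do

checks = {
    "d(x,y,y) = y": lambda x, y, z, u, v: d(x, y, y) == y,
    "d(x,y,x) = x": lambda x, y, z, u, v: d(x, y, x) == x,
    "d(x,x,y) = x": lambda x, y, z, u, v: d(x, x, y) == x,
    "d(x,y,d(x,y,z)) = d(x,y,z)":
        lambda x, y, z, u, v: d(x, y, d(x, y, z)) == d(x, y, z),
    "d(u,v,d(x,y,z)) = d(d(u,v,x),d(u,v,y),d(u,v,z))":
        lambda x, y, z, u, v:
            d(u, v, d(x, y, z)) == d(d(u, v, x), d(u, v, y), d(u, v, z)),
    "idempotence d(x,x,x) = x": lambda x, y, z, u, v: d(x, x, x) == x,
}

for name, identity in checks.items():
    assert all(identity(*t) for t in product(A, repeat=5)), name
print("all identities of the pure dual discriminator hold on a 3-element set")
```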
A different example is given by the following.
**Example 4.15**.: A _modal algebra_ is a Boolean algebra with a modal operator \(\square\), which we take as a basic unary operation, satisfying \(\square 1\approx 1\) and \(\square(x\wedge y)\approx\square x\wedge\square y\); there is an extensive literature on modal algebras (see for instance [77] and the bibliography therein). A modal algebra is a _K4-algebra_ if it satisfies \(\square x\leqslant\square\,\square x\); in [72] V.V. Rybakov classified all the primitive varieties of K4-algebras. However, very recently Carr [25] discovered a mistake in Rybakov's proof: in his description Rybakov missed some varieties, all of which have the property of containing a unifiable weakly projective algebra that is not projective. So any such variety, though primitive, does not have projective unifiers.
We now present some examples from (quasi)varieties that are the equivalent algebraic semantics of (fragments of) many-valued logics; in particular, of infinite-valued Łukasiewicz logic.
**Example 4.16**.: Wajsberg algebras are the equivalent algebraic semantics of infinite-valued Łukasiewicz logic in the signature of bounded commutative residuated lattices \((\cdot,\to,\wedge,\vee,0,1)\) and they are term-equivalent to the better-known MV-algebras [27]; Wajsberg hoops are their \(0\)-free subreducts. There are some recent results about these algebras [1]; in summary:
* the only primitive variety of Wajsberg algebras is the variety of Boolean algebras, that is also non-negative universally complete, and it is the only non-negative universally complete variety of Wajsberg algebras;
* there are nontrivial primitive quasivarieties of Wajsberg algebras;
* a proper variety of Wajsberg hoops is structurally complete if and only if it is primitive, if and only if every subdirectly irreducible member is either finite or perfect.
The third point above clearly implies that the variety of Wajsberg hoops is not primitive. Considering varieties of \(\to\)-subreducts, the \(\to\)-subreducts of Wajsberg hoops form a subvariety of \(\mathsf{BCK}\)-algebras usually denoted by \(\mathsf{LBCK}\); every locally finite subvariety of \(\mathsf{LBCK}\)-algebras is a variety of \(\mathsf{HBCK}\)-algebras, so it is universally complete. However:
* the only non locally finite subvariety is the entire variety \(\mathsf{LBCK}\)[59];
* \(\mathsf{LBCK}\) is generated as a quasivariety by its finite chains [2];
* every infinite chain contains all the finite chains as subalgebras [59];
* so if \(\mathsf{Q}\) is a quasivariety which contains only finitely many chains, then \(\mathbf{V}(\mathsf{Q})\) is locally finite, hence universally complete and so \(\mathsf{Q}=\mathbf{V}(\mathsf{Q})\);
* otherwise \(\mathsf{Q}\) contains infinitely many chains and so \(\mathbf{V}(\mathsf{Q})=\mathsf{Q}=\mathsf{LBCK}\).
Hence every subquasivariety of \(\mathsf{LBCK}\) is a variety and \(\mathsf{LBCK}\) is primitive. The status of non locally finite varieties of basic hoops and basic algebras is still unclear (except for the cases we mentioned) and it is under investigation.
### Active structurally complete quasivarieties
The problem of active structural completeness has been tackled in [35]; it is an extensive and profound paper touching many aspects and there is no need to reproduce it here. We will only state the definition and the main result, and display an example.
**Theorem 4.17** ([35]).: _For a quasivariety \(\mathsf{Q}\) the following are equivalent:_
1. \(\mathsf{Q}\) _is active structurally complete;_
2. _every unifiable algebra of_ \(\mathsf{Q}\) _is in_ \(\mathbf{Q}(\mathbf{F}_{\mathsf{Q}}(\omega))\)_;_
3. _every finitely presented unifiable algebra in_ \(\mathsf{Q}\) _is in_ \(\mathbf{Q}(\mathbf{F}_{\mathsf{Q}}(\omega))\)_;_
4. _every admissible quasiequation in_ \(\mathsf{Q}\) _is valid in all the finitely presented unifiable algebras in_ \(\mathsf{Q}\)_;_
5. _for every_ \(\mathbf{A}\in\mathsf{Q}\)_,_ \(\mathbf{A}\times\mathbf{F}_{\mathsf{Q}}\in\mathbf{Q}(\mathbf{F}_{\mathsf{Q}}( \omega))\)_._
6. _for every_ \(\mathbf{A}\in\mathsf{Q}_{rsi}\)_,_ \(\mathbf{A}\times\mathbf{F}_{\mathsf{Q}}\in\mathbf{ISP}_{u}(\mathbf{F}_{\mathsf{ Q}}(\omega))\)_._
Given that, we have as usual:
**Corollary 4.18**.: _A quasivariety \(\mathsf{Q}\) is active structurally complete if and only if \(\mathcal{L}_{\mathsf{Q}}\) is actively structurally complete._
**Example 4.19**.: An \(S4\)-algebra is a \(K4\)-algebra satisfying \(\square x\leq x\); if we define \(\Diamond x:=\neg\square\neg x\), then a _monadic_ algebra is an \(S4\)-algebra satisfying \(\Diamond x\leq\square\Diamond x\). Now let \(\mathbf{A}\), \(\mathbf{B}\) be the monadic algebra and the \(S4\)-algebra in Figure 3 and let \(\mathsf{V}=\mathbf{V}(\mathbf{A})\) and \(\mathsf{W}=\mathbf{V}(\mathbf{B})\).
Let \(\mathsf{U}=\mathsf{V}\vee\mathsf{W}\) (the varietal join); from [35, Section 8] one can deduce that:
* every finitely generated algebra in \(\mathsf{U}\) is isomorphic to the direct product of an algebra in \(\mathsf{V}\) and one in \(\mathsf{W}\), hence \(\mathsf{U}\) is locally finite;
* \(\mathsf{U}\) is active structurally complete but not structurally complete;
* \(\mathsf{U}\) does not have exact unifiers.
Since \(\mathsf{U}\) is locally finite and does not have exact unifiers, by Theorem 3.22 it cannot be active universally complete; so \(\mathsf{U}\) is an example of a variety that is active structurally complete but not active universally complete.
### Passive quasivarieties
Passive structurally complete quasivarieties have been studied in [66] in relation to the joint embedding property, while here we take a different path. We start with the following observation.
**Proposition 4.20**.: _A quasivariety \(\mathsf{Q}\) is passive structurally complete if and only if every non-negative passive admissible universal sentence is derivable in \(\mathsf{Q}\)._
Proof.: For the non-trivial direction, suppose \(\mathsf{Q}\) is passive structurally complete, and let \(\Sigma\Rightarrow\Delta\) be a non-negative (\(\Delta\neq\emptyset\)) passive admissible universal sentence. This means that \(\Sigma\) is not unifiable, and thus, each quasiequation \(\Sigma\Rightarrow\delta\), for any \(\delta\in\Delta\), is passive admissible. By hypothesis, each such \(\Sigma\Rightarrow\delta\) is valid in \(\mathsf{Q}\), thus so is \(\Sigma\Rightarrow\Delta\) and the conclusion holds.
It is clear that a key concept to study passive clauses is understanding the unifiability of the premises. In order to do so, we introduce the following notion.
**Definition 4.21**.: We say that a finite set of identities \(\Sigma\) is _trivializing_ in a class of algebras \(\mathsf{K}\) if the quasiequation \(\Sigma\Rightarrow(x\approx y)\) is valid in \(\mathsf{K}\), where the variables \(x,y\) do not appear in \(\Sigma\).
Notice that such a quasiequation \(\Sigma\Rightarrow(x\approx y)\) is valid in an algebra \(\mathbf{A}\) if and only if either \(\mathbf{A}\) is trivial, or there is no assignment of the variables of \(\Sigma\) in \(\mathbf{A}\) that makes \(\Sigma\) valid in \(\mathbf{A}\).
**Lemma 4.22**.: _Let \(\mathsf{Q}\) be a quasivariety, and let \(\Sigma\) be a finite set of equations in its language. The following are equivalent:_
1. \(\Sigma\) _is not unifiable in_ \(\mathsf{Q}\)_;_
2. \(\mathbf{F}_{\mathsf{Q}}\) _is nontrivial and_ \(\Sigma\) _is trivializing in_ \(\mathbf{Q}(\mathbf{F}_{\mathsf{Q}})\)_;_
3. \(\mathbf{F}_{\mathsf{Q}}\models\Sigma\Rightarrow\emptyset\)_._
Proof.: It is easy to see that (2) and (3) are equivalent, modulo the fact that a set of identities is trivializing in \(\mathbf{Q}(\mathbf{F}_{\mathsf{Q}})\) if and only if it is trivializing in \(\mathbf{F}_{\mathsf{Q}}\).
Let us now assume that the identities in \(\Sigma\) are on a (finite) set of variables \(X\). Then, given Lemma 2.21, \(\Sigma\) is not unifiable in \(\mathsf{Q}\) if and only if there is no homomorphism \(h:\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\to\mathbf{F}_{\mathsf{Q}}\). We show that the latter holds if and only if there is no homomorphism \(k:\mathbf{F}_{\mathsf{Q}}(X)\to\mathbf{F}_{\mathsf{Q}}\) such that \(k(t)=k(u)\) for each \(t\approx u\in\Sigma\). Indeed, for the non-trivial direction, suppose that there is a homomorphism \(k:\mathbf{F}_{\mathsf{Q}}(X)\to\mathbf{F}_{\mathsf{Q}}\) with the above property. Then \(\theta_{\mathsf{Q}}(\Sigma)\subseteq\ker(k)\), so \(k\) factors through the natural projection, i.e., there is a homomorphism \(h:\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\to\mathbf{F}_{\mathsf{Q}}\).
Notice that there is no homomorphism \(k:\mathbf{F}_{\mathsf{Q}}(X)\to\mathbf{F}_{\mathsf{Q}}\) such that \(k(t)=k(u)\) for each \(t\approx u\in\Sigma\) if and only if there is no assignment of variables in \(X\) validating \(\Sigma\) in \(\mathbf{F}_{\mathsf{Q}}\). The latter is equivalent to \(\mathbf{F}_{\mathsf{Q}}\models\Sigma\Rightarrow\emptyset\).
We are now ready to prove the characterization theorem.
**Theorem 4.23**.: _Let \(\mathsf{Q}\) be a quasivariety, then the following are equivalent._
1. \(\mathsf{Q}\) _is passive structurally complete;_
2. \(\mathbf{F}_{\mathsf{Q}}\models\Sigma\Rightarrow\emptyset\) _implies_ \(\Sigma\) _is trivializing in_ \(\mathsf{Q}\)_;_
3. _either_ \(\mathbf{F}_{\mathsf{Q}}\) _is trivial, or_ \(\Sigma\) _is trivializing in_ \(\mathbf{Q}(\mathbf{F}_{\mathsf{Q}})\) _implies_ \(\Sigma\) _is trivializing in_ \(\mathsf{Q}\)_;_
4. _every nontrivial finitely presented algebra is unifiable._
Proof.: We first show that (1) and (2) are equivalent. By definition, \(\mathsf{Q}\) is passive structurally complete if and only if each quasiequation \(\Sigma\Rightarrow\delta\) where \(\Sigma\) is not unifiable in \(\mathsf{Q}\) is valid in \(\mathsf{Q}\). That is, \(\Sigma\) not unifiable in \(\mathsf{Q}\) implies \(\mathsf{Q}\models\Sigma\Rightarrow\delta\), for all identities \(\delta\). By Lemma 4.22, the latter is equivalent to: \(\mathbf{F}_{\mathsf{Q}}\models\Sigma\Rightarrow\emptyset\) implies \(\mathsf{Q}\models\Sigma\Rightarrow\delta\), for all identities \(\delta\). In particular, this holds for \(\delta=(x\approx y)\), with \(x,y\) not appearing in \(\Sigma\). In turn, if \(\mathbf{F}_{\mathsf{Q}}\models\Sigma\Rightarrow\emptyset\) implies \(\mathsf{Q}\models\Sigma\Rightarrow(x\approx y)\), then clearly \(\mathsf{Q}\models\Sigma\Rightarrow\delta\) for any \(\delta\), and thus (1) \(\Leftrightarrow\) (2).
Now, (2) and (3) are equivalent by Lemma 4.22, thus the first three points are equivalent. Let us now assume (2) and prove (4). We consider a nontrivial finitely presented algebra in \(\mathsf{Q}\), \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\). If it is not unifiable, by Lemma 4.22\(\mathbf{F}_{\mathsf{Q}}\models\Sigma\Rightarrow\emptyset\). By (2) this implies that \(\Sigma\) is trivializing in \(\mathsf{Q}\), that is, \(\mathsf{Q}\models\Sigma\Rightarrow(x\approx y)\) (with \(x,y\) new variables). This clearly implies that \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\) is trivial, a contradiction. Thus \(\mathbf{F}_{\mathsf{Q}}(X)/\theta_{\mathsf{Q}}(\Sigma)\) is unifiable and (4) holds.
Finally, we prove that (4) implies (1). Suppose \(\Sigma\Rightarrow\delta\) is a passive quasiequation over variables in \(X\), that is, \(\Sigma\) is not unifiable in \(\mathsf{Q}\). By Lemma 4.22\(\mathbf{F}_{\mathsf{Q}}\models\Sigma\Rightarrow\emptyset\). Let \(x,y\) be variables not in
\(X\), and consider the finitely presented algebra \(\mathbf{F}_{\mathsf{Q}}(X^{\prime})/\theta_{\mathsf{Q}}(\Sigma)\), where \(X^{\prime}=X\cup\{x,y\}\) and suppose by way of contradiction that it is not trivial. By (4) it is unifiable, that is, there is a homomorphism \(h:\mathbf{F}_{\mathsf{Q}}(X^{\prime})/\theta_{\mathsf{Q}}(\Sigma)\to\mathbf{F}_ {\mathsf{Q}}\). Then, considering the natural epimorphism \(\pi_{\Sigma}:\mathbf{F}_{\mathsf{Q}}(X^{\prime})\to\mathbf{F}_{\mathsf{Q}}(X^{ \prime})/\theta_{\mathsf{Q}}(\Sigma)\), the composition \(h\pi_{\Sigma}\) is an assignment from \(X^{\prime}\) to \(\mathbf{F}_{\mathsf{Q}}\) satisfying \(\Sigma\); but \(\mathbf{F}_{\mathsf{Q}}\models\Sigma\Rightarrow\emptyset\), a contradiction. Thus \(\mathbf{F}_{\mathsf{Q}}(X^{\prime})/\theta_{\mathsf{Q}}(\Sigma)\) is trivial, and therefore \(x/\theta_{\mathsf{Q}}(\Sigma)=y/\theta_{\mathsf{Q}}(\Sigma)\). By Lemma 2.38\(\mathsf{Q}\models\Sigma\Rightarrow(x\approx y)\), and thus \(\mathsf{Q}\models\Sigma\Rightarrow\delta\) and (1) holds.
Analogously to the case of passive universal completeness, if the smallest free algebra is isomorphic to all its ultraproducts we can improve the previous result.
**Lemma 4.24**.: _Let \(\mathsf{Q}\) be a quasivariety such that \(\mathbf{I}(\mathbf{F}_{\mathsf{Q}})=\mathbf{IP}_{u}(\mathbf{F}_{\mathsf{Q}})\), then the following are equivalent._
1. _every nontrivial finitely presented algebra in_ \(\mathsf{Q}\) _is unifiable;_
2. _every nontrivial algebra in_ \(\mathsf{Q}\) _is unifiable._
Proof.: The proof is analogous to that of Lemma 3.31; we prove the nontrivial direction by contraposition. Consider an arbitrary algebra \(\mathbf{A}=\mathbf{F}_{\mathsf{Q}}(X)/\theta\in\mathsf{Q}\) and assume that it is not unifiable. Then there is no assignment \(h:\mathbf{F}_{\mathsf{Q}}(X)\to\mathbf{F}_{\mathsf{Q}}\) such that \(\mathbf{F}_{\mathsf{Q}},h\models\Sigma_{\theta}\), where \(\Sigma_{\theta}=\{t\approx u:(t,u)\in\theta\}\). Equivalently, \(\mathbf{F}_{\mathsf{Q}}\models\Sigma_{\theta}\Rightarrow\emptyset\). Now, the equational consequence relation relative to \(\mathbf{F}_{\mathsf{Q}}\) is finitary (since all ultraproducts of \(\mathbf{F}_{\mathsf{Q}}\) are isomorphic to \(\mathbf{F}_{\mathsf{Q}}\)); thus we obtain that \(\mathbf{F}_{\mathsf{Q}}\models\Sigma_{\theta}^{\prime}\Rightarrow\emptyset\), for \(\Sigma_{\theta}^{\prime}\) some finite subset of \(\Sigma_{\theta}\). But \(\mathbf{F}_{\mathsf{Q}}(X)/\theta\not\models\Sigma_{\theta}^{\prime}\Rightarrow(x\approx y)\) (with \(x,y\not\in X\)), since it is nontrivial, which contradicts (2) of Theorem 4.23; equivalently it contradicts (1) and thus the proof is complete.
**Corollary 4.25**.: _Let \(\mathsf{Q}\) be a quasivariety such that \(\mathbf{I}(\mathbf{F}_{\mathsf{Q}})=\mathbf{IP}_{u}(\mathbf{F}_{\mathsf{Q}})\), then the following are equivalent._
1. \(\mathsf{Q}\) _is passive structurally complete;_
2. \(\mathbf{F}_{\mathsf{Q}}\models\Sigma\Rightarrow\emptyset\) _implies_ \(\Sigma\) _is trivializing in_ \(\mathsf{Q}\)_;_
3. _either_ \(\mathbf{F}_{\mathsf{Q}}\) _is trivial, or_ \(\Sigma\) _is trivializing in_ \(\mathbf{Q}(\mathbf{F}_{\mathsf{Q}})\) _implies_ \(\Sigma\) _is trivializing in_ \(\mathsf{Q}\)_;_
4. _every nontrivial finitely presented algebra is unifiable;_
5. _every nontrivial algebra in_ \(\mathsf{Q}\) _is unifiable._
**Remark 4.26**.: The previous corollary can be applied whenever \(\mathbf{F}_{\mathsf{Q}}\) is finite, therefore to all locally finite quasivarieties, but also to more complex classes of algebras, e.g., all subquasivarieties of \(\mathsf{FL}_{w}\) (see Subsection 5.2).
We will see an interesting application of Theorem 4.23 (or Corollary 4.25) in substructural logics in Subsection 5.2; let us now show some other consequences. Given a quasivariety \(\mathsf{Q}\) whose smallest free algebra \(\mathbf{F}_{\mathsf{Q}}\) is nontrivial, let us consider the following set:
\[\mathcal{P}_{\mathsf{Q}}=\{\Sigma\Rightarrow\delta:\mathbf{Q}(\mathbf{F}_{\mathsf{Q}})\models\Sigma\Rightarrow\{x\approx y\},\ x,y\text{ not occurring in }\Sigma,\ \delta\text{ any identity}\}.\]
\(\mathcal{P}_{\mathsf{Q}}\) axiomatizes a subquasivariety of \(\mathsf{Q}\), that we denote with \(\mathsf{P}_{\mathsf{Q}}\). From Theorem 4.23 we get the following.
**Corollary 4.27**.: _Let \(\mathsf{Q}\) be a quasivariety such that \(\mathbf{F}_{\mathsf{Q}}\) is nontrivial. Every passive structurally complete subquasivariety of \(\mathsf{Q}\) is contained in \(\mathsf{P}_{\mathsf{Q}}\), which is the largest subquasivariety of \(\mathsf{Q}\) that is passive structurally complete._
Moreover, for locally finite quasivarieties the characterization theorem reads as follows.
**Corollary 4.28**.: _Let \(\mathsf{Q}\) be a locally finite quasivariety, then the following are equivalent._
1. \(\mathsf{Q}\) _is passive structurally complete;_
2. _every algebra in_ \(\mathsf{Q}\) _is unifiable;_
3. _every finite subdirectly irreducible in_ \(\mathsf{Q}\) _is unifiable._
A nontrivial algebra \(\mathbf{A}\) is _Kollar_ if it has no trivial subalgebras, and a quasivariety \(\mathsf{Q}\) is a _Kollar quasivariety_ if all nontrivial algebras in \(\mathsf{Q}\) are Kollar. By [58] if \(\mathbf{A}\) belongs to a Kollar quasivariety, \(1_{\mathbf{A}}\), the largest congruence of \(\mathbf{A}\), is compact in \(\operatorname{Con}_{\mathsf{Q}}(\mathbf{A})\); from there a straightforward application of Zorn's Lemma yields that if \(\mathbf{A}\) is nontrivial there is at least one maximal congruence \(\theta\in\operatorname{Con}_{\mathsf{Q}}(\mathbf{A})\) below \(1_{\mathbf{A}}\) (i.e. \(\mathbf{A}/\theta\) is relative simple).
**Theorem 4.29**.: _If \(\mathsf{Q}\) is a Kollar quasivariety and \(\mathbf{F}_{\mathsf{Q}}\) is the only finitely generated relative simple algebra in \(\mathsf{Q}\), then \(\mathsf{Q}\) is passive structurally complete._
Proof.: Let \(\mathbf{A}\) be a nontrivial finitely presented algebra in \(\mathsf{Q}\); since \(\mathsf{Q}\) is a Kollar quasivariety, \(\mathbf{A}\) has a relative simple homomorphic image, which must be finitely generated. Hence it must be isomorphic to \(\mathbf{F}_{\mathsf{Q}}\), so \(\mathbf{A}\) is unifiable; by Theorem 4.23, \(\mathsf{Q}\) is passive structurally complete.
**Corollary 4.30**.: _For a locally finite Kollar quasivariety \(\mathsf{Q}\) such that \(\mathbf{F}_{\mathsf{Q}}\) has no proper subalgebra the following are equivalent:_
1. \(\mathbf{F}_{\mathsf{Q}}\) _is the only finite relative simple algebra in_ \(\mathsf{Q}\)_;_
2. \(\mathsf{Q}\) _is passive structurally complete._
Proof.: If (1) holds, then (2) holds by Theorem 4.29. Conversely assume (2); then every nontrivial finitely presented algebra in \(\mathsf{Q}\) is unifiable. Since \(\mathsf{Q}\) is locally finite, \(\mathbf{F}_{\mathsf{Q}}\) is finite, and it is nontrivial since \(\mathsf{Q}\) is Kollar; now since \(\mathbf{F}_{\mathsf{Q}}\) has no proper subalgebra, no finite relative simple algebra different from \(\mathbf{F}_{\mathsf{Q}}\) can be unifiable, but \(\mathsf{Q}\) must contain at least one relative simple algebra [50, Theorem 3.1.8]. Hence \(\mathbf{F}_{\mathsf{Q}}\) must be relative simple and (1) holds.
The next results will allow us to find interesting applications in varieties of bounded lattices, which we will explore in Section 5.1. We say that an algebra \(\mathbf{A}\) in a variety \(\mathsf{V}\) is _flat_ if \(\mathbf{HS}(\mathbf{A})\) does not contain any simple algebra different from \(\mathbf{F}_{\mathsf{V}}\).
**Theorem 4.31**.: _Let \(\mathsf{V}\) be a Kollar variety; if every finitely generated algebra in \(\mathsf{V}\) is flat then \(\mathsf{V}\) is passive structurally complete. If \(\mathsf{V}\) is locally finite, then the converse holds as well._
Proof.: First, if \(\mathbf{F}_{\mathsf{V}}\) is trivial then \(\mathsf{V}\) is vacuously passive structurally complete. If \(\mathbf{F}_{\mathsf{V}}\) is nontrivial and every finitely generated algebra is flat, then the only finitely generated simple algebra in \(\mathsf{V}\) must be \(\mathbf{F}_{\mathsf{V}}\); since \(\mathsf{V}\) is Kollar, \(\mathsf{V}\) is passive structurally complete by Theorem 4.29.
If \(\mathsf{V}\) is locally finite and passive structurally complete, then \(\mathbf{F}_{\mathsf{V}}\) is the only finite simple algebra in \(\mathsf{V}\) by Corollary 4.30. It follows that no finite simple algebra different from \(\mathbf{F}_{\mathsf{V}}\) can appear in \(\mathbf{HS}(\mathbf{A})\) for any finite \(\mathbf{A}\in\mathsf{V}\). So every finite algebra in \(\mathsf{V}\) must be flat.
**Theorem 4.32**.: _Let \(\mathsf{V}\) be a congruence distributive Kollar variety; a finitely generated variety \(\mathsf{W}\subseteq\mathsf{V}\) is passive structurally complete if and only if each generating algebra is flat._
Proof.: Suppose that \(\mathsf{W}=\mathbf{V}(K)\) where \(K\) is a finite set of finite algebras; by Jónsson's Lemma any simple algebra in \(\mathsf{W}\) is in \(\mathbf{HS}(K)\). If \(K\) consists entirely of flat algebras, then there cannot be any simple algebra in \(\mathsf{W}\) different from \(\mathbf{F}_{\mathsf{W}}\), so \(\mathsf{W}\) is passive structurally complete. On the other hand if \(\mathbf{A}\in K\) is not flat, then there is an algebra \(\mathbf{B}\in\mathbf{HS}(K)\) which is simple and different from \(\mathbf{F}_{\mathsf{V}}\). Clearly \(\mathbf{B}\in\mathsf{W}\), which is therefore not passive structurally complete.
## 5. Applications to algebra and logic
In this last section we will see some relevant examples and applications of our results in the realm of algebra and (algebraic) logic that deserve a deeper exploration than the examples already presented in the previous sections. We will start by focusing on varieties of lattices and bounded lattices, and then move to their expansions that are the equivalent algebraic semantics of substructural logics: residuated lattices.
As a main result, in the last subsection we present the logical counterpart of the characterization of passive structural completeness in substructural logics with weakening: such a logic is passively structurally complete if and only if every classical contradiction is explosive in it. Building on this, from the algebraic perspective we axiomatize the largest variety of representable bounded commutative integral residuated lattices that is passively structurally complete (and such that all of its subquasivarieties have this property). Notice that this characterization establishes negative results as well: if a logic (or a quasivariety) is not passively structurally complete, then a fortiori it is not structurally complete either.
### (Bounded) lattices
In this subsection we start with some results about primitive (quasi)varieties of lattices, and then move to bounded lattices, where in particular we obtain some new results about passive structurally complete varieties.
#### 5.1.1. Primitivity in lattices
Many examples of quasivarieties that are primitive can be found in lattices satisfying _Whitman's condition_ (W); Whitman's condition is a universal sentence that holds in free lattices:
(W) \[\{x\wedge y\leqslant u\lor v\}\Rightarrow\{x\leqslant u\lor v,y\leqslant u \lor v,x\wedge y\leqslant u,x\wedge y\leqslant v\}.\]
Now a finite lattice is finitely projective in the variety of all lattices if and only if it satisfies (W) [33], which implies:
**Lemma 5.1**.: _Let \(\mathsf{K}\) be a finite set of finite lattices. If every lattice in \(\mathsf{K}\) satisfies (W) then \(\mathbf{Q}(\mathsf{K})\) is primitive._
Proof.: \(\mathbf{Q}(\mathsf{K})\) is locally finite and by Theorem 2.3(2) every relative subdirectly irreducible lies in \(\mathbf{IS}(\mathsf{K})\); as (W) is a universal sentence it is preserved under subalgebras, thus they all satisfy (W) and hence they are all finitely projective in the variety of lattices and then also in \(\mathbf{Q}(\mathsf{K})\). By Theorem 4.11(4), \(\mathbf{Q}(\mathsf{K})\) is primitive.
Luckily finite lattices satisfying (W) abound, so there is no shortage of primitive quasivarieties of lattices. For varieties of lattices the situation is slightly different; in particular, because of Lemma 4.4 it is not enough that all lattices in \(\mathsf{K}\) are weakly projective in \(\mathbf{V}(\mathsf{K})\) to guarantee that \(\mathbf{V}(\mathsf{K})\) is structurally complete.
First we introduce some lattices: \(\mathbf{M}_{n}\) for \(3\leqslant n\leqslant\omega\) are the modular lattices consisting of a top, a bottom, and \(n\) atoms while the lattices \(\mathbf{M}_{3,3}\) and \(\mathbf{M}_{3,3}^{+}\) are displayed in Figure 4.
Observe that all the above lattices, with the exception of \(\mathbf{M}_{3,3}\), satisfy (W). Now Gorbunov ([50], Theorem 5.1.29) showed that \(\mathbf{M}_{3,3}^{+}\) is _splitting_ in the lattice of subquasivarieties of modular lattices. More in detail for any quasivariety \(\mathsf{Q}\) of modular lattices, either \(\mathbf{M}_{3,3}^{+}\in\mathsf{Q}\) or else \(\mathsf{Q}=\mathbf{Q}(\mathbf{M}_{n})\) for some \(n\leqslant\omega\). Observe that, for \(n<\omega\), \(\mathsf{Q}(\mathbf{M}_{n})\) is primitive by Lemma 5.1 and \(\mathbf{V}(\mathbf{M}_{n})=\mathbf{Q}(\mathbf{M}_{n})\) by Lemma 2.4; then the only thing left to show is that \(\mathbf{V}(\mathbf{M}_{\omega})\) is a primitive variety and Gorbunov did exactly that. On the other hand no variety \(\mathsf{V}\) of lattices containing \(\mathbf{M}_{3,3}^{+}\) can be primitive; in fact \(\mathbf{M}_{3,3}\) is a simple homomorphic image of \(\mathbf{M}_{3,3}^{+}\) that cannot be embedded in \(\mathbf{M}_{3,3}^{+}\). By Lemma 2.4, \(\mathbf{Q}(\mathbf{M}_{3,3}^{+})\subsetneq\mathbf{V}(\mathbf{M}_{3,3}^{+})\), so \(\mathsf{V}\) contains a strict (i.e. not a variety) subquasivariety and cannot be primitive. Thus Gorbunov's result can be formulated as: _a variety of modular lattices is primitive if and only if it does not contain \(\mathbf{M}_{3,3}^{+}\)_. Note that it cannot be improved to quasivarieties: since \(\mathbf{M}_{3,3}^{+}\) satisfies (W), \(\mathbf{Q}(\mathbf{M}_{3,3}^{+})\) is primitive by Lemma 5.1. However we observe:
**Lemma 5.2**.: _If \(\mathsf{Q}\) is a quasivariety of modular lattices and \(\mathbf{M}_{3,3}\in\mathsf{Q}\), then \(\mathsf{Q}\) is not primitive._
Proof.: Clearly the two element lattice \(\mathbf{2}\in\mathsf{Q}\) and it is easy to check that \(\mathbf{M}_{3,3}^{+}\leqslant_{sd}\mathbf{2}\times\mathbf{M}_{3,3}\) so \(\mathbf{M}_{3,3}^{+}\in\mathsf{Q}\) and \(\mathbf{M}_{3,3}\in\mathbf{H}(\mathbf{M}_{3,3}^{+})\). Since \(\mathbf{M}_{3,3}\) cannot be embedded in \(\mathbf{M}_{3,3}^{+}\), in \(\mathsf{Q}\) there is a simple finite (so finitely
presented, since lattices have finite type) algebra that is not weakly projective. By Theorem 4.11, \(\mathsf{Q}\) is not primitive.
Therefore to find a variety of modular lattices that is structurally complete but not primitive it is enough to find a finite lattice \(\mathbf{F}\) such that \(\mathbf{M}_{3,3}^{+}\in\mathbf{V}(\mathbf{F})\) but \(\mathsf{K}=\{\mathbf{F}\}\) satisfies the hypotheses of Lemma 4.4. Bergman in [13] observed that the _Fano lattice_ \(\mathbf{F}\) has exactly those characteristics; the Fano lattice is the (modular) lattice of subspaces of \((\mathbb{Z}_{2})^{3}\) seen as a vector space over \(\mathbb{Z}_{2}\) and it is displayed in Figure 5.
Now:
* \(\mathbf{F}\) is projective in \(\mathbf{V}(\mathbf{F})\)[52];
* the subdirectly irreducible members of \(\mathbf{V}(\mathbf{F})\) are exactly \(\mathbf{2},\mathbf{M}_{3},\mathbf{M}_{3,3},\mathbf{F}\) and they are all subalgebras of \(\mathbf{F}\).
It follows that \(\mathbf{F}\) does not satisfy (W) (since \(\mathbf{M}_{3,3}\) does not), \(\mathbf{V}(\mathbf{F})\) is structurally complete and (since \(\mathbf{M}_{3,3}\in\mathbf{V}(\mathbf{F})\)) not primitive by Lemma 5.2; also \(\mathbf{Q}(\mathbf{F})\) is structurally complete but, since \(\mathbf{M}_{3,3}\in\mathbf{Q}(\mathbf{F})\), it cannot be primitive either.
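The failure of (W) in \(\mathbf{F}\) can also be verified mechanically. The Python sketch below (illustrative only; the encoding of subspaces as sets of vectors is our own choice) builds the subspace lattice of \((\mathbb{Z}_{2})^{3}\), with meet given by intersection and join by linear span, and searches for a quadruple witnessing the failure of Whitman's condition.

```python
from itertools import product

VECS = [tuple(v) for v in product((0, 1), repeat=3)]

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

def span(vectors):
    # smallest subspace of (Z_2)^3 containing the given vectors
    s = {(0, 0, 0)} | set(vectors)
    changed = True
    while changed:
        changed = False
        for u in list(s):
            for v in list(s):
                w = add(u, v)
                if w not in s:
                    s.add(w)
                    changed = True
    return frozenset(s)

# the 16 subspaces of (Z_2)^3 = the universe of the Fano lattice F
subspaces = {span(c) for r in range(4) for c in product(VECS, repeat=r)}
assert len(subspaces) == 16

leq  = lambda X, Y: X <= Y            # subspace inclusion
meet = lambda X, Y: frozenset(X & Y)  # intersection
join = lambda X, Y: span(X | Y)       # span of the union

def whitman_violation(L):
    # search for x,y,u,v with  x/\y <= u\/v  but none of the four conclusions
    for x, y, u, v in product(L, repeat=4):
        m, j = meet(x, y), join(u, v)
        if leq(m, j) and not (leq(x, j) or leq(y, j) or leq(m, u) or leq(m, v)):
            return x, y, u, v
    return None

print("(W) fails in F:", whitman_violation(list(subspaces)) is not None)
```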
Primitive varieties of lattices have been studied in depth in [53]; there the authors proved the following theorem that explains the behavior we have seen above.
**Theorem 5.3** ([53]).: _If \(\mathbf{A}\) is a lattice satisfying (W), then \(\mathbf{V}(\mathbf{A})\) is primitive if and only if every subdirectly irreducible lattice in \(\mathbf{HS}(\mathbf{A})\) satisfies (W)._
We believe that many of the techniques in [53] could be adapted to gain more understanding of primitive quasivarieties of lattices, but proceeding along this path would make this part too close to being a paper in lattice theory, and we have chosen a different focus. We only borrow an example from [53] that shows that Lemma 5.1 cannot be inverted for quasivarieties. Let \(\mathbf{H}^{+},\mathbf{H}\) be the lattices in Figure 6.
It is easily seen that the pair \(\mathbf{H}^{+},\mathbf{H}\) behaves almost like the pair \(\mathbf{M}_{3,3}^{+},\mathbf{M}_{3,3}\): \(\mathbf{H}^{+}\) satisfies (W) (so \(\mathbf{Q}(\mathbf{H}^{+})\) is primitive), \(\mathbf{H}\) does not satisfy (W) and \(\mathbf{H}^{+}\leq_{sd}\mathbf{2}\times\mathbf{H}\). As above we can conclude that \(\mathbf{V}(\mathbf{H}^{+})\) is not primitive. However \(\mathbf{V}(\mathbf{H})\) is primitive [53] so \(\mathbf{Q}(\mathbf{H})\) is a primitive quasivariety generated by a finite lattice not satisfying (W).
#### 5.1.2. Bounded lattices
We now focus on applications of our results in varieties of bounded lattices. A _bounded_ lattice is a lattice with two constants, \(0\) and \(1\), representing the bottom and the top of the lattice. Bounded lattices form a variety \(\mathsf{L}^{b}\) that shares many features with the variety of lattices. In particular, let \(\mathbf{2}^{b}\) be the two-element bounded lattice; then the variety of bounded distributive lattices is \(\mathsf{D}^{b}=\mathbf{ISP}(\mathbf{2}^{b})\). Therefore
\[\mathbf{Q}(\mathbf{F}_{\mathsf{D}^{b}}(\omega))\subseteq\mathsf{D}^{b}=\mathbf{ISP}(\mathbf{2}^{b})\subseteq\mathbf{Q}(\mathbf{F}_{\mathsf{D}^{b}}(\omega))\]
and by Theorem 4.1, the variety of bounded distributive lattices \(\mathsf{D}^{b}\) is structurally complete, as shown in [35]. In [14] it is shown that locally finite, congruence modular, minimal varieties are q-minimal; since these hypotheses apply to \(\mathsf{D}^{b}\), the latter is also primitive. However, it is not non-negative universally
complete; it is a nice exercise in general algebra to show that for any variety \(\mathsf{V}\) of bounded lattices, \(1\) is join irreducible in \(\mathbf{F}_{\mathsf{V}}(\omega)\). It follows that
\[\{x\lor y\approx 1\}\Rightarrow\{x\approx 1,y\approx 1\}\]
is an active universal sentence that is admissible in \(\mathsf{V}\). But it is clearly not derivable, since any nontrivial variety of bounded lattices contains \(\mathbf{2}^{b}\times\mathbf{2}^{b}\) which does not satisfy the universal sentence.
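To make the failure completely explicit, here is a minimal sketch (added for illustration) that checks the universal sentence in \(\mathbf{2}^{b}\times\mathbf{2}^{b}\), with pairs ordered componentwise:

```python
from itertools import product

elems = list(product((0, 1), repeat=2))                  # universe of 2^b x 2^b
join  = lambda x, y: (max(x[0], y[0]), max(x[1], y[1]))  # componentwise join
one   = (1, 1)

# counterexamples to  {x v y = 1} => {x = 1, y = 1}
bad = [(x, y) for x, y in product(elems, repeat=2)
       if join(x, y) == one and x != one and y != one]
print(bad)  # e.g. ((1, 0), (0, 1)): the premise holds but neither conclusion does
```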
**Proposition 5.4**.: _No nontrivial variety of bounded lattices is active universally complete._
Actually something more is true; if \(\mathsf{V}\) is a variety of bounded lattices that is structurally complete, then by Theorem 4.6, each finite subdirectly irreducible algebra \(\mathbf{A}\in\mathsf{V}\) must satisfy the above universal sentence, i.e. \(1\) must be join irreducible in \(\mathbf{A}\). But the bounded lattices \(\mathbf{N}_{5}^{b}\) and \(\mathbf{M}_{3}^{b}\) do not satisfy that, so any structurally complete variety of bounded lattices must omit them both. As in the unbounded case, this means that the variety must be the variety of bounded distributive lattices. Thus:
**Proposition 5.5** ([35]).: _The variety of bounded distributive lattices is the only (active) structurally complete variety of bounded lattices._
We have seen that active structural completeness does not have much meaning in bounded lattices. Passive structural completeness has more content, as we are now going to show. Notice that any variety of bounded lattices is Kollar and \(\mathbf{F}_{\mathsf{V}}=\mathbf{2}^{b}\) for any variety \(\mathsf{V}\) of bounded lattices. Since \(\mathbf{2}^{b}\) is simple and has no proper subalgebras, any simple bounded lattice not isomorphic with \(\mathbf{2}^{b}\) is not unifiable; in particular if a variety \(\mathsf{V}\) contains a finite simple lattice \(\mathbf{L}\) different from \(\mathbf{2}^{b}\), then \(\mathbf{V}(\mathbf{L})\) cannot be passive structurally complete by Corollary 4.30, and hence neither can \(\mathsf{V}\).
We will use this fact to show that the only variety of bounded modular lattices that is passive structurally complete is the one we already know to possess that property, i.e. the variety \(\mathsf{D}^{b}\) of bounded distributive lattices. A key step is to show that \(\mathbf{M}_{3}^{b}\) is splitting in the variety of bounded modular lattices; in the unbounded case, this follows from the fact that \(\mathbf{M}_{3}\) is projective and subdirectly irreducible. However, \(\mathbf{M}_{3}^{b}\) is not projective in the variety of bounded modular lattices. Indeed, the lattice in Figure 5.1.2 is a bounded modular lattice having \(\mathbf{M}_{3}^{b}\) as homomorphic image, but it has no subalgebra isomorphic with \(\mathbf{M}_{3}^{b}\), which hence cannot be a retract.
However we can use A. Day's idea in [34]; a finite algebra \(\mathbf{A}\) is _finitely projected_ in a variety \(\mathsf{V}\) if for any \(\mathbf{B}\in\mathsf{V}\), whenever \(f:\mathbf{B}\longrightarrow\mathbf{A}\) is surjective, there is a finite subalgebra \(\mathbf{C}\) of \(\mathbf{B}\) with \(f(\mathbf{C})\cong\mathbf{A}\). Clearly any finite projective lattice is finitely projected. A finite algebra \(\mathbf{A}\) is _splitting_ in a variety \(\mathsf{V}\) if \(\mathbf{A}\in\mathsf{V}\) and there is a subvariety \(\mathsf{W}_{\mathbf{A}}\subseteq\mathsf{V}\) such that for any variety \(\mathsf{U}\subseteq\mathsf{V}\) either \(\mathbf{A}\in\mathsf{U}\) or \(\mathsf{U}\subseteq\mathsf{W}_{\mathbf{A}}\). This simply means that the lattice of subvarieties of \(\mathsf{V}\) is the disjoint union of the filter generated by \(\mathbf{V}(\mathbf{A})\) and the ideal generated by \(\mathsf{W}_{\mathbf{A}}\). The key result is:
**Theorem 5.6**.: _([34], Theorem 3.7) If \(\mathsf{V}\) is a congruence distributive variety, then any finitely projected subdirectly irreducible algebra in \(\mathsf{V}\) is splitting in \(\mathsf{V}\)._
**Lemma 5.7**.: _Let \(\mathsf{V}^{b}\) be a variety of bounded lattices and let \(\mathsf{V}\) be the variety of lattice subreducts of \(\mathsf{V}^{b}\). If \(\mathbf{L}\) is finitely projected in \(\mathsf{V}\), then \(\mathbf{L}^{b}\) is finitely projected in \(\mathsf{V}^{b}\)._
Proof.: The fact that \(\mathsf{V}\) is indeed a variety is easy to check. Let now \(\mathbf{A}^{b}\in\mathsf{V}^{b}\) and suppose that there is an onto homomorphism \(f:\mathbf{A}^{b}\longrightarrow\mathbf{L}^{b}\); then \(f\) is onto from \(\mathbf{A}\) to \(\mathbf{L}\) and since \(\mathbf{L}\) is finitely projected in \(\mathsf{V}\) there is a subalgebra \(\mathbf{B}\) of \(\mathbf{A}\) with \(f(\mathbf{B})\cong\mathbf{L}\). But \(B\cup\{0,1\}\) is the universe of a finite subalgebra \(\mathbf{C}\) of \(\mathbf{A}^{b}\). Extend \(f\) to \(\hat{f}\) by setting \(\hat{f}(0)=0\) and \(\hat{f}(1)=1\); then \(\hat{f}(\mathbf{C})\cong\mathbf{L}^{b}\) and so \(\mathbf{L}^{b}\) is finitely projected in \(\mathsf{V}^{b}\).
**Theorem 5.8**.: _A variety of modular bounded lattices is passive structurally complete if and only if it is the variety of bounded distributive lattices._
Proof.: \(\mathsf{D}^{b}\) is structurally complete, hence passive structurally complete. Conversely observe that \(\mathbf{M}_{3}\) is projective in the variety of modular lattices, so \(\mathbf{M}_{3}^{b}\) is finitely projected in the variety of bounded modular lattices. Hence, by Theorem 5.6, \(\mathbf{M}_{3}^{b}\) is splitting in the variety, which means that for any variety \(\mathsf{V}\) of bounded modular lattices, either \(\mathbf{M}_{3}^{b}\in\mathsf{V}\) or \(\mathsf{V}\) is \(\mathsf{D}^{b}\). But if \(\mathbf{M}_{3}^{b}\in\mathsf{V}\) then \(\mathsf{V}\) cannot be passive structurally complete, since \(\mathbf{M}_{3}^{b}\) is simple. The conclusion follows.
In order to find other relevant varieties of bounded lattices that are passive structurally complete, we are going to take a closer look at flat lattices. Finding flat bounded lattices is not hard since the lattice of subvarieties of lattices has been studied thoroughly and a lot is known about it (an excellent survey is [54]). Clearly \(\mathbf{N}_{5}\) is flat and hence so is \(\mathbf{N}_{5}^{b}\); moreover, we know exactly all the covers of the minimal nonmodular variety of lattices (which is of course \(\mathbf{V}(\mathbf{N}_{5})\)). There are 15 finite subdirectly irreducible nonsimple lattices, commonly called \(\mathbf{L}_{1},\ldots,\mathbf{L}_{15}\) (some of them are in Figure 7), that generate all the join irreducible (in the lattice of subvarieties) covers of \(\mathbf{V}(\mathbf{N}_{5})\). It is easy to see that their bounded versions are all join irreducible covers of \(\mathbf{V}(\mathbf{N}_{5}^{b})\) in the lattice of subvarieties of bounded lattices. We suspect that they are also the only join irreducible covers; one only needs to check that the (rather long) proof for lattices [57] goes through for bounded lattices, but we leave this simple but tedious task to the reader. In any case for \(i=1,\ldots,15\) the subdirectly irreducible algebras in \(\mathbf{V}(\mathbf{L}_{i}^{b})\) are exactly \(\mathbf{2}^{b},\mathbf{N}_{5}^{b}\) and \(\mathbf{L}_{i}^{b}\) (via a straightforward application of Jónsson's Lemma); so each \(\mathbf{L}_{i}^{b}\) is flat and each \(\mathbf{V}(\mathbf{L}_{i}^{b})\) is passively structurally complete (by Theorem 4.32).
Let's make more progress: consider the rules
\[(SD_{\wedge}) x\wedge y\approx x\wedge z \Rightarrow\quad x\wedge y\approx x\wedge(y\lor z)\] \[(SD_{\vee}) x\lor y\approx x\lor z \Rightarrow\quad x\lor y\approx x\vee(y\wedge z).\]
A lattice is _meet semidistributive_ if it satisfies \(SD_{\wedge}\), _join semidistributive_ if it satisfies \(SD_{\vee}\), and _semidistributive_ if it satisfies both. Clearly (meet/join) semidistributive lattices form quasivarieties called \(\mathsf{SD}_{\wedge}\), \(\mathsf{SD}_{\vee}\) and \(\mathsf{SD}\) respectively, and so do their bounded versions. It is a standard exercise to show that homomorphic images of finite (meet/join) semidistributive lattices are (meet/join) semidistributive. It is also possible to show that none of the three quasivarieties (and their bounded versions) is a variety (see [54] p. 82 for an easy argument); they are also not locally finite since for instance \(\mathbf{F}=\mathbf{F}_{\mathsf{SD}}(x,y,z)\) is infinite; hence \(\mathbf{F}^{b}\) is a bounded infinite three-generated lattice and thus \(\mathsf{SD}^{b}\) is not locally finite either. A variety \(\mathsf{V}\) of (bounded) lattices is (meet/join) semidistributive if \(\mathsf{V}\subseteq\mathsf{SD}\) (respectively \(\mathsf{V}\subseteq\mathsf{SD}_{\wedge}\), \(\mathsf{V}\subseteq\mathsf{SD}_{\vee}\)).
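For small lattices both rules can be checked by brute force; the sketch below (an illustration, with the lattices encoded by their covering relations) confirms, for instance, that \(\mathbf{N}_{5}\) is semidistributive while \(\mathbf{M}_{3}\) satisfies neither rule.

```python
from itertools import product

def make_lattice(elems, covers):
    # order = reflexive-transitive closure of the covering pairs
    leq = {(x, y): x == y or (x, y) in covers for x in elems for y in elems}
    changed = True
    while changed:
        changed = False
        for x, y, z in product(elems, repeat=3):
            if leq[(x, y)] and leq[(y, z)] and not leq[(x, z)]:
                leq[(x, z)] = True
                changed = True
    def meet(a, b):
        lower = [c for c in elems if leq[(c, a)] and leq[(c, b)]]
        return next(c for c in lower if all(leq[(d, c)] for d in lower))
    def join(a, b):
        upper = [c for c in elems if leq[(a, c)] and leq[(b, c)]]
        return next(c for c in upper if all(leq[(c, d)] for d in upper))
    return elems, meet, join

def sd_meet(L):
    elems, meet, join = L
    return all(meet(x, y) != meet(x, z) or meet(x, y) == meet(x, join(y, z))
               for x, y, z in product(elems, repeat=3))

def sd_join(L):
    elems, meet, join = L
    return all(join(x, y) != join(x, z) or join(x, y) == join(x, meet(y, z))
               for x, y, z in product(elems, repeat=3))

N5 = make_lattice("0abc1", {("0", "a"), ("0", "b"), ("b", "c"),
                            ("a", "1"), ("c", "1")})
M3 = make_lattice("0abc1", {("0", "a"), ("0", "b"), ("0", "c"),
                            ("a", "1"), ("b", "1"), ("c", "1")})

print("N5:", sd_meet(N5), sd_join(N5))   # True True
print("M3:", sd_meet(M3), sd_join(M3))   # False False
```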
We need a little bit of lattice theory. A filter of \(\mathbf{L}\) is an upset \(F\) of \(\mathbf{L}\) that is closed under meet; a filter is _prime_ if \(a\lor b\in F\) implies \(a\in F\) or \(b\in F\). An _ideal_\(I\) of \(\mathbf{L}\) is the dual concept, i.e. a downset that is closed under join; an ideal is _prime_ if \(a\wedge b\in I\) implies \(a\in I\) or \(b\in I\). The following lemma is straightforward.
**Lemma 5.9**.: _If \(F\) is a prime filter of \(\mathbf{L}\) (\(I\) is a prime ideal of \(\mathbf{L}\)), then \(L\backslash F\) is a prime ideal of \(\mathbf{L}\) (\(\mathbf{L}\backslash I\) is a prime filter of \(\mathbf{L}\))._
**Lemma 5.10**.: _Any bounded (meet/join) semidistributive lattice is unifiable in the variety of bounded lattices._
Proof.: Let \(\mathbf{L}\) be bounded and meet semidistributive. Since \(\mathbf{L}\) is lower bounded by \(0\) a standard application of Zorn Lemma yields a maximal proper filter \(F\) of \(\mathbf{L}\); we claim that \(F\) is also prime. Let \(a,b\notin F\); then the filter generated by \(F\cup\{a\}\) must be the entire lattice. Hence there must be a \(c\in F\) with \(c\wedge a=0\); similarly there must be a \(d\in F\) with \(d\wedge b=0\). Let \(e=c\wedge d\); then \(e\in F\) and \(e\wedge a=e\wedge b=0\) and by meet semidistributivity \(e\wedge(a\lor b)=0\). But if \(a\lor b\in F\), then \(0\in F\), a clear contradiction. Hence \(a\lor b\notin F\) and \(F\) is prime.
Now let \(\varphi:\mathbf{L}\longrightarrow\mathbf{2}^{b}\) be defined by
\[\varphi(x)=\left\{\begin{array}{ll}1,&\mbox{if }x\in F;\\ 0,&\mbox{if }x\notin F.\end{array}\right.\]
Using the fact that \(F\) is prime and \(L\backslash F\) is prime it is straightforward to check that \(\varphi\) is a homomorphism. Therefore \(\mathbf{L}\) is unifiable.
A dual proof shows that the conclusion holds for join semidistributivity and a fortiori for semidistributivity.
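The construction in the proof can be replayed mechanically on a small example. The sketch below (illustrative only) enumerates the proper filters of \(\mathbf{N}_{5}^{b}\), picks a maximal one, and verifies that its characteristic function is a bounded-lattice homomorphism onto \(\mathbf{2}^{b}\).

```python
from itertools import combinations, product

E = ["0", "a", "b", "c", "1"]            # N5^b: 0 < a < 1 and 0 < b < c < 1
leq = {("0", "0"), ("a", "a"), ("b", "b"), ("c", "c"), ("1", "1"),
       ("0", "a"), ("0", "b"), ("0", "c"), ("0", "1"),
       ("a", "1"), ("b", "c"), ("b", "1"), ("c", "1")}
meet = lambda x, y: next(z for z in E if (z, x) in leq and (z, y) in leq
                         and all((w, z) in leq for w in E
                                 if (w, x) in leq and (w, y) in leq))
join = lambda x, y: next(z for z in E if (x, z) in leq and (y, z) in leq
                         and all((z, w) in leq for w in E
                                 if (x, w) in leq and (y, w) in leq))

def is_filter(F):
    upward = all(y in F for x in F for y in E if (x, y) in leq)
    closed = all(meet(x, y) in F for x in F for y in F)
    return upward and closed

proper = [set(S) for r in range(1, len(E))
          for S in combinations(E, r) if is_filter(set(S))]
F = next(G for G in proper if not any(G < H for H in proper))  # a maximal proper filter

phi = lambda x: 1 if x in F else 0       # characteristic map onto 2^b
hom = (phi("0") == 0 and phi("1") == 1 and
       all(phi(meet(x, y)) == min(phi(x), phi(y)) and
           phi(join(x, y)) == max(phi(x), phi(y))
           for x, y in product(E, repeat=2)))
print("maximal proper filter:", F, "-> homomorphism onto 2^b:", hom)
```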
**Proposition 5.11**.: _Any bounded finite (meet/join) semidistributive lattice is flat._
Proof.: If \(\mathbf{L}\) is finite and (meet/join) semidistributive, every lattice in \(\mathbf{HS}(\mathbf{L})\) is finite and (meet/join) semidistributive. So it is unifiable and, if simple, it must be equal to \(\mathbf{2}^{b}\); therefore \(\mathbf{L}\) is flat.
**Corollary 5.12**.: _Every locally finite (meet/join) semidistributive variety of bounded lattices is passive structurally complete._
In [61] several (complex) sets of equations implying semidistributivity are studied; one of them is useful to us, since it describes a class of locally finite varieties. The description is interesting in that it involves some of the \(\mathbf{L}_{i}\)'s we have introduced before.
**Theorem 5.13** ([61]).: _There exists a finite set \(\Gamma\) of lattice equations such that, if \(\mathsf{V}\) is any variety of lattices such that \(\mathsf{V}\models\Gamma\), then the following hold:_
1. \(\mathsf{V}\) _is semidistributive;_
2. \(\mathsf{V}\) _is locally finite;_
3. _among_ \(\mathbf{L}_{1},\ldots,\mathbf{L}_{15}\)_, only_ \(\mathbf{L}_{13},\mathbf{L}_{14},\mathbf{L}_{15}\) _can belong to_ \(\mathsf{V}\)_._
A variety satisfying \(\Gamma\) is called _almost distributive_ and it is straightforward to check that a similar result holds for varieties of bounded lattices. Therefore:
**Proposition 5.14**.: _Every almost distributive variety of bounded lattices is passive structurally complete._
We close this subsection with a couple of observations; first, \(\mathbf{V}(\mathbf{L}_{1}^{b},\mathbf{L}_{2}^{b})\) is a variety of bounded lattices that is passive structurally complete (by Theorem 4.32) but neither meet nor join semidistributive. Next, what about infinite flat (bounded) lattices? We stress that in [63] there are several examples of this kind and we believe that a careful analysis of the proofs therein could give some insight on how to construct a non locally finite variety of bounded lattices that is passive structurally complete. But again, this is not a paper in lattice theory; therefore we defer this investigation.
### Substructural logics and residuated lattices
Originally, _substructural logics_ were introduced as logics which, when formulated as Gentzen-style systems, lack some (including "none" as a special case) of the three basic _structural rules_ (i.e. exchange, weakening and contraction) of classical logic. Nowadays, substructural logics are usually intended as those logics whose equivalent algebraic semantics are residuated structures, and they encompass most of the interesting non-classical logics: intuitionistic logic, basic logic, fuzzy logics, relevance logics and many other systems. Precisely, by substructural logics we mean here the axiomatic extensions of the Full Lambek calculus \(\mathcal{FL}\) (see [43] for details and a survey on substructural logics). All these logics are _strongly algebraizable_: their equivalent algebraic semantics are all _varieties_ of \(\mathsf{FL}\)-algebras, particular residuated lattices that we shall now define.
A _residuated lattice_ is an algebra \(\mathbf{A}=\langle A,\vee,\wedge,\cdot,\backslash,/,1\rangle\) where
1. \(\langle A,\vee,\wedge\rangle\) is a lattice;
2. \(\langle A,\cdot,1\rangle\) is a monoid;
3. \(/\) and \(\backslash\) are the right and left divisions w.r.t. \(\cdot\), i.e., \(x\cdot y\leqslant z\) iff \(y\leqslant x\backslash z\) iff \(x\leqslant z/y\), where \(\leqslant\) is given by the lattice ordering.
Residuated lattices form a variety \(\mathsf{RL}\) and an equational axiomatization, together with many equations holding in these very rich structures, can be found in [19].
A residuated lattice \(\mathbf{A}\) is _integral_ if it satisfies the equation \(x\leqslant 1\); it is _commutative_ if \(\cdot\) is commutative, and in this case the divisions coincide: \(x\backslash y=y/x\), and they are usually denoted with \(x\to y\). The classes of residuated lattices that satisfy any combination of integrality and commutativity are subvarieties of \(\mathsf{RL}\). We shall call the variety of integral residuated lattices \(\mathsf{IRL}\), commutative residuated lattices \(\mathsf{CRL}\), and their intersection \(\mathsf{CIRL}\).
Residuated lattices with an extra constant \(0\) in the language are called \(\mathsf{FL}\)-algebras, since they are the equivalent algebraic semantics of the Full Lambek calculus \(\mathcal{FL}\). Residuated lattices are then the equivalent algebraic semantics of the \(0\)-free fragment of \(\mathcal{FL}\), \(\mathcal{FL}^{+}\). An \(\mathsf{FL}\)-algebra is \(0\)_-bounded_ if it satisfies the inequality \(0\leqslant x\) and the variety of zero-bounded \(\mathsf{FL}\)-algebras is denoted by \(\mathsf{FL}_{o}\); integral and \(0\)-bounded \(\mathsf{FL}\)-algebras are called \(\mathsf{FL}_{w}\)-algebras (since they are the equivalent algebraic semantics of the Full Lambek Calculus with weakening), and the commutative subvariety of \(\mathsf{FL}_{w}\) is denoted by \(\mathsf{FL}_{ew}\).
Restricting ourselves to the commutative case there is another interesting equation:
\[(x\to y)\vee(y\to x)\approx 1.\]
It can be shown (see [19] and [55]) that a subvariety of \(\mathsf{FL}_{\mathsf{ew}}\) or \(\mathsf{CIRL}\) satisfies the above equation if and only if any algebra therein is a subdirect product of totally ordered algebras, and this implies that all the subdirectly irreducible algebras are totally ordered. Such varieties are called _representable_ and the subvariety axiomatized by that equation is the largest subvariety of \(\mathsf{FL}_{\mathsf{ew}}\) or \(\mathsf{CIRL}\) that is representable.
The representable subvariety of \(\mathsf{FL}_{ew}\) is usually denoted by \(\mathsf{MTL}\), since it is the equivalent algebraic semantics of Esteva-Godo's _Monoidal t-norm based logic_[38].
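A concrete example to keep in mind here is the finite Łukasiewicz chain. The sketch below (illustrative; the \(n\)-element chain is encoded by the integers \(0,\ldots,n-1\), with the top element playing the role of \(1\)) verifies the residuation law from the definition of residuated lattices, the monoid unit, commutativity, integrality, and the prelinearity equation above, all by brute force.

```python
from itertools import product

n = 5                                        # the 5-element Lukasiewicz chain
top, elems = n - 1, range(n)

mult = lambda x, y: max(0, x + y - top)      # monoid operation
imp  = lambda x, y: min(top, top - x + y)    # residual x -> y
meet, join = min, max

residuation = all((mult(x, y) <= z) == (y <= imp(x, z))
                  for x, y, z in product(elems, repeat=3))
unit        = all(mult(x, top) == x for x in elems)
commutative = all(mult(x, y) == mult(y, x) for x, y in product(elems, repeat=2))
integral    = all(x <= top for x in elems)
prelinear   = all(join(imp(x, y), imp(y, x)) == top
                  for x, y in product(elems, repeat=2))

print(residuation, unit, commutative, integral, prelinear)  # True True True True True
```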
#### 5.2.1. Active universal completeness
We have already seen examples of subvarieties of \(\mathsf{FL}_{ew}\)-algebras that are active universally complete, but those were all locally finite subvarieties of \(\mathsf{BL}\)-algebras, that is, \(\mathsf{MTL}\)-algebras satisfying the divisibility equation: \(x\wedge y=x(x\to y)\). In this section we will display a different class of examples. If \(\mathbf{A}\) is any algebra, a congruence \(\theta\in\operatorname{Con}(\mathbf{A})\) is a _factor congruence_ if there is a \(\theta^{\prime}\in\operatorname{Con}(\mathbf{A})\) such that \(\theta\vee\theta^{\prime}=1_{\mathbf{A}}\), \(\theta\wedge\theta^{\prime}=0_{\mathbf{A}}\) and \(\theta,\theta^{\prime}\) permute. It is an easy exercise in general algebra to show that in this case \(\mathbf{A}\cong\mathbf{A}/\theta\times\mathbf{A}/\theta^{\prime}\); note that \(1_{\mathbf{A}}\) and \(0_{\mathbf{A}}\) are factor congruences that give a trivial decomposition.
**Lemma 5.15**.: _Let \(\mathbf{A}\) be any algebra and \(\theta\) a factor congruence; then \(\mathbf{A}/\theta\) is a retract of \(\mathbf{A}\) if and only if there is a homomorphism \(h:\mathbf{A}/\theta\longrightarrow\mathbf{A}/\theta^{\prime}\)._
Proof.: Suppose first that there is a homomorphism \(h:\mathbf{A}/\theta\longrightarrow\mathbf{A}/\theta^{\prime}\). Since \(\mathbf{A}\cong\mathbf{A}/\theta\times\mathbf{A}/\theta^{\prime}\), every \(u\in A\) can be written as \(u=(a/\theta,b/\theta^{\prime})\); we set \(f(u)=a/\theta\). Then \(f:\mathbf{A}\longrightarrow\mathbf{A}/\theta\) is clearly an epimorphism, since \((a/\theta,a/\theta^{\prime})\in A\) for all \(a\in A\). Let
\[g(a/\theta)=(a/\theta,h(a/\theta)).\]
One can check that \(g\) is a homomorphism with standard calculations and clearly \(fg=id_{\mathbf{A}/\theta}\). Hence \(\mathbf{A}/\theta\) is a retract of \(\mathbf{A}\).
Conversely suppose that \(f,g\) witness a retraction of \(\mathbf{A}/\theta\) into \(\mathbf{A}\); then if \(g(a/\theta)=(u/\theta,v/\theta^{\prime})\), set \(h(a/\theta)=v/\theta^{\prime}\). It is then easy to see that \(h\) is a homomorphism and the claim follows.
Observe that in any \(\mathsf{FL}\)-algebra, every compact (i.e., finitely generated) congruence is principal; as a matter of fact if \(\mathbf{A}\) is in \(\mathsf{FL}\), \(X=\{(a_{1},b_{1}),\ldots,(a_{n},b_{n})\}\) is a finite set of pairs from \(A\) and \(p=\bigwedge_{i=1}^{n}[(a_{i}\backslash b_{i})\wedge(b_{i}\backslash a_{i}) \wedge 1]\) then \(\vartheta_{\mathbf{A}}(X)=\vartheta_{\mathbf{A}}(p,1)\).
**Theorem 5.16**.: _Let \(\mathsf{Q}\) be a quasivariety of \(\mathsf{FL}_{w}\)-algebras in which every principal congruence is a factor congruence; then \(\mathsf{Q}\) has projective unifiers._
Proof.: Let \(\mathbf{F}_{\mathsf{Q}}(X)/\theta(\Sigma)\) be a finitely presented unifiable algebra in \(\mathsf{Q}\); then there is an onto homomorphism from \(\mathbf{F}_{\mathsf{Q}}(X)/\theta(\Sigma)\) to \(\mathbf{F}_{\mathsf{Q}}=\mathbf{2}\). Now \(\theta=\theta(\Sigma)\) is compact and hence, by the observation above, a principal congruence; so it is a factor congruence with witness \(\theta^{\prime}\), i.e. \(\mathbf{F}_{\mathsf{Q}}(X)\cong\mathbf{F}_{\mathsf{Q}}(X)/\theta\times\mathbf{F}_{\mathsf{Q}}(X)/\theta^{\prime}\). If \(\theta^{\prime}=1_{\mathbf{F}_{\mathsf{Q}}(X)}\), then \(\mathbf{F}_{\mathsf{Q}}(X)\cong\mathbf{F}_{\mathsf{Q}}(X)/\theta\) and so it is projective. Otherwise \(\mathbf{F}_{\mathsf{Q}}=\mathbf{2}\) is embeddable in \(\mathbf{F}_{\mathsf{Q}}(X)/\theta^{\prime}\); hence there is a homomorphism from \(\mathbf{F}_{\mathsf{Q}}(X)/\theta\) to \(\mathbf{F}_{\mathsf{Q}}(X)/\theta^{\prime}\). By Lemma 5.15, \(\mathbf{F}_{\mathsf{Q}}(X)/\theta\) is a retract of \(\mathbf{F}_{\mathsf{Q}}(X)\), i.e. it is projective.
So any quasivariety of \(\mathsf{FL}_{w}\)-algebras with the property that every principal congruence is a factor congruence is active universally complete (Theorem 3.22); in fact it is active primitive universally complete, because \(\mathbf{F}_{\mathsf{Q}}\) is the two-element algebra for any quasivariety \(\mathsf{Q}\) of \(\mathsf{FL}_{w}\)-algebras (Theorem 3.26). We observe in passing that for any \(\mathsf{FL}_{w}\)-algebra every factor congruence is principal; this is because every variety of \(\mathsf{FL}_{w}\)-algebras is Kollar and congruence distributive. Discriminator varieties of \(\mathsf{FL}_{ew}\)-algebras have been completely described in [60]; as a consequence we have:
**Theorem 5.17**.: _For a variety \(\mathsf{V}\) of \(\mathsf{FL}_{ew}\)-algebras the following are equivalent:_
1. \(\mathsf{V}\) _is a discriminator variety;_
2. \(\mathsf{V}\) _is semisimple, i.e. all the subdirectly irreducible members of_ \(\mathsf{V}\) _are simple;_
3. _there is an_ \(n\in\mathbb{N}\) _such that_ \(\mathsf{V}\models x\vee-x^{n}\approx 1\)_;_
4. _for any_ \(\mathbf{A}\in\mathsf{V}\) _every compact (i.e. principal) congruence is a factor congruence._
Proof.: The equivalence of (1), (2) and (3) has been proved in [60]. Assume then (1); it is well-known that in every discriminator variety every principal congruence is a factor congruence. In fact if \(\mathsf{V}\) is a discriminator variety with discriminator term \(t(x,y,z)\) let for any \(\mathbf{A}\in\mathsf{V}\) and \(a,b\in A\)
\[\theta_{\mathbf{A}}(a,b) =\{(u,v):t(a,b,u)=t(a,b,v)\}\] \[\gamma_{\mathbf{A}}(a,b) =\{(u,v):t(a,t(a,b,u),u)=t(a,t(a,b,v),v)\}.\]
Using the properties of the discriminator term it is easy to verify that they are congruences and the complement of each other; since discriminator varieties are congruence permutable they are factor congruences and (4) holds.
Conversely assume (4) and let \(\mathbf{A}\) be a subdirectly irreducible member of \(\mathsf{V}\). Let \(\mu_{\mathbf{A}}\) be the minimal nontrivial congruence of \(\mathbf{A}\); then \(\mu_{\mathbf{A}}\) is principal, so it must be a factor congruence. This is possible if and only if \(\mu_{\mathbf{A}}=1_{\mathbf{A}}\); therefore \(\mathbf{A}\) is simple, and \(\mathsf{V}\) is semisimple.
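For concreteness, and purely as an illustration of the verification mentioned in the proof, recall that in a discriminator variety the term \(t(x,y,z)\) realizes the ternary discriminator on every simple member: \(t(a,b,c)=c\) if \(a=b\) and \(t(a,b,c)=a\) otherwise. On a simple algebra the two relations therefore reduce to the trivial pair of complementary factor congruences,

\[\theta_{\mathbf{A}}(a,b)=\begin{cases}0_{\mathbf{A}}&\text{if }a=b\\ 1_{\mathbf{A}}&\text{if }a\neq b\end{cases}\qquad\qquad\gamma_{\mathbf{A}}(a,b)=\begin{cases}1_{\mathbf{A}}&\text{if }a=b\\ 0_{\mathbf{A}}&\text{if }a\neq b,\end{cases}\]

since for \(a\neq b\) one has \(t(a,b,u)=a\) for every \(u\), and hence \(t(a,t(a,b,u),u)=t(a,a,u)=u\), while for \(a=b\) one has \(t(a,b,u)=u\) and \(t(a,u,u)=a\).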
**Corollary 5.18**.: _Every discriminator (or, equivalently, semisimple) variety \(\mathsf{V}\) of \(\mathsf{FL}_{ew}\)-algebras is active primitive universal._
We observe that Theorem 5.16 does not add anything as far as \(\mathsf{BL}\)-algebras are concerned; in fact any discriminator variety of \(\mathsf{FL}_{ew}\)-algebras must satisfy \(x^{n}\approx x^{n+1}\) for some \(n\)[60] and the varieties of \(\mathsf{BL}\)-algebras with that property are exactly the locally finite varieties, which we already pointed out are active universally complete.
#### 5.2.2. Passive structural completeness
A particularly interesting application of our characterization of passive structurally complete varieties is in the subvariety of integral and \(0\)-bounded \(\mathsf{FL}\)-algebras. Let us rephrase Theorem 4.23 in this setting. First, using residuation it is easy to see that every finite set of identities in \(\mathsf{FL}\) is equivalent to a single identity. Moreover, in every subquasivariety \(\mathsf{Q}\) of \(\mathsf{FL}_{w}\), the smallest free algebra \(\mathbf{F}_{\mathsf{Q}}\) is the \(2\)-element Boolean algebra \(\mathbf{2}\), and its generated quasivariety is the variety of Boolean algebras.
**Corollary 5.19**.: _Let \(\mathsf{Q}\) be a quasivariety of \(\mathsf{FL}_{w}\)-algebras, then the following are equivalent:_
1. \(\mathsf{Q}\) _is passive structurally complete;_
2. _every trivializing identity in the variety of Boolean algebras is trivializing in_ \(\mathsf{Q}\)_;_
3. _every nontrivial finitely presented algebra is unifiable._
4. _every nontrivial algebra is unifiable._
The previous corollary has a possibly more transparent shape from the point of view of the logics. Let us call a formula \(\varphi\) in the language of \(\mathsf{FL}\)-algebras _explosive in a logic_\(\mathcal{L}\), with consequence relation \(\vdash_{\mathcal{L}}\), if \(\varphi\vdash_{\mathcal{L}}\delta\) for all formulas \(\delta\) in the language of \(\mathcal{L}\). Moreover, we call \(\varphi\) a _contradiction in_\(\mathcal{L}\) if \(\varphi\vdash_{\mathcal{L}}0\). Since \(\mathsf{FL}_{w}\)-algebras are \(0\)-bounded, it is clear that contradictions coincide with explosive formulas in all axiomatic extensions of \(\mathcal{FL}_{w}\).
**Corollary 5.20**.: _Let \(\mathcal{L}\) be an axiomatic extension of \(\mathcal{FL}_{w}\), then the following are equivalent:_
1. \(\mathcal{L}\) _is passively structurally complete._
2. _Every contradiction of classical logic is explosive in_ \(\mathcal{L}\)_._
3. _Every passive rule of_ \(\mathcal{L}\) _has explosive premises._
Let us first explore the consequences of the equivalence between (1) and (2) in Corollary 5.20. It is well known that intuitionistic logic is passively structurally complete (reported by Wronski at the \(51\)st Conference on the History of Logic, Krakow, 2005). This is easily seen by Corollary 5.20, indeed, observe that any contradiction of classical logic \(\varphi\) is such that its negation \(\neg\varphi\) is a theorem of classical logic. Using the Glivenko translation and the deduction theorem, we obtain that \(\varphi\) is explosive in intuitionistic logic as well, which is then passively structurally complete. We will now show how this argument can be extended to a wide class of logics.
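Spelled out (writing \(\vdash_{CL}\) and \(\vdash_{IL}\) for classical and intuitionistic derivability), the argument is the chain of implications

\[\varphi\vdash_{CL}0\;\Longrightarrow\;\vdash_{CL}\neg\varphi\;\Longrightarrow\;\vdash_{IL}\neg\neg\neg\varphi\;\Longrightarrow\;\vdash_{IL}\neg\varphi\;\Longrightarrow\;\varphi\vdash_{IL}0,\]

where the first and last steps use the deduction theorem, the second is the Glivenko translation, and the third is the intuitionistic law \(\neg\neg\neg\psi\leftrightarrow\neg\psi\).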
Let us write the negations corresponding to the two divisions as \(\neg x=x\backslash 0\) and \(\sim x=0/x\). Following [44, 45], we say that two logics \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) are _Glivenko equivalent_ if for all formulas \(\varphi\):
\[\vdash_{\mathcal{L}_{1}}\neg\varphi\ \ \text{iff}\ \ \vdash_{\mathcal{L}_{2}}\neg\varphi\]
(equivalently, \(\vdash_{\mathcal{L}_{1}}\sim\varphi\ \text{ iff }\ \vdash_{\mathcal{L}_{2}}\sim\varphi\)). Given a logic \(\mathcal{L}\), we call _Glivenko logic of \(\mathcal{L}\)_ the smallest substructural logic that is Glivenko equivalent to \(\mathcal{L}\). Moreover, we call _Glivenko logic of \(\mathcal{L}\) with respect to \(\mathcal{L}^{\prime}\)_, and denote by \(\mathsf{S}_{\mathcal{L}^{\prime}}(\mathcal{L})\), the smallest extension of \(\mathcal{L}^{\prime}\) that is Glivenko equivalent to \(\mathcal{L}\) (all these notions make sense by the results in [44, 45]). \(\mathsf{S}_{\mathcal{L}^{\prime}}(\mathcal{L})\) is axiomatized relative to \(\mathcal{L}^{\prime}\) by the set of axioms \(\{\neg{\sim}\varphi\,:\,\vdash_{\mathcal{L}}\varphi\}\).
Here we are interested in the Glivenko equivalent of classical logic with respect to \(\mathcal{FL}_{w}\). From the algebraic perspective, this corresponds to the largest subvariety of \(\mathsf{FL}_{w}\) that is Glivenko equivalent to Boolean algebras, \(\mathsf{G_{FL}_{w}(B)}\). The latter is axiomatized in [43, Corollary 8.33] as the subvariety of \(\mathsf{FL}_{w}\) satisfying:
1. \(\sim(x\wedge y)=\sim(xy)\)
2. \(\sim(x\lor y)=\sim(\neg x\lor y)\)
3. \(\neg(x\lor y)=\neg(\sim x\lor y)\)
4. \(\sim(x\lor y)=\sim(\neg\sim x\lor\neg\sim y)\)
5. \(\sim(x/y)=\sim(\neg\sim x/\neg\sim y)\).
**Theorem 5.21**.: _Every axiomatic extension \(\mathcal{L}\) of the Glivenko logic of classical logic with respect to \(\mathcal{FL}_{w}\) is passively structurally complete._
Proof.: Consider a contradiction \(\varphi\) of classical logic; by the deduction theorem \(\neg\varphi\) is a theorem of classical logic. Since \(\mathcal{L}\) extends the Glivenko logic of classical logic with respect to \(\mathcal{FL}_{w}\), Glivenko equivalence yields \(\vdash_{\mathcal{L}}\neg\varphi\), that is, \(\varphi\vdash_{\mathcal{L}}0\) and \(\varphi\) is explosive in \(\mathcal{L}\). Hence every contradiction of classical logic is explosive in \(\mathcal{L}\), and \(\mathcal{L}\) is passively structurally complete by Corollary 5.20.

We now specialize to \(\mathsf{MTL}\)-algebras. Recall that an element \(a\) of an \(\mathsf{FL}_{ew}\)-chain has finite order if \(a^{n}=0\) for some \(n\in\mathbb{N}\), and infinite order otherwise; a chain is called perfect if every element \(a\) has finite order if and only if \(\neg a\) has infinite order. Now, sDL turns out to be the variety generated by the perfect chains (see [73, 10]).
**Lemma 5.23**.: _A chain \(\mathbf{A}\in\mathsf{FL}_{\mathsf{c}ew}\) is perfect if and only if there is no element with finite order \(a\in A\) such that \(a\geq\neg a\)._
Proof.: By order preservation of the product, if there is an element \(a\in A\) with \(a\geq\neg a\) and \(a^{n}=0\), then both \(a\) and its negation have finite order, thus the chain is not perfect. Suppose now that the chain \(\mathbf{A}\) is not perfect. Observing that for every element \(x\in A\) it cannot be that both \(x\) and \(\neg x\) have infinite order, we get that there is an element \(a\in A\) such that both \(a\) and its negation \(\neg a\) have finite order. If \(a\geq\neg a\) we are done. Otherwise, since \(\mathbf{A}\) is a chain, \(a<\neg a\); then \(\neg\neg a\leq\neg a\), so \(\neg a\) is an element of finite order lying above its own negation.
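As a concrete illustration of the lemma (not needed in what follows), consider the standard MV-chain on \([0,1]\) with \(x\cdot y=\max(0,x+y-1)\) and \(\neg x=1-x\): the element \(a=\tfrac{1}{2}\) satisfies

\[a=\neg a\qquad\text{and}\qquad a^{2}=\max\left(0,\tfrac{1}{2}+\tfrac{1}{2}-1\right)=0,\]

so \(a\) has finite order and \(a\geq\neg a\), and the chain is not perfect. By contrast, in Chang's MV-algebra the elements of finite order are exactly the infinitesimals, each of which lies below its own negation, so that chain is perfect.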
**Theorem 5.24**.: _For a subvariety \(\mathsf{V}\) of \(\mathsf{MTL}\) the following are equivalent:_
1. \(\mathsf{V}\) _is passive structurally complete;_
2. \(\mathsf{V}\) _is a subvariety of_ sDL_._
Proof.: Since subvarieties of sDL have a Boolean retraction term, (2) implies (1) by Corollary 5.19. Suppose now that \(\mathsf{V}\nleq\mathsf{sDL}\). Then there is a chain \(\mathbf{A}\) in \(\mathsf{V}\) that is not perfect. By Lemma 5.23, there exists \(a\in A\) with \(a\geq\neg a\) and \(a^{n}=0\) for some \(n\in\mathbb{N}\). Thus \(\neg(a\vee\neg a)^{n}=1\). But the identity \(\neg(x\vee\neg x)^{n}=0\) holds in Boolean algebras. Thus \(\neg(x\vee\neg x)^{n}\approx 1\) is trivializing in Boolean algebras but not in \(\mathsf{V}\). By Corollary 5.19, \(\mathsf{V}\) is not passive structurally complete and thus (1) implies (2).
Notice that the previous theorem also implies that a variety of MTL-algebras that is not a subvariety of sDL cannot be structurally complete.
We mention that structural completeness in subvarieties of \(\mathsf{MTL}\) (or their logical counterparts) has been studied by several authors: e.g., [76] and [49] for Lukasiewicz logics, [36] for Godel logic, and [29] for fuzzy logics in the MTL framework; in the latter the authors show for instance that all subvarieties of pseudocomplemented MTL-algebras (\(\mathsf{SMTL}\)) are passive structurally complete. This result is here obtained as a consequence of Theorem 5.24, since \(\mathsf{SMTL}\) is a subvariety of sDL. From the results mentioned above and the characterization theorem, it also follows that the only varieties of MV-algebras (the equivalent algebraic semantics of infinite-valued Lukasiewicz logic) that are structurally complete are Boolean algebras and the variety generated by perfect MV-algebras (this result has been obtained following a different path in [49]).
We also remark that a variety of \(\mathsf{FL}_{ew}\)-algebras can be at most non-negative universally complete since trivial algebras are finitely presented and not unifiable (unifiability is a necessary condition for universal completeness by Theorem 3.4); by Proposition 4.20 this happens if and only if the variety is active universally complete and passive structurally complete. Thus, for instance, a semisimple variety of \(\mathsf{FL}_{ew}\)-algebras satisfying the conditions in Corollary 5.19 would be non-negative universally complete. We stress that this observation is not of particular interest in MTL-algebras, since the only discriminator variety in sDL is the variety of Boolean algebras. Indeed, consider a chain \(\mathbf{A}\) in a discriminator variety \(\mathsf{V}\) in sDL. Then there is some \(n\in\mathbb{N}\) such that \(\mathsf{V}\models x\vee\neg x^{n}\approx 1\). Let now \(a\in A\); either \(a\) has finite order, and then from \(a\vee\neg a^{n}\) we obtain that \(a=1\), or \(a\) has infinite order, and then \(\neg a\) has finite order. So by the analogous reasoning \(\neg a=1\). Therefore \(\mathbf{A}\) is the two-element chain, and \(\mathsf{V}\) is the variety of Boolean algebras.
## 6. Conclusions
In Figure 8 we display several classes of varieties that we have considered in this paper (and the labels should be self explanatory); we are dropping the hereditary subclasses to avoid clutter. Observe that this is really a meet semilattice under inclusion.
Almost all the classes are provably distinct.
1. The variety of bounded distributive lattices is structurally complete (Proposition 5.5) but it is neither passive universally complete, since it is Kollar and the least free algebra is not trivial, nor non-negative universally complete (Proposition 5.4). Hence \(S\neq NNU,S+PU\).
2. The variety of Boolean algebras is non-negative universally complete but not universally complete (Example 3.18) so \(NNU\neq U\).
3. Any locally finite variety of \(\mathsf{BL}\)-algebras is active universally complete and some of them are not non-negative universally complete (Example 3.24), so \(AU\neq NNU\).
4. The variety in Example 4.19 is active structurally complete but not active universally complete, hence \(AS\neq AU\).
5. Any locally finite variety of bounded semidistributive lattices different from the distributive variety is passive structurally complete (Corollary 5.12) but not structurally complete, since the only structurally complete variety of bounded distributive lattices is the distributive variety (Proposition 5.5); as above it is also not passive universally complete. Hence \(PS\neq S,PU\).
6. The variety \(\mathbf{V}(\mathbf{M}^{+}_{3,3})\) (Section 5.1) is passive universally complete, as any variety of lattices, but it is not structurally complete since \(\mathbf{Q}(\mathbf{M}^{+}_{3,3})\nleq\mathbf{V}(\mathbf{M}^{+}_{3,3})\); hence \(PU\neq S+PU\).
7. Example 7.11 in [35] shows that \(AS\neq S\).
Moreover for the primitive counterparts:
1. the variety \(\mathbf{V}(\mathbf{F})\) generated by the Fano lattice is structurally complete and passive universally complete but not primitive (Section 5.1).
2. the variety of De Morgan lattices (Example 3.29) is active universally complete but not active primitive universal.
3. the variety of injective monounary algebras is active structurally complete but not active primitive structural (Example 7.2 in [35]).
There are three examples that we were not able to find, which would guarantee total separation of all the classes we have considered:
1. A (quasi)variety that is structurally complete and passive universally complete, but not universally complete.
2. A non-negative universally complete (quasi)variety such that not all subquasivarieties are non-negative universally complete.
3. A universally complete variety which is not primitive universal.
The natural example for (3) would be a locally finite variety with exact unifiers having a subvariety without exact unifiers. However we are stuck because of a lack of examples: we have only one unifiable locally finite variety with exact (non projective) unifiers, i.e. the variety of distributive lattices, which is trivially primitive universal. A similar situation happens for (2); all the examples of non-negative universally complete varieties we have are either equationally complete and congruence distributive (so they do not have nontrivial subquasivarieties), or else are active universally complete just as a consequence of their characterization (such as the subvarieties of \(\mathsf{FL}_{ew}\) in Section 5.2). Then we have Stone algebras, which are not equationally complete, but whose only nontrivial subvariety is the variety of Boolean algebras, which is non-negative universally complete. Now from Corollary 2.30 it is immediate that every subquasivariety of \(\mathsf{ST}\) is non-negative universally complete. In conclusion a deeper investigation of universally complete and non-negative universally complete varieties is needed.

Figure 8. The universal and structural completeness classes.
For (1) the situation is (slightly) easier to tackle: any primitive variety of lattices that is not universally complete gives a counterexample. While it seems impossible that all the primitive varieties in Section 5.1.1 are universally complete, actually proving that one of them is not does not seem easy. This is due basically to the lack of information on free algebras in specific varieties of lattices, such as for instance \(\mathbf{V}(\mathbf{M}_{3})\); note that this variety is locally finite and hence all the finitely generated free algebras are finite. But we are not aware of any characterization of them.
|
2310.20397 | Convergence in Distribution of Randomized Algorithms: The Case of
Partially Separable Optimization | We present a Markov-chain analysis of blockwise-stochastic algorithms for
solving partially block-separable optimization problems. Our main contributions
to the extensive literature on these methods are statements about the Markov
operators and distributions behind the iterates of stochastic algorithms, and
in particular the regularity of Markov operators and rates of convergence of
the distributions of the corresponding Markov chains. This provides a detailed
characterization of the moments of the sequences beyond just the expected
behavior. This also serves as a case study of how randomization restores
favorable properties to algorithms that iterations of only partial information
destroys. We demonstrate this on stochastic blockwise implementations of the
forward-backward and Douglas-Rachford algorithms for nonconvex (and, as a
special case, convex), nonsmooth optimization. | D. Russell Luke | 2023-10-31T12:19:10Z | http://arxiv.org/abs/2310.20397v1 | # Convergence in Distribution of Randomized Algorithms:
###### Abstract
We present a Markov-chain analysis of blockwise-stochastic algorithms for solving partially block-separable optimization problems. Our main contributions to the extensive literature on these methods are statements about the Markov operators and distributions behind the iterates of stochastic algorithms, and in particular the regularity of Markov operators and rates of convergence of the _distributions_ of the corresponding Markov chains. This provides a detailed characterization of the moments of the sequences beyond just the expected behavior. This also serves as a case study of how randomization restores favorable properties to algorithms that iterations of only partial information destroys. We demonstrate this on stochastic blockwise implementations of the forward-backward and Douglas-Rachford algorithms for nonconvex (and, as a special case, convex), nonsmooth optimization.
**2010 Mathematics Subject Classification:** Primary 65C40, 90C06, 90C26; Secondary 46N30, 60J05, 49M27, 65K05.
Keywords:Nonconvex optimization, Large-scale optimization, Markov chain, Random function iteration, Error bounds, Convergence rates
## 1 Introduction
We present a Markov-chain analysis of blockwise-stochastic algorithms for solving
\[\underset{x\in\mathcal{E}}{\text{minimize}}\ f(x)+\sum_{j=1}^{m}g_{j}(x). \tag{1}\]
Here \(\mathcal{E}\) is a Euclidean space that is decomposed into a direct sum of the subspaces \(\mathcal{E}_{j}\), denoted \(\mathcal{E}=\bigoplus_{j=1}^{m}\mathcal{E}_{j}\), and for each \(j=1,2,\ldots,m\), the function \(f\) is continuously differentiable with blockwise-Lipschitz gradients, \(g_{i}\) is everywhere subdifferentially regular (the regular and limiting subgradients coincide) and
\[g_{j}(x)=h_{j}(x_{j}) \tag{2}\]
for \(h_{j}:\mathcal{E}_{j}\rightarrow(-\infty,+\infty]\) subdifferentially regular. This represents a partially separable structured optimization problem.
Problems with this structure are ubiquitous, and particular attention has focused on iterative algorithms for large-scale instances where the iterates are generated from only partial evaluation of the objective. Which partial information to access in each iteration is randomly selected and computations can be done in parallel across distributed systems [34; 36; 37; 43]. There is a rich literature on the analysis of these methods, focusing mainly on deterministic properties of the objective function and expectations, iteration complexity, convergence of objective values, and acceleration strategies [8; 10; 13; 31; 32; 33; 35; 36].
Our own contributions to the literature on such stochastic methods has focused on a stochastic block-coordinate primal-dual method for the instance of (1) where \(f(x)\) is the indicator function of an affine subspace [24]. We will touch on primal-dual approaches via a stochastic blockwise Douglas-Rachford Algorithm 3, but more practical primal-dual approaches to nonsmooth problems are not on the agenda of the present study.
Our main contributions to the extensive literature on these methods are statements about the Markov operators and distributions behind the iterates of stochastic algorithms in the most complete sense possible. By that we mean not only statements about the limits of the ergodic sequences, which only tell one about the expectation, but rather the limiting distributions of the sequence of measures behind the iterates, when viewed as a Markov chain (see Theorem 7 and Proposition 17). This allows one to access the moments of the limiting sequence, not just its mean.
Getting a handle on the distributions behind iterates of randomized algorithms is significant not only for its generality, but also for the range of practical applications this encompasses. To explain this we note that, in its most general form, _consistency_ of the update functions generating the Markov operators is not assumed. In plain terms, the update functions in the Markov chain need not have common fixed points. To see why this matters, it is first important to recognize that the literature on randomized algorithms is exclusively concerned with _almost sure_ convergence. In [19, Proposition 2.5] it is shown that almost sure convergence of the iterates of such Markov chains can only happen when the update functions have common fixed points. Situations where the update functions do not have common fixed points are only a small perturbation away: consider any fixed point iteration with numerical error. To be sure, the consistent case allows for tremendous simplifications, and we show this in sections 3.2 and 4.2.1; the point is, however, that our approach goes far beyond this idealized case.
Previous work has established a foundation for this based on a fixed point theoretic approach [5, 6, 7, 18, 19, 20]. A different perspective, modeled after a more direct analysis of the _descent_ properties of algorithms in an optimization context has been established recently by Salzo and Villa [39]. This was further developed in the masters thesis of Kartamyschew [21]. In the present work we extend the results of [21] to a fully nonconvex setting for more general mappings.
A noteworthy feature of blockwise methods, and what distinguishes the present study from [18, 19, 20] is that, even when the objective in (1) is convex, blockwise algorithms do _not_ satisfy the usual regularity properties enjoyed by convex optimization algorithms that lead generically to global convergence. This is demonstrated in Example 2. The stochastic implementations for convex problems, however, _do_ enjoy nice properties _in expectation_ (see Theorem 1), and this is enough to guarantee generic global convergence (Theorem 7, Proposition 17). While this fact lies implicitly behind the convergence analysis of, for instance, [24] and many others, it was recognized in [39] as the important property of _descent in expectation_. We place these observations in the context of Markov operators with update functions that satisfy desirable properties in expectation (see Theorem 3). These notions, at the level of the Markov operator, have already been defined in [18, 19, 20]; the convergence results presented in those works, however, are based on the assumption that each of the update functions that generate the Markov operator have the same _class_ of regularity that they have in expectation. Blockwise algorithms for partially separable optimization do not enjoy this structure, and therefore many of the results of [18, 19, 20] do not immediately apply; indeed, we conjecture that some of the stronger convergence results of [18, 19] are not true without additional compactness assumptions, hence our analogous global convergence statement for the convex case Proposition 6, is weaker than its counterparts [18, Theorem 3.6] or [19, Theorem 2.9].
The basic machinery of stochastic blockwise function iterations (Algorithm 1) and Markov chains is reviewed in section 2. In section 3 we review and establish the chain of regularity lifted from the regularity of the individual mappings on the sample space, Theorem 1, to the regularity of the corresponding Markov operators on the space of probability measures, Theorem 3. In section 3.2 the special case of consistent stochastic feasibility is detailed, showing in particular how the abstract objects for the general case simplify (see Theorem 5). In section 4 we present abstract convergence results, with and without rates (Proposition 6 and Theorem 7). The key to quantitative results in the space of probability measures is _metric subregularity_ of the _invariant Markov transport discrepancy_ (41). This is shown in the case of consistent stochastic feasibility to be _necessary_ for quantitative convergence of _paracontractive_ Markov operators in Theorem 9.
We return to the specialization of stochastic partial blockwise splitting algorithms in section 5, where we develop a case study of stochastic blockwise forward-backward splitting (Algorithm 2) and stochastic blockwise Douglas-Rachford (Algorithm 3), establishing the regularity of the corresponding fixed
point operators (Propositions 11-14) and convergence in distribution of the corresponding Markov chains (Proposition 17).
## 2 Notation and Random Function Iterations
As usual, \(\mathbb{N}\) denotes the natural numbers including \(0\). We denote by \(\mathscr{P}(G)\) the set of all probability measures on \(G\subset\mathcal{E}\); the measurable sets are given by the Borel sigma algebra on a subset \(G\subset\mathcal{E}\), denoted by \(\mathcal{B}(G)\). The notation \(X\sim\mu\in\mathscr{P}(G)\) means that the law of \(X\), denoted \(\mathcal{L}(X)\), satisfies \(\mathcal{L}(X):=\mathbb{P}^{X}:=\mathbb{P}(X\in\cdot)=\mu\), where \(\mathbb{P}\) is the probability measure on some underlying probability space. The open ball centered at \(x\in\mathcal{E}\) with radius \(r>0\) is denoted \(\mathbb{B}(x,r)\); the closure of the ball is denoted \(\overline{\mathbb{B}}(x,r)\). The distance of a point \(x\in\mathcal{E}\) to a set \(A\subset\mathcal{E}\) in the metric \(d\) is denoted by \(d(x,A):=\inf_{w\in A}d(x,w)\). The _projector_ onto a set \(A\) is denoted by \(P_{A}\) and \(P_{A}(x)\) is the set of all points where \(d(x,A)\) is attained. This is empty if \(A\) is open, and a singleton if \(A\) is closed and convex; generically, \(P_{A}\) is a (possibly empty) set-valued mapping, for which we use the notation \(P_{A}:\,\mathcal{E}\rightrightarrows\mathcal{E}\,\). For the ball of radius \(r\) around a subset of points \(A\subset\mathcal{E}\), we write \(\mathbb{B}(A,r):=\bigcup_{x\in A}\mathbb{B}(x,r)\).
Let \(\mathbb{I}\) denote an index set, each element \(i\in\mathbb{I}\) of which indexes a distinct nonempty subset of \(\{1,2,\ldots,m\}\): \(M_{i}\in 2^{\{1,2,\ldots,m\}}\setminus\{\emptyset\}\) for \(i\in\mathbb{I}\), where \(\{M_{i}\,:\,i\in\mathbb{I}\}=2^{\{1,2,\ldots,m\}}\setminus\{\emptyset\}\) and \(M_{i}\neq M_{j}\) for \(i\neq j\). For convenience we will let the first such subset be the set itself: \(M_{1}:=\{1,2,\ldots,m\}\). For \(i\in\mathbb{I}\) we denote the subspace \(\mathcal{E}_{M_{i}}:=\bigoplus_{j\in M_{i}}\mathcal{E}_{j}\) where \(\{\mathcal{E}_{1},\ldots,\mathcal{E}_{m}\}\) is a collection of mutually orthogonal subspaces of \(\mathcal{E}\). The complement to this space in \(\mathcal{E}\) is denoted \(\mathcal{E}_{M_{i}}^{\circ}:=\mathcal{E}\setminus\mathcal{E}_{M_{i}}\); likewise, denote the complement to the subset \(M_{i}\) in \(\{1,2,\ldots,m\}\) by \(M_{i}^{\circ}=\{1,2,\ldots,m\}\setminus M_{i}\). The _affine_ embedding of the subspace \(\mathcal{E}_{M_{i}}\) in \(\mathcal{E}\) at a point \(z\in\mathcal{E}\) is denoted \(\mathcal{E}_{M_{i}}\bigoplus\{z\}\); the canonical embedding of \(\mathcal{E}_{M_{i}}\) in \(\mathcal{E}\) is thus \(\mathcal{E}_{M_{i}}\bigoplus\{0\}\) where it is understood that \(0\in\mathcal{E}\). We use the corresponding notation for subsets \(G\subset\mathcal{E}\): \(G_{j}\subset\mathcal{E}_{j}\) and the affine embedding of a subset \(G_{M_{i}}\) at a point \(z\in G_{M_{i}^{\circ}}\) is given by \(G_{M_{i}}\bigoplus\{z\}_{M_{i}^{\circ}}\). The blockwise mappings \(T_{i}:\,\mathcal{E}\to\mathcal{E}\,\) corresponding to this structure are defined by
\[[T_{i}(x)]_{j}:=\begin{cases}T_{j}^{\prime}(x),&j\in M_{i},\\ x_{j}&\text{else},\end{cases}\quad\text{ for }\quad T_{j}^{\prime}:\, \mathcal{E}\to\mathcal{E}_{j}\,,\quad j=1,2,\ldots,m. \tag{3}\]
Note that \(T_{j}^{\prime}\) is some action with respect to the \(j\)'th block in \(\mathcal{E}_{j}\), though with input from \(x\in\mathcal{E}\).
The measure space of indexes is denoted \((\mathbb{I},\mathcal{I})\) and \(\xi\) is an \(\mathbb{I}\)-valued random variable on a probability space. The random variables \(\xi_{k}\) in the sequence \((\xi_{k})_{k\in\mathbb{N}}\) (abbreviated \((\xi_{k})\)) are independent and identically distributed (i.i.d.) with \(\xi_{k}\) distributed as \(\xi\) (\(\xi_{k}\sim\xi\)). At each iteration \(k\) of the algorithm one selects at random a nonempty subset of blocks \(M_{\xi_{k}}\subset\{1,2,\ldots,m\}\) and performs an update to each block as follows:
```
Initialization: Select a random variable \(X_{0}\) with distribution \(\mu\), \(t=(t_{1},t_{2},\ldots,t_{m})>0\), and \((\xi_{k})_{k\in\mathbb{N}}\) an i.i.d. sequence with values on \(\mathbb{I}\) and \(X_{0}\) and \((\xi_{k})\) independently distributed. Given \(T_{j}^{\prime}:\, \mathcal{E}\to\mathcal{E}_{j}\,\) for \(j=1,2,\ldots,m\). for \(k=0,1,2,\ldots\) do \[X^{k+1}=T_{\xi_{k}}(X^{k})\quad\text{where}\quad[T_{\xi_{k}}(X^{k})]_{j}:= \begin{cases}T_{j}^{\prime}(X^{k}),&j\in M_{\xi_{k}}\,,\\ X_{j}^{k}&\text{else}\end{cases}.\] (4)
```
**Algorithm 1** Stochastic Block Iteration (SBI)
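As a minimal computational sketch (not part of the analysis, and with block maps, block sizes, and sampling probabilities chosen purely for illustration), Algorithm 1 can be simulated as follows; the only structural ingredients are the random choice of an index set \(M_{\xi_k}\) and the rule (4) that leaves unselected blocks untouched.

```
import numpy as np

def sbi(x0, block_slices, block_updates, index_sets, eta, num_iters, rng=None):
    """Stochastic Block Iteration (Algorithm 1).

    block_slices  : list of slices carving E into blocks E_1, ..., E_m
    block_updates : list of maps T'_j taking the full iterate x in E and
                    returning the new j-th block in E_j
    index_sets    : list of index sets M_i (subsets of {0, ..., m-1})
    eta           : probabilities eta_i of selecting M_i
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        i = rng.choice(len(index_sets), p=eta)      # draw xi_k ~ eta
        x_new = x.copy()                            # blocks outside M_i are kept
        for j in index_sets[i]:
            x_new[block_slices[j]] = block_updates[j](x)
        x = x_new
    return x

# Illustrative use with two scalar blocks and contracting block maps:
slices = [slice(0, 1), slice(1, 2)]
updates = [lambda x: 0.5 * x[0:1], lambda x: 0.5 * x[1:2]]
sets = [[0, 1], [0], [1]]                            # M_1 = {1,2}, then the singletons
x_last = sbi([1.0, -2.0], slices, updates, sets, eta=[0.2, 0.4, 0.4], num_iters=100)
```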
This is a special instance of a _random function iteration_ studied in [18; 19; 20]. Convergence of such an iteration is understood in the sense of distributions and is a consequence of two key properties: that the mapping \(T_{i}\) is almost \(\alpha\)_-firmly nonexpansive (abbreviated \(a\alpha\)-fne) in expectation_ ((29) and (32a)), and that the _invariant Markov transport discrepancy_ defined in (41) is _gauge metrically subregular_ (55) at invariant measures. The latter of these two properties has been shown in many settings to be necessary for quantitative convergence of the iterates [18; 26]. The first property, with the qualifier "almost" removed, is enough to guarantee that the sequence of measures is _asymptotically regular_ with respect to the _Wasserstein metric_. All this is formally defined below.
### 2.1 Markov chains, measure-valued mappings, and stochastic fixed point problems
The following assumptions hold throughout.
**Assumption 1**.:
1. \(\xi_{0},\xi_{1},\ldots,\xi_{k}\) _are i.i.d. random variables for all_ \(k\in\mathbb{N}\) _on a probability space with values on_ \(\mathbb{I}\)_. The variable_ \(X_{0}\) _is a random variable with values on_ \(\mathcal{E}\)_, independent of_ \(\xi_{k}\)_._
2. _The function_ \(\Phi:\,\mathcal{E}\times\mathbb{I}\to\mathcal{E}\,,\,(x,i)\mapsto T_{i}x\) _is measurable._
Let \((X_{k})_{k\in\mathbb{N}}\) be a sequence of random variables with values on \(G\subset\mathcal{E}\). Recall that a Markov chain with _transition kernel_ \(p\) satisfies
1. \(\mathbb{P}\left(X_{k+1}\in A\,|\,X_{0},X_{1},\ldots,X_{k}\right)=\mathbb{P} \left(X_{k+1}\in A\,|\,X_{k}\right)\);
2. \(\mathbb{P}\left(X_{k+1}\in A\,|\,X_{k}\right)=p(X_{k},A)\)
for all \(k\in\mathbb{N}\) and \(A\in\mathcal{B}(G)\) almost surely in probability, \(\mathbb{P}\)-a.s. In [19] it is shown that the sequence of random variables \((X_{k})\) generated by Algorithm 1 is a Markov chain with transition kernel \(p\) given by
\[(x\in G)(A\in\mathcal{B}(G))\qquad p(x,A):=\mathbb{P}(T_{\xi}x\in A) \tag{5}\]
for the measurable _update function_\(\Phi:\,G\times\mathbb{I}\to G\,\) given by \(\Phi(x,i):=T_{i}x\).
The Markov operator \(\mathcal{P}\) associated with this Markov chain is defined pointwise for a measurable function \(f:\,G\to\mathbb{R}\,\) via
\[(x\in G)\qquad\mathcal{P}f(x):=\int_{G}f(y)p(x,\mathrm{d}\,y),\]
when the integral exists. Note that
\[\mathcal{P}f(x)=\int_{\Omega}f(T_{\xi(\omega)}x)\mathbb{P}(\mathrm{d}\,\omega )=\int_{\mathbb{I}}f(T_{i}x)\mathbb{P}^{\xi}(\mathrm{d}\,i).\]
Let \(\mu\in\mathscr{P}(G)\). The dual Markov operator acting on a measure \(\mu\) is indicated by action on the right by \(\mathcal{P}\):
\[(A\in\mathcal{B}(G))\qquad(\mathcal{P}^{*}\mu)(A):=(\mu\mathcal{P})(A):=\int_ {G}p(x,A)\mu(\mathrm{d}\,x).\]
The distribution of the \(k\)'th iterate of the Markov chain generated by Algorithm 1 is therefore easily represented as follows: \(\mathcal{L}(X_{k})=\mu_{0}\mathcal{P}^{k}\), where \(\mathcal{L}(X)\) denotes the law of the random variable \(X\). Of course in general random variables do not converge, but distributions associated with the sequence of random variables \((X_{k})\) of Algorithm 1, if they converge to anything, do so to _invariant measures_ of the associated Markov operator. An invariant measure of the Markov operator \(\mathcal{P}\) is any distribution \(\pi\in\mathscr{P}\) that satisfies \(\pi\mathcal{P}=\pi\). The set of all invariant probability measures is denoted by \(\mathrm{inv}\,\mathcal{P}\). The underlying problem we seek to solve is to
\[\mathrm{Find}\qquad\pi\in\mathrm{inv}\,\mathcal{P}. \tag{6}\]
This is the _stochastic fixed point problem_ studied in [19, 20]. When the mappings \(T_{i}\) have common fixed points, the problem reduces to the _stochastic feasibility_ problem studied in [18].
Let \((\nu_{k})\) be a sequence of probability measures on \(G\subset\mathcal{E}\), and let \(C_{b}(G)\) denote the set of bounded and continuous functions from \(G\) to \(\mathbb{R}\). The sequence \((\nu_{k})\) is said to converge in distribution to \(\nu\) whenever \(\nu\in\mathscr{P}(G)\) and for all \(f\in C_{b}(G)\) it holds that \(\nu_{k}f\to\nu f\) as \(k\to\infty\), where \(\nu f:=\int f(x)\nu(\mathrm{d}\,x)\). In other words, a sequence of random variables \((X_{k})\) converges in distribution if their laws (\(\mathcal{L}(X_{k})\)) do. We use the _weighted Wasserstein metric_ for the space of measures. Let
\[\mathscr{P}_{2}(G)=\left\{\mu\in\mathscr{P}(G)\,\left|\,\exists\,x\in G\,:\, \int\|x-y\|_{\mathbf{p}}^{2}\mu(\mathrm{d}\,y)<\infty\,\right.\right\} \tag{7}\]
where \(\|\cdot\|_{\mathbf{p}}\) is the Euclidean norm weighted by \(\mathbf{p}\). This will be made explicit below. The Wasserstein 2-metric on \(\mathscr{P}_{2}(G)\), with respect to the weighted Euclidean norm \(\|\cdot\|_{\mathbf{p}}\) denoted \(d_{W_{2,\mathbf{p}}}\), is defined by
\[d_{W_{2,\mathbf{p}}}(\mu,\nu):=\left(\inf_{\gamma\in\mathcal{C}(\mu,\nu)}\int_ {G\times G}\|x-y\|_{\mathbf{p}}^{2}\gamma(dx,dy)\right)^{1/2} \tag{8}\]
where \(\mathcal{C}(\mu,\nu)\) is the set of couplings of \(\mu\) and \(\nu\):
\[\mathcal{C}(\mu,\nu):=\left\{\gamma\in\mathscr{P}(G\times G)\mid\gamma(A\times G )=\mu(A),\,\gamma(G\times A)=\nu(A)\quad\forall A\in\mathcal{B}(G)\,\right\}. \tag{9}\]
The principal mode of convergence in distribution that we use is convergence in distribution of the sequence \((\mathcal{L}(X_{k}))\) to a probability measure \(\pi\in\mathscr{P}(G)\), i.e. for any \(f\in C_{b}(G)\)
\[\mathcal{L}(X_{k})f=\mathbb{E}[f(X_{k})]\to\pi f,\qquad\text{as }k\to\infty.\]
This is a stronger form of convergence than convergence of Cesaro averages sometimes seen in the literature. Since we are working with the Wasserstein 2-metric, convergence in this metric also implies convergence of the second moments. For more background on the analysis of sequences of measures we refer interested readers to [4, 40, 41, 16, 42].
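Convergence in this metric can be monitored empirically along a run. The following sketch (an illustration only) uses the standard closed form for the Wasserstein 2-distance between two empirical measures on \(\mathbb{R}\) with equally many atoms, obtained by matching order statistics; it is meant as a diagnostic for scalar functionals of the iterates rather than an implementation of (8) in general dimension.

```
import numpy as np

def w2_empirical_1d(samples_x, samples_y):
    """Wasserstein 2-distance between two empirical measures on R with the
    same number of atoms: the optimal coupling matches sorted samples."""
    x = np.sort(np.asarray(samples_x, dtype=float))
    y = np.sort(np.asarray(samples_y, dtype=float))
    assert x.shape == y.shape
    return float(np.sqrt(np.mean((x - y) ** 2)))
```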
### 2.2 Stochastic blockwise splitting algorithms
The concrete targets of the analysis presented here are two fundamental templates for solving problems of the form (1), forward-backward splitting as formulated in [39] and Douglas-Rachford splitting; the latter has not been studied in this context.
Denote by \(\partial_{x_{j}}f:\,\mathcal{E}\rightrightarrows\mathcal{E}_{j}\,\) the partial limiting subdifferential of \(f\) with respect to the block \(x_{j}\in\mathcal{E}_{j}\):
\[\partial_{x_{j}}f(\overline{x}):=\left\{v\in\mathcal{E}_{j}\,\left|\,f(x) \geq f(\overline{x})+\left\langle v\bigoplus\{0\},\;x-\overline{x}\right\rangle +o\{\|x-\overline{x}\|\},x\in\mathcal{E}_{j}\bigoplus\{\overline{x}\}\,\right\}. \tag{10}\]
When \(f\) is continuously differentiable, then this coincides with the partial gradient \(\nabla_{x_{j}}f:\,\mathcal{E}\to\mathcal{E}_{j}\,\). The prox mapping of a function \(h:\,\mathcal{E}\rightrightarrows(-\infty,+\infty]\,\) is defined by
\[\operatorname{prox}_{h,\lambda}(x)\coloneqq\operatorname{argmin}_{y\in \mathcal{E}}\left\{h(y)+\frac{1}{2\lambda}\left\|y-x\right\|^{2}\right\}. \tag{11}\]
The prox mapping is nonempty and single-valued whenever \(h\) is proper, lsc and convex [30]. To allow for generalization to _nonconvex_ functions we use instead the _resolvent_\(J_{\partial h,\lambda}:\,\mathcal{E}\to\mathcal{E}\,\):
\[J_{\partial h,\lambda}(x)\coloneqq\left\{y\in\mathcal{E}\mid(\lambda\partial h +\operatorname{Id})\left(y\right)\ni x\right\}. \tag{12}\]
It is clear from this that, in general, \(\operatorname{prox}_{h,\lambda}(x)\subset J_{\partial h,\lambda}(x)\) for all \(x\).
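Two closed-form instances of (11), recorded here only as a concrete reference and not tied to any particular algorithm below: for \(h=|\cdot|\) on \(\mathbb{R}\) the prox is soft-thresholding, and for \(h=\iota_{\Omega}\) with \(\Omega\) closed and convex it is the projector \(P_{\Omega}\) (independently of \(\lambda\)).

```
import numpy as np

def prox_abs(x, lam):
    # prox_{|.|, lam}(x) = argmin_y |y| + (1/(2*lam)) * (y - x)^2  (soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_box(x, lo, hi):
    # prox of the indicator of the box [lo, hi]: the projection, for any lam > 0
    return np.clip(x, lo, hi)
```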
Note that \(g_{j}\) defined in (2) is just the extension by zero of \(h_{j}\) to a mapping on \(\mathcal{E}\). This yields
\[J_{\partial g_{j},\lambda_{j}}(x)=\left(\begin{array}{c}x_{1}\\ x_{2}\\ \vdots\\ x_{j-1}\\ J_{\partial h_{j},\lambda_{j}}(x_{j})\\ x_{j+1}\\ \vdots\\ x_{m}\end{array}\right)\quad\text{ and }\quad\left(J_{\partial g_{j},\lambda_{j}}- \operatorname{Id}\right)(x)=\left(J_{\partial h_{j},\lambda_{j}}(x_{j})-x_{j} \right)\bigoplus\{0\}. \tag{13}\]
Let \(\partial_{j}f:\,\mathcal{E}\to\mathcal{E}\,\) denote the canonical embedding of \(\partial_{x_{j}}f\) by zero into \(\mathcal{E}\):
\[\partial_{j}f(x):=\partial_{x_{j}}f(x)\bigoplus\{0\}. \tag{14}\]
The corresponding resolvent, \(J_{\partial_{j}f,\lambda}(x)\) is given by
\[J_{\partial_{j}f,\lambda}(x):=\left(\begin{array}{c}x_{1}\\ x_{2}\\ \vdots\\ x_{j-1}\\ J_{\partial f_{j}(\cdot;x),\lambda}(x_{j})\\ x_{j+1}\\ \vdots\\ x_{m}\end{array}\right) \tag{15}\]
where \(f_{j}(\cdot;x):\,\mathcal{E}_{j}\to\mathbb{R}\,,\) with \(x\in\mathcal{E}\) a _parameter_, denotes
\[f_{j}(y;x):=f\left(x+(y-x_{j})\bigoplus\{0\}\right)\] (16a) so that \[\partial f_{j}(y;x) =\partial_{x_{j}}f\left(x+(y-x_{j})\bigoplus\{0\}\right)\quad \text{and} \tag{16b}\] \[J_{\partial f_{j}(\cdot;x),\lambda}(x_{j}) =\left\{y\in\mathcal{E}_{j}\,\left|\,y+\lambda\,\partial_{x_{j}}f(x+(y-x_{ j})\bigoplus\{0\})\ni x_{j}\,\right.\right\}. \tag{16c}\]
We recognize that the resolvent of a function that is not fully separable is not considered _prox friendly_ from a computational standpoint, and this can only be evaluated numerically with some error. The framework presented here is well suited for algorithms with numerical error, and this is discussed at some length in [19, Section 4]. In the interest of keeping the presentation simple, we present results for exact evaluation of all the relevant operators; the incorporation of appropriate noise models for inexact computation builds on the structure introduced here and does not require any assumption of summable errors or increasing accuracy, though the noise model does require some careful consideration (see [19, Section 4.3]).
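One case in which (16c) is available in closed form, included here only as an illustrative aside, is a quadratic coupling \(f(x)=\tfrac{1}{2}\langle x,Ax\rangle+\langle b,x\rangle\) with \(A\) symmetric and \(A_{jj}\) the diagonal block acting on \(\mathcal{E}_{j}\) (assuming \(\operatorname{Id}+\lambda A_{jj}\) is invertible, as it is when \(A\) is positive semidefinite): the inclusion in (16c) reduces to a linear equation and

\[J_{\partial f_{j}(\cdot;x),\lambda}(x_{j})=x_{j}-\lambda\left(\operatorname{Id}+\lambda A_{jj}\right)^{-1}\left[\nabla f(x)\right]_{j},\]

so only solves with the diagonal blocks are required; for general nonseparable \(f\) no such formula is available and the resolvent must be approximated numerically, as discussed above.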
The abstract template is Algorithm 1 where the mappings \(T_{j}^{\prime}\) specialize to
\[T_{j}^{\prime}(x):=\frac{1}{2}\left(R_{\partial f_{j}(\cdot;y),t_{j}}R_{ \partial h_{j},t_{j}}(x_{j})+x_{j}\right),\quad y=R_{\partial g_{j},t_{j}}(x) \qquad\text{(blockwise Douglas-Rachford)}\]
where
\[R_{\partial h_{j},t_{j}}(x_{j})=2J_{\partial h_{j},t_{j}}(x_{j})-x_{j}\quad \text{and}\quad R_{\partial f_{j}(\cdot;x),t_{j}}(x_{j}):=2J_{\partial f_{j}( \cdot;x),t_{j}}(x_{j})-x_{j}.\]
or, when \(f\) is continuously differentiable,
\[T_{j}^{\prime}(x):=J_{\partial h_{j},t_{j}}\left(x_{j}-t_{j}\nabla_{x_{j}}f(x )\right)\qquad\text{(blockwise forward-backward)}.\]
Using the resolvent instead of the prox mapping, the blockwise forward-backward algorithm studied in [39] consists of iterations of randomly selected mappings \(T_{i}^{FB}:\,\mathcal{E}\to\mathcal{E}\,\):
\[T_{i}^{FB}:=\left(\operatorname{Id}+\sum_{j\in M_{i}}\left(J_{\partial g_{j}, t_{j}}\left(\operatorname{Id}-t_{j}\nabla_{j}f\right)-\operatorname{Id} \right)\right)\quad(i\in\mathbb{I}). \tag{17}\]
```
Initialization: Select a random variable \(X_{0}\) with distribution \(\mu\), \(t=(t_{1},t_{2},\ldots,t_{m})>0\), and \((\xi_{k})_{k\in\mathbb{N}}\) an i.i.d. sequence with values on \(\mathbb{I}\) and \(X_{0}\) and \((\xi_{k})\) independently distributed. for \(k=0,1,2,\ldots\) do \[X^{k+1}=T_{\xi_{k}}^{FB}(X^{k}):=\left(\operatorname{Id}+\sum_{j\in M_{\xi_{k}}} \left(J_{\partial g_{j},t_{j}}\left(\operatorname{Id}-t_{j}\nabla_{j}f\right) -\operatorname{Id}\right)\right)(X^{k}),\] (18a) or equivalently for \(j=1,2,\ldots,m\) do \[X_{j}^{k+1}=[T_{\xi_{k}}^{FB}(X^{k})]_{j}:=\begin{cases}J_{\partial h_{j},t_{j} }\left(X_{j}^{k}-t_{j}\nabla_{x_{j}}f(X^{k})\right)&\text{if }j\in M_{\xi_{k}}\\ X_{j}^{k}&\text{else }.\end{cases}\] (18b)
```
**Algorithm 2** Stochastic Blockwise Forward-Backward Splitting (S-BFBS)
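A minimal sketch of the update (18b), assuming for illustration that each block is a scalar, that \(h_{j}=|\cdot|\) (so the resolvent is soft-thresholding), and that the blocks are selected by independent Bernoulli draws with marginal probabilities \(p_{j}\); the smooth term \(f\), the step sizes, and these sampling choices are placeholders, not part of the algorithm specification.

```
import numpy as np

def soft_threshold(v, lam):
    # resolvent of lam * |.|, i.e. the prox of the absolute value
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def s_bfbs(x0, grad_f, t, p, num_iters, rng=None):
    """Stochastic blockwise forward-backward splitting (18b) with h_j = |.|.
    t may be a scalar step size or a vector of per-block step sizes t_j."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        mask = rng.random(x.size) < p               # blocks in M_{xi_k}
        if not mask.any():                          # keep the index set nonempty
            mask[rng.integers(x.size)] = True
        g = grad_f(x)                               # forward (gradient) data at x
        fb = soft_threshold(x - t * g, t)           # backward (prox) step, all blocks
        x = np.where(mask, fb, x)                   # update only the selected blocks
    return x

# Illustrative use with f(x) = 0.5 * ||x - c||^2, so grad_f(x) = x - c:
c = np.array([1.0, -2.0, 0.5])
x_last = s_bfbs(np.zeros(3), lambda x: x - c, t=0.5, p=0.5, num_iters=500)
```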
The blockwise Douglas-Rachford algorithm consists of iterations of randomly selected mappings \(T_{i}^{DR}:\,\mathcal{E}\to\mathcal{E}\,\):
\[T_{i}^{DR}:=\frac{1}{2}\left(\sum_{j\in M_{i}}\left(R_{\partial_{j}f,t_{j}}R_{ \partial g_{j},t_{j}}-\operatorname{Id}\right)+2\operatorname{Id}\right)\quad( i\in\mathbb{I}). \tag{19}\]
In addition to its own merits, in the convex setting the Douglas-Rachford algorithm has the interpretation
as the ADMM algorithm [15] applied to the "pre-primal" problem to (1) [11, 14]:
\[\operatorname*{minimize}_{x\in\mathbb{R}^{n}}\,p(x)+q(Ax)\qquad\text{ where }\quad p^{*}(-A^{T}x)=f(x)\quad\text{and}\quad g(x)=q^{*}(x). \tag{21}\]
The stochastic blockwise Douglas-Rachford Algorithm 3 therefore can be understood as a stochastic blockwise ADMM algorithm for solving (21). The discussion above about the separability of \(f\) is yet another way of understanding the observed computational difficulty of implementing this algorithm; it is quite unlikely that \(f\) given by (21) will be separable in the standard basis and therefore the resolvent (16c) will have to be computed numerically. Alternative primal-dual methods that circumvent this are the topic of future research.
Before we begin, however, it will be helpful to give an example delineating consistent from inconsistent feasibility.
**Example 1** (consistent/inconsistent stochastic feasibility problems).: _Examples for partially separable optimization and blockwise algorithms abound, particularly in machine learning, but seldom is the distinction made between consistent and inconsistent problems. This is illustrated here for the problem of set feasibility, or, when feasible points don't exist, best approximation. Consider the problem_
\[\mathsf{Find}\,\,\overline{x}\in\cap_{j=1}^{m}\Omega_{j}\]
_where \(\Omega_{j}\subset\mathbb{R}^{n}\) are closed sets. This can be recast as the following optimization problem on the product space \((\mathbb{R}^{n})^{m}\):_
\[\text{minimize}\,\,f(x)+\sum_{j=1}^{m}\iota_{\Omega_{j}}(x_{j}) \tag{22}\]
_where \(x_{j}\in\mathbb{R}^{n}\),_
\[\iota_{\Omega}(x)=\begin{cases}0&\text{ when }x\in\Omega\\ +\infty&\text{ else,}\end{cases}\]
_and \(f\) is some reasonable coupling function that promotes similarity between the blocks \(x_{j}\). In the context of problem (1) \(h_{j}=\iota_{\Omega_{j}}\). Common instances of the coupling function are \(f(x)=\frac{1}{2}d(x,D)^{2}\) for \(D:=\{x=(x_{1},x_{2},\ldots,x_{m})\mid x_{i}=x_{j}\,\,\forall i\neq j\}\) or the more strict indicator function \(f(x)=\iota_{D}(x)\). The prox operators associated with the indicator functions are just projectors, while the gradient of the function \(f\) in the smooth case can be constructed from the projection onto \(D\) (just the averaging operator)._
_When \(\cap_{j=1}^{m}\Omega_{j}\neq\emptyset\), the solutions to the feasibility problem and problem (22) coincide for both instances of \(f\). In this case the blockwise operators \(T_{i}\) in Algorithms 2 and 3 have common fixed points, which are (perhaps not exclusively) points where the sets intersect, and so, when all goes well, fixed points of these algorithms coincide with points in \(\cap_{j=1}^{m}\Omega_{j}\); at the very least fixed points of the algorithms coincide with critical points. Viewed as random function iterations, the iterates of Algorithms 2 and 3 in this consistent case are random variables whose distributions converge to delta functions with support in \(\cap_{j=1}^{m}\Omega_{j}\) and the algorithms converge to solutions of a stochastic feasibility problem studied in [18]:_
\[\mathsf{Find}\,\,\overline{x}\in\{x\mid\mathbb{P}(x\in\mathsf{Fix}\,\,T_{ \xi})=1\,\}\,.\]
_If the intersection is empty, as will often be the case in practice regardless of noise considerations, then it is easy to see that the blockwise operators in Algorithms 2 and 3 do not have common fixed points when \(f(x)=\iota_{D}(x)\). The random algorithms do not have fixed points in this case, but viewed as random function iterations, the distributions of the iterates converge to invariant measures of the Markov operator corresponding to either Algorithm 2 or 3. These algorithms therefore find solutions to the more general stochastic fixed point problem (6) studied in [19]. How to interpret such invariant measures is an open issue in general. For this example, in the case of just two convex sets with empty intersection, the invariant probability measures will consist of equally weighted pairs of delta functions centered at best approximation pairs between the sets._
_The numerical behavior of deterministic versions of Algorithms 2 and 3, and many others, has been thoroughly studied for the broad class of cone and sphere problems, which includes sensor localization, phase retrieval, and computed tomography [25]. For the example of set feasibility presented here, convergence depends on the regularity properties of the projectors onto the respective sets, which as shown in [27] is derived from the regularity of the sets. The main contribution of this article is to show that randomization can lead to Markov operators with better regularity than that of the individual operators generating its transition kernel._
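The two-set remark above is easy to reproduce numerically. The sketch below is a deliberate simplification: instead of the product-space formulation (22) it iterates random projections onto two intervals of the real line, which is enough to see the dichotomy. When the intervals intersect the iterates settle in the intersection; when they are disjoint the empirical law of the iterates concentrates, after a burn-in, on the two nearest endpoints, i.e. equal weights on a best approximation pair.

```
import numpy as np

def random_projections(a, b, x0, num_iters, rng=None):
    """Iterate X^{k+1} = P_{Omega_{xi_k}}(X^k) with Omega_1 = [a0, a1],
    Omega_2 = [b0, b1] and xi_k uniform on {1, 2}; returns the trajectory."""
    rng = np.random.default_rng(0) if rng is None else rng
    projections = [lambda x: np.clip(x, *a), lambda x: np.clip(x, *b)]
    trajectory, x = [], float(x0)
    for _ in range(num_iters):
        x = projections[rng.integers(2)](x)
        trajectory.append(x)
    return np.array(trajectory)

# Consistent case: [0, 2] and [1, 3]; the iterates end up in the intersection [1, 2].
traj_consistent = random_projections((0.0, 2.0), (1.0, 3.0), x0=5.0, num_iters=200)
# Inconsistent case: [0, 1] and [2, 3]; after a burn-in the iterates take only the
# values 1 and 2, so the empirical law approaches (delta_1 + delta_2) / 2.
traj_inconsistent = random_projections((0.0, 1.0), (2.0, 3.0), x0=5.0, num_iters=200)
```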
## 3 Regularity
Our main results concern convergence of Markov chains under regularity assumptions that are lifted from the generating mappings \(T_{i}\). In [27] a framework was developed for a quantitative convergence analysis of set-valued mappings \(T_{i}\) that are one-sided Lipschitz continuous in the sense of set-valued-mappings with Lipschitz constant slightly greater than \(1\). We begin with the regularity of \(T_{i}\) and follow this through to the regularity of the resulting Markov operator.
### 3.1 Almost \(\alpha\)-firmly nonexpansive mappings
Let \(G\subset\mathcal{E}\) and let \(F:\,G\rightrightarrows\mathcal{E}\,.\) The mapping \(F\) is said to be _pointwise almost nonexpansive at \(x_{0}\in G\) on \(G\)_ whenever
\[\exists\epsilon\in[0,1):\quad\|x^{+}-x_{0}^{+}\|\leq\sqrt{1+\epsilon}\,\|x-x_{ 0}\|,\qquad\forall x\in G,\forall x^{+}\in Fx,x_{0}^{+}\in Fx_{0}. \tag{23}\]
The _violation_ is a value of \(\epsilon\) for which (23) holds. When the above inequality holds for all \(x_{0}\in G\) then \(F\) is said to be _almost nonexpansive on \(G\)_. When \(\epsilon=0\) the mapping \(F\) is said to be _(pointwise) nonexpansive_. The mapping \(F\) is said to be _pointwise almost \(\alpha\)-firmly nonexpansive at \(x_{0}\in G\) on \(G\)_, abbreviated _pointwise \(\alpha\)-fne_ whenever
\[\exists\epsilon\in[0,1)\text{ and }\alpha\in(0,1):\] \[\|x^{+}-x_{0}^{+}\|^{2}\leq(1+\epsilon)\|x-x_{0}\|^{2}-\tfrac{1- \alpha}{\alpha}\psi(x,x_{0},x^{+},x_{0}^{+}) \tag{24}\] \[\forall x\in G,\forall x^{+}\in Fx,\forall x_{0}^{+}\in Fx_{0},\]
where the _transport discrepancy_\(\psi\) of \(F\) at \(x,x_{0}\), \(x^{+}\in Fx\) and \(x_{0}^{+}\in Fx_{0}\) is defined by
\[\psi(x,x_{0},x^{+},x_{0}^{+}):=\] \[\|x^{+}-x\|^{2}+\|x_{0}^{+}-x_{0}\|^{2}+\|x^{+}-x_{0}^{+}\|^{2}+ \|x-x_{0}\|^{2}-\|x^{+}-x_{0}\|^{2}-\|x-x_{0}^{+}\|^{2}. \tag{25}\]
When the above inequality holds for all \(x_{0}\in G\) then \(F\) is said to be _a\(\alpha\)-fne on \(G\)_. The _violation_ is a value of \(\epsilon\) for which (24) holds. When \(\epsilon=0\) the mapping \(F\) is said to be _(pointwise) \(\alpha\)-firmly nonexpansive_, abbreviated _(pointwise) \(\alpha\)-fne_.
The transport discrepancy \(\psi\) is a central object for characterizing the regularity of mappings in metric spaces and ties the regularity of the mapping to the geometry of the space. A short calculation shows that, in a Euclidean space, this has the representation
\[\psi(x,x_{0},x^{+},x_{0}^{+})=\|(x-x^{+})-(x_{0}-x_{0}^{+})\|^{2}. \tag{26}\]
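The short calculation behind (26) is worth recording: expanding the squared norms in (25) and cancelling,

\[\|x^{+}-x_{0}^{+}\|^{2}+\|x-x_{0}\|^{2}-\|x^{+}-x_{0}\|^{2}-\|x-x_{0}^{+}\|^{2}=-2\left\langle x^{+}-x,\;x_{0}^{+}-x_{0}\right\rangle,\]

so that (25) equals \(\|x^{+}-x\|^{2}+\|x_{0}^{+}-x_{0}\|^{2}-2\langle x^{+}-x,\;x_{0}^{+}-x_{0}\rangle=\|(x^{+}-x)-(x_{0}^{+}-x_{0})\|^{2}\), which is (26).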
The definition of pointwise \(\mathrm{a}\alpha\)-fne mappings in Euclidean spaces appeared first in [27]. This generalizes the notion of _averaged_ mappings dating back to Mann, Krasnoselskii, and others [29, 2, 9, 12, 22].
A partial blockwise mapping \(T_{i}\) that is \(\alpha\)-fne on an affine subspace \(\mathcal{E}_{M_{i}}\bigoplus\{z\}\) may not be \(\alpha\)-fne on \(\mathcal{E}\), as the next example from [21, Remark 3.9] shows.
**Example 2**.: _Let \(\mathcal{E}=\mathbb{R}^{2}\) and define \(f(x_{1},x_{2})=(x_{1}+x_{2})^{2}\), \(g_{1}(x_{1},x_{2})=h_{1}(x_{1})=0\) and \(g_{2}(x_{1},x_{2})=h_{2}(x_{2})=x_{2}^{2}\). Here \(f\) is convex and differentiable with global gradient Lipschitz constant \(L=4\) and the functions \(g_{j}\) are clearly convex. The proximal gradient algorithm applied to the function \(F=f+\sum_{j=1}^{2}g_{j}\) is \(x^{k+1}=T^{FB}(x^{k})=\mathrm{prox}_{g}(\mathrm{Id}\,-t\nabla f)(x^{k})\). For all \(t\in(0,1/2)\) it can be shown that the fixed point mapping \(T^{FB}\) is \(\alpha\)-firmly nonexpansive with the unique fixed point \((0,0)\), the global minimum of the objective function \(F\). Hence from any initial point \(x^{0}\) this iteration converges to the global minimum \((0,0)\). A blockwise implementation of this algorithm would involve computing the proximal gradient step with respect to \(x_{1}\), leaving \(x_{2}\) fixed; that is at some iterations \(k\) one computes_
\[x^{k+1}=T^{FB}_{1}(x^{k}):=\mathrm{prox}_{g_{1}}((\mathrm{Id}\,-t\nabla_{x_{1} ^{k}}f)(x_{1}^{k},x_{2}^{k})=((1-2t)x_{1}^{k}-2tx_{2}^{k},x_{2}^{k}). \tag{27}\]
_A straightforward calculation shows that the blockwise mapping \(T^{FB}_{1}\) is not \(\alpha\)-fne on \(\mathbb{R}^{2}\) for any \(t>0\), although it is \(\alpha\)-fne on \(\mathbb{R}\times\{z\}\) for any \(z\in\mathbb{R}\) whenever \(t\in(0,1/2)\). Being \(\alpha\)-fne on \(\mathbb{R}\times\{z\}\) for any \(z\in\mathbb{R}\) is not much help, however, since this means that repeated application of \(T^{FB}_{1}\) defined by (27) converges to the minimum of \(F\) restricted to the affine subspace \(\mathbb{R}\times\{z\}\), namely \((-z,z)\)._
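The behavior described in Example 2 can be checked numerically with a few lines; the sample-based test of inequality (24) below is only a falsification device (it exhibits violating pairs for the particular constant \(\alpha=1/2\)), not a proof, and the step size and random seed are arbitrary.

```
import numpy as np

t = 0.25  # any step size in (0, 1/2)

def T1(x):
    # single-block forward-backward step (27): only the first block is updated
    return np.array([(1.0 - 2.0 * t) * x[0] - 2.0 * t * x[1], x[1]])

# Iterating T1 from (3, 1) converges to (-1, 1) = (-z, z) with z = 1,
# the minimizer of F restricted to R x {1}, not the global minimizer (0, 0).
x = np.array([3.0, 1.0])
for _ in range(200):
    x = T1(x)
print(x)

# In fact T1 is not even nonexpansive: with e2 = (0, 1),
e2 = np.array([0.0, 1.0])
print(np.linalg.norm(T1(e2) - T1(0 * e2)) > np.linalg.norm(e2))  # True

# Sample-based check of the alpha-fne inequality (24) with violation epsilon = 0:
rng = np.random.default_rng(0)
alpha = 0.5
gaps = []
for _ in range(1000):
    u, v = rng.normal(size=2), rng.normal(size=2)
    Tu, Tv = T1(u), T1(v)
    lhs = np.sum((Tu - Tv) ** 2)
    psi = np.sum(((u - Tu) - (v - Tv)) ** 2)
    gaps.append(lhs + (1 - alpha) / alpha * psi - np.sum((u - v) ** 2))
print(max(gaps) > 0)  # True: the inequality fails for some pairs in R^2
```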
In light of the above counterexample, Theorem 1 below shows how randomization in the blockwise forward-backward algorithm restores the \(\alpha\)-fne property _in expectation_[20, Definition 3.6]. This is the fixed point analog to descents in expectation introduced in [39].
In the stochastic setting we consider only _single-valued_ mappings \(T_{i}\) that are \(\mathrm{a}\alpha\)-fne in expectation. We can therefore write \(x^{+}=T_{i}x\) instead of always taking some selection \(x^{+}\in T_{i}x\) (which then raises issues of measurability and so forth). On a closed subset \(G\subset\mathcal{E}\), for general self-mappings \(T_{i}:\,G\to G\,\) for \(i\in\mathbb{I}\), the mapping \(\varPhi:\,G\times\mathbb{I}\to G\,\) given by \(\varPhi(x,i)=T_{i}x\) is said to be _pointwise almost nonexpansive in expectation at \(x_{0}\in G\)_ on \(G\), abbreviated _pointwise almost nonexpansive in expectation_, whenever
\[\exists\epsilon\in[0,1):\quad\mathbb{E}\left[\left\|\varPhi(x,\xi)-\varPhi(x_ {0},\xi)\right\|\right]\leq\sqrt{1+\epsilon}\left\|x-x_{0}\right\|,\qquad \forall x\in G. \tag{28}\]
When the above inequality holds for all \(x_{0}\in G\) then \(\varPhi\) is said to be _almost nonexpansive in expectation on \(G\)_. As before, the violation is a value of \(\epsilon\) for which (28) holds. When the violation is \(0\), the qualifier "almost" is dropped. The mapping \(\varPhi\) is said to be _pointwise almost \(\alpha\)-firmly nonexpansive in expectation at \(x_{0}\in G\)_ on \(G\), abbreviated _pointwise \(\mathrm{a}\alpha\)-fne in expectation_, whenever
\[\exists\epsilon\in[0,1),\alpha\in(0,1):\quad\forall x\in G, \tag{29}\] \[\mathbb{E}\left[\left\|\varPhi(x,\xi)-\varPhi(x_{0},\xi)\right\| ^{2}\right]\leq(1+\epsilon)\|x-x_{0}\|^{2}-\tfrac{1-\alpha}{\alpha}\mathbb{E} \left[\psi(x,x_{0},\varPhi(x,\xi),\varPhi(x_{0},\xi))\right].\]
When the above inequality holds for all \(x_{0}\in G\) then \(\varPhi\) is said to be _almost \(\alpha\)-firmly nonexpansive (\(\mathrm{a}\alpha\)-fne) in expectation on \(G\)_. The violation is a value of \(\epsilon\) for which (29) holds. When the violation is \(0\), the qualifier "almost" is dropped and the abbreviation \(\alpha\)_-fne in expectation_ is used. The defining inequalities (28) and (29) will be amended below in (32a) to account for _weighted norms_.
The next result, derived from [21, Proposition 5.5] shows in particular that any collection of self-mappings \(\{T_{i}\}_{i\in\mathbb{I}}\) on \(G\subset\mathcal{E}\) that is \(\mathrm{a}\alpha\)-fne on \(G_{M_{i}}\bigoplus\{z\}\) is \(\mathrm{a}\alpha\)-fne in expectation with respect to a weighted norm on \(G\). In particular, denote by \(\eta_{i}\) the probability of selecting the \(i\)'th collection of blocks, \(M_{i}\), and let \(p_{j}\) denote the probability that the \(j\)'th block is among the randomly selected collection of blocks:
\[0<p_{j}=\sum_{i\in\mathbb{I}}\eta_{i}\cdot\chi_{M_{i}}(j)\leq 1\quad\text{ where }\quad\chi_{M_{i}}(j)=\begin{cases}1&\text{ if }j\in M_{i}\\ 0&\text{ else }\end{cases}\quad(j=1,2,\ldots,m). \tag{30}\]
Define the corresponding weighted norm
\[\left\|z\right\|_{\mathbf{p}}:=\left(\sum_{j=1}^{m}\tfrac{1}{p_{j}}\|z_{j}\| _{\mathcal{E}_{j}}^{2}\right)^{1/2}. \tag{31}\]
**Theorem 1** (almost \(\alpha\)-firmly nonexpansive in expectation (a\(\alpha\)-fne in expectation)).: _Let the single-valued self-mappings \(\{T_{i}\}_{i\in\mathbb{I}}\) on the subset \(G\subset\mathcal{E}\) satisfy_
1. _for each_ \(i\)_,_ \(T_{i}\) _is the identity mapping on_ \(\mathcal{E}_{M_{i}^{\circ}}\)_;_
2. \(T_{1}\) _is a_\(\alpha\)_-fne on_ \(G\) _with constant_ \(\overline{\alpha}\) _and violation no greater than_ \(\overline{\epsilon}\) _where_ \(M_{1}:=\{1,2,\ldots,m\}\)_._
_Then_
1. _for all_ \(i\) _and each_ \(z\in G\)_,_ \(T_{i}\) _is a_\(\alpha\)_-fne on_ \(G_{M_{i}}\bigoplus\{z\}_{M_{i}^{\circ}}\) _with constant at most_ \(\overline{\alpha}\) _and violation no greater than_ \(\overline{\epsilon}\)_;_
2. _the mapping_ \(\varPhi:\,G\times\mathbb{I}\to G\,\) _given by_ \(\varPhi(x,i)=T_{i}x\) _satisfies_ \[\mathbb{E}\left[\left\|\varPhi(x,\xi)-\varPhi(y,\xi)\right\|_{ \mathbf{p}}^{2}\right]\leq(1+\overline{p\epsilon})\|x-y\|_{\mathbf{p}}^{2}- \tfrac{1-\overline{\alpha}}{\overline{\alpha}}\mathbb{E}\left[\psi_{\mathbf{p} }(x,y,\varPhi(x,\xi),\varPhi(y,\xi))\right]\quad\forall x,y\in G\] (32a) _where_ \[\psi_{\mathbf{p}}(x,y,\varPhi(x,i),\varPhi(y,i)):=\|\left(x-\varPhi(x,i)\right) -\left(y-\varPhi(y,i)\right)\|_{\mathbf{p}}^{2}\quad\text{and}\quad\overline{ p}:=\max_{j}\{p_{j}\}.\] (32b)
A mapping \(\varPhi:\,G\times\mathbb{I}\to G\,\) that satisfies (32a) is called _a\(\alpha\)-fne in expectation_ with respect to the weighted norm \(\|\cdot\|_{\mathbf{p}}\) with constant \(\overline{\alpha}\) and violation no greater than \(\overline{p\epsilon}\).
_Proof_. The proof of part (i) follows immediately from the observation that \(T_{i}\) on \(G_{M_{i}}\bigoplus\{z\}_{M_{i}^{\circ}}\) is equivalent to \(T_{1}\) restricted to the same subset.
To see part (ii), fix any \(x,y\in G\), and let \(T_{j}^{\prime}:\,G_{j}\to G_{j}\,\)\((j=1,2,\ldots,m)\) be the \(j\)'th block mapping for \(j\in M_{i}\). Hence, \(T_{i}(x)=P_{\mathcal{E}_{M_{i}}}T_{i}(x)+P_{\mathcal{E}_{M_{i}^{\circ}}}(x)\) and \(T_{1}(x)=\bigoplus_{j=1}^{m}T_{j}^{\prime}(x)\) where \(P_{\mathcal{E}_{M_{i}}}:\,\mathcal{E}\to\mathcal{E}\,\) is the orthogonal projection onto the subspace \(\mathcal{E}_{M_{i}}\) and likewise for \(P_{\mathcal{E}_{M_{i}^{\circ}}}\). We begin with the left hand side of the defining inequality:
\[\mathbb{E}\left[\left\|\varPhi(x,\xi)-\varPhi(y,\xi)\right\|_{ \mathbf{p}}^{2}\right] =\sum_{i\in\mathbb{I}}\eta_{i}\left\|T_{i}(x)-T_{i}(y)\right\|_{ \mathbf{p}}^{2}\] \[=\sum_{i\in\mathbb{I}}\eta_{i}\left\|\left(P_{\mathcal{E}_{M_{i} }}T_{i}(x)+P_{\mathcal{E}_{M_{i}^{\circ}}}(x)\right)-\left(P_{\mathcal{E}_{M_{ i}}}T_{i}(y)+P_{\mathcal{E}_{M_{i}^{\circ}}}(y)\right)\right\|_{\mathbf{p}}^{2}\] \[=\sum_{i\in\mathbb{I}}\eta_{i}\left(\left\|P_{\mathcal{E}_{M_{i} }}T_{i}(x)-P_{\mathcal{E}_{M_{i}}}T_{i}(y)\right\|_{\mathbf{p}}^{2}+\left\|P_{ \mathcal{E}_{M_{i}^{\circ}}}(x-y)\right\|_{\mathbf{p}}^{2}\right)\] \[=\sum_{i\in\mathbb{I}}\eta_{i}\left(\sum_{j\in M_{i}}\tfrac{1}{p_ {j}}\left\|T_{j}^{\prime}(x)-T_{j}^{\prime}(y)\right\|_{\mathcal{E}_{j}}^{2}+ \sum_{k\in M_{i}^{\circ}}\tfrac{1}{p_{k}}\left\|x_{k}-y_{k}\right\|_{\mathcal{ E}_{k}}^{2}\right). \tag{33}\]
Then (33) rearranges to
\[\mathbb{E}\left[\left\|\varPhi(x,\xi)-\varPhi(y,\xi)\right\|_{\mathbf{p}}^{2}\right] =\sum_{i\in\mathbb{I}}\eta_{i}\left(\sum_{j\in M_{i}}\tfrac{1}{p_{j}}\left\|T_{j}^{\prime}(x)-T_{j}^{\prime}(y)\right\|_{\mathcal{E}_{j}}^{2}+\sum_{k\in M_{i}^{\circ}}\tfrac{1}{p_{k}}\left\|x_{k}-y_{k}\right\|_{\mathcal{E}_{k}}^{2}\right)\] \[=\sum_{j=1}^{m}\left(p_{j}\tfrac{1}{p_{j}}\left\|T_{j}^{\prime}(x)-T_{j}^{\prime}(y)\right\|_{\mathcal{E}_{j}}^{2}+(1-p_{j})\tfrac{1}{p_{j}}\left\|x_{j}-y_{j}\right\|_{\mathcal{E}_{j}}^{2}\right)\] \[=\left\|T_{1}(x)-T_{1}(y)\right\|^{2}-\left\|x-y\right\|^{2}+\left\|x-y\right\|_{\mathbf{p}}^{2}. \tag{34}\]
We simplify the expectation of the weighted transport discrepancy (32b) next.
\[\mathbb{E}\left[\psi_{\mathbf{p}}(x,y,\varPhi(x,\xi),\varPhi(y,\xi))\right] =\sum_{i\in\mathbb{I}}\eta_{i}\left(\left\|(x-T_{i}(x))-(y-T_{i}(y))\right\|_{\mathbf{p}}^{2}\right)\] \[=\sum_{j=1}^{m}p_{j}\tfrac{1}{p_{j}}\left\|\left(x_{j}-T_{j}^{\prime}(x)\right)-\left(y_{j}-T_{j}^{\prime}(y)\right)\right\|_{\mathcal{E}_{j}}^{2}\] \[=\left\|(x-T_{1}(x))-(y-T_{1}(y))\right\|^{2}. \tag{35}\]
Combining (34) with \(\frac{1-\overline{\alpha}}{\overline{\alpha}}\) times (35) yields
\[\mathbb{E}\left[\left\|\varPhi(x,\xi)-\varPhi(y,\xi)\right\|_{ \mathbf{p}}^{2}\right]+\tfrac{1-\overline{\alpha}}{\overline{\alpha}}\mathbb{E }\left[\psi_{\mathbf{p}}(x,y,\varPhi(x,\xi),\varPhi(y,\xi))\right]\] \[\quad=\left\|T_{1}(x)-T_{1}(y)\right\|^{2}-\left\|x-y\right\|^{2}+ \left\|x-y\right\|_{\mathbf{p}}^{2}+\tfrac{1-\overline{\alpha}}{\overline{ \alpha}}\left\|(x-T_{1}(x))-(y-T_{1}(y))\right\|^{2}. \tag{36}\]
Now by assumption (b), \(T_{1}\) is a\(\alpha\)-fne with constant \(\overline{\alpha}\) and violation no greater than \(\overline{\epsilon}\) on \(G\). Therefore (36) is bounded by
\[\mathbb{E}\left[\left\|\varPhi(x,\xi)-\varPhi(y,\xi)\right\|_{ \mathbf{p}}^{2}\right]+\tfrac{1-\overline{\alpha}}{\overline{\alpha}}\mathbb{ E}\left[\psi_{\mathbf{p}}(x,y,\varPhi(x,\xi),\varPhi(y,\xi))\right] \leq\overline{\epsilon}\left\|x-y\right\|^{2}+\left\|x-y\right\|_{ \mathbf{p}}^{2}\] \[\leq\left(1+\overline{p\epsilon}\right)\left\|x-y\right\|_{ \mathbf{p}}^{2} \tag{37}\]
for all \(x,y\in G\) as claimed.
Following [20], we lift these notions to the analogous regularity of Markov operators on the space of probability measures. Let \(\mathcal{P}\) be the Markov operator with transition kernel
\[(x\in G\subset\mathcal{E})(A\in\mathcal{B}(G))\qquad p(x,A):=\mathbb{P}( \varPhi(x,\xi)\in A)\]
where \(\xi\) is an \(\mathbb{I}\)-valued random variable and \(\varPhi:\,G\times\mathbb{I}\to G\,\) is a measurable update function. The Markov operator is said to be _pointwise almost nonexpansive in measure at \(\mu_{0}\in\mathscr{P}(G)\)_ on \(\mathscr{P}(G)\), abbreviated _pointwise almost nonexpansive in measure_, whenever
\[\exists\epsilon\in[0,1):\quad d_{W_{2,\mathbf{p}}}(\mu\mathcal{P},\mu_{0} \mathcal{P})\leq\sqrt{1+\epsilon}\,d_{W_{2,\mathbf{p}}}(\mu,\mu_{0}),\qquad \forall\mu\in\mathscr{P}(G). \tag{38}\]
When the above inequality holds for all \(\mu_{0}\in\mathscr{P}(G)\) then \(\mathcal{P}\) is said to be _almost nonexpansive in measure on \(\mathscr{P}(G)\)_. As before, the violation is a value of \(\epsilon\) for which (38) holds. When the violation is \(0\), the qualifier "almost" is dropped. Let \(\mathcal{C}_{*}(\mu_{1},\mu_{2})\) denote the set of couplings where the distance \(d_{W_{2,\mathbf{p}}}(\mu_{1},\mu_{2})\) is attained (i.e. the optimal couplings between \(\mu_{1}\) and \(\mu_{2}\)). The Markov operator \(\mathcal{P}\) is said to be _pointwise almost \(\alpha\)-firmly nonexpansive in measure at \(\mu_{0}\in\mathscr{P}(G)\)_ on \(\mathscr{P}(G)\), abbreviated _pointwise a\(\alpha\)-fne in measure_, whenever
\[\exists\epsilon\in[0,1),\alpha\in(0,1):\qquad\forall\mu\in \mathscr{P}(G),\forall\gamma\in C_{*}(\mu,\mu_{0})\] \[d_{W_{2,\mathbf{p}}}(\mu\mathcal{P},\mu_{0}\mathcal{P})^{2} \leq(1+\epsilon)d_{W_{2,\mathbf{p}}}(\mu,\mu_{0})^{2}-\] \[\tfrac{1-\alpha}{\alpha}\int_{G\times G}\mathbb{E}\left[\psi_{ \mathbf{p}}(x,y,\varPhi(x,\xi),\varPhi(y,\xi))\right]\gamma(dx,dy). \tag{39}\]
When the above inequality holds for all \(\mu_{0}\in\mathscr{P}(G)\) then \(\mathcal{P}\) is said to be _\(a\alpha\)-fne in measure on \(\mathscr{P}(G)\)_. The violation is a value of \(\epsilon\) for which (39) holds. When the violation is \(0\), the qualifier "almost" is dropped and the abbreviation \(\alpha\)-_fne in measure_ is employed. The notions above were defined in [20, Definition 2.8] on more general metric spaces.
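The transition kernel above is straightforward to simulate: drawing \(i\sim\xi\) and applying \(T_{i}\) realizes one step of the chain, and the push-forward of an empirical sample approximates \(\mu\mathcal{P}\). The following minimal sketch illustrates only the mechanics; the averaging maps, anchors, and weights are placeholder assumptions, not mappings from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder mappings T_i (illustrative only): T_i averages the current
# point with a fixed anchor a_i, i.e. T_i(x) = (x + a_i) / 2.  These are
# firmly nonexpansive but do not share a common fixed point.
anchors = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
eta = [0.5, 0.5]                     # distribution of the random index xi

def Phi(x, i):
    """Update function Phi(x, i) = T_i x."""
    return 0.5 * (x + anchors[i])

def markov_step(sample):
    """One application of the Markov operator to an empirical sample of mu:
    draw i ~ xi independently for each particle and apply T_i, which
    approximates the push-forward measure mu P."""
    idx = rng.choice(len(eta), size=len(sample), p=eta)
    return np.array([Phi(x, i) for x, i in zip(sample, idx)])

# Sample from an initial measure mu_0 and iterate the chain.
sample = rng.normal(size=(1000, 2))
for _ in range(50):
    sample = markov_step(sample)
print("empirical mean after 50 steps:", sample.mean(axis=0))
```

Since these placeholder \(T_{i}\) have no common fixed point, the chain equilibrates to a non-degenerate invariant measure, which illustrates that invariant measures need not be supported on fixed points of any individual \(T_{i}\).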
**Proposition 2** (Proposition 2.10, [20]).: _Let \(G\subset\mathcal{E}\), let \(\varPhi:\,G\times\mathbb{I}\to G\,\) be given by \(\varPhi(x,i)=T_{i}x\) and let \(\psi_{\mathbf{p}}\) be defined by (32b). Denote by \(\mathcal{P}\) the Markov operator with update function \(\varPhi\) and transition kernel \(p\) defined by (5). If \(\,\,\varPhi\) is \(a\alpha\)-fne in expectation on \(G\) with constant \(\alpha\in(0,1)\) and violation \(\epsilon\in[0,1)\), then the Markov operator \(\mathcal{P}\) is \(a\alpha\)-fne in measure on \(\mathscr{P}_{2}(G)\) with constant \(\alpha\) and violation at most \(\epsilon\), that is, \(\mathcal{P}\) satisfies_
\[d_{W_{2,\mathbf{p}}}^{2}(\mu_{1}\mathcal{P},\mu_{2}\mathcal{P})\leq(1+\epsilon)d_{W_{2,\mathbf{p}}}^{2}(\mu_{1},\mu_{2})-\tfrac{1-\alpha}{\alpha}\int_{G\times G}\mathbb{E}\left[\psi_{\mathbf{p}}(x,y,\varPhi(x,\xi),\varPhi(y,\xi))\right]\ \gamma(dx,dy) \tag{40}\]

_for all \(\mu_{1},\mu_{2}\in\mathscr{P}_{2}(G)\) and all \(\gamma\in\mathcal{C}_{*}(\mu_{1},\mu_{2})\)._

**Theorem 3** (Markov operators of blockwise a\(\alpha\)-fne mappings).: _Let the single-valued self-mappings \(\{T_{i}\}_{i\in\mathbb{I}}\) on \(G\subset\mathcal{E}\) satisfy_

1. _for each_ \(i\)_,_ \(T_{i}\) _is the identity mapping on_ \(\mathcal{E}_{M_{i}^{\circ}}\)_;_
2. \(T_{1}\) _is a_\(\alpha\)_-fne on_ \(G\) _with constant_ \(\overline{\alpha}\) _and violation no greater than_ \(\overline{\epsilon}\) _where_ \(M_{1}:=\{1,2,\ldots,m\}\)_._

_Then the Markov operator \(\mathcal{P}\) with update function \(\varPhi(x,i)=T_{i}x\) and transition kernel \(p\) defined by (5) is a\(\alpha\)-fne in measure on \(\mathscr{P}_{2}(G)\) with constant \(\overline{\alpha}\) and violation at most \(\overline{p}\,\overline{\epsilon}\)._
Proof.: This is an immediate consequence of Theorem 1 and Proposition 2.
Note also that, since \(\psi_{\mathbf{p}}\) is nonnegative, \(T_{i}\) is also almost nonexpansive in expectation whenever \(T_{1}\) is a\(\alpha\)-fne; the corresponding Markov operator is almost nonexpansive in measure with the corresponding violation whenever conditions (a)-(b) of Theorem 3 are satisfied.
In preparation for the next refinements, following [20] we lift the weighted transport discrepancy \(\psi_{\mathbf{p}}\) to the corresponding _invariant Markov transport discrepancy_\(\Psi:\,\mathscr{P}(G)\to\mathbb{R}_{+}\,\cup\,\{+\infty\}\) on the subset \(G\subset\mathcal{E}\) defined by
\[\Psi(\mu):=\inf_{\pi\in\mathrm{inv}\,\mathcal{P}}\inf_{\gamma\in\mathcal{C}_{ *}(\mu,\pi)}\left(\int_{G\times G}\mathbb{E}\left[\psi_{\mathbf{p}}(x,y,T_{ \xi}x,T_{\xi}y)\right]\;\gamma(dx,dy)\right)^{1/2}. \tag{41}\]
It is not guaranteed that both \(\mathrm{inv}\,\mathcal{P}\) and \(\mathcal{C}_{*}(\mu,\pi)\) are nonempty; when at least one of these is empty \(\Psi(\mu):=+\infty.\) It is clear that \(\Psi(\pi)=0\) for any \(\pi\in\mathrm{inv}\,\mathcal{P}.\)
### Special Case: consistent stochastic feasibility
The stochastic fixed point problem (6) is called _consistent_ in [18, 19, 20] when, for some closed subset \(G\subset\mathcal{E},\)
\[C:=\{x\in G\;|\,\mathbb{P}(x=T_{\xi}x)=1\,\}\neq\emptyset. \tag{42}\]
In this case, the notions developed above can be sharpened.
Recall that a _paracontraction_ is a continuous mapping \(T:\,G\to G\,\) possessing fixed points that satisfies
\[\|T(x)-y\|<\|x-y\|\quad\forall y\in\mathsf{Fix}\;T,\forall x\in G\setminus \mathsf{Fix}\;T.\]
Any \(\alpha\)-fne mapping on a Euclidean space, for example, is a paracontraction.
The notion of paracontractions extends to random function iterations for consistent stochastic feasibility. Continuous self-mappings \(T_{i}:\,G\to G\;\left(i\in\mathbb{I}\right)\) are _paracontractions in expectation_ with respect to the weighted norm \(\|z\|_{\mathbf{p}}\) whenever
\[C\neq\emptyset\quad\text{and}\quad\mathbb{E}\left[\|T_{\xi}x-y\|_{\mathbf{p}}\right]<\|x-y\|_{\mathbf{p}}\quad\forall y\in C,\forall x\in G\setminus C. \tag{43}\]
The next result shows that, for consistent stochastic feasibility, collections of mappings \(T_{i}\) defined in Theorem 1 with \(\overline{\epsilon}=0\) are paracontractions in expectation.
**Corollary 4** (paracontractions in expectation).: _Let the single-valued self-mappings \(\{T_{i}\}_{i\in\mathbb{I}}\) on \(G\) satisfy_
1. \(T_{i}\) _is the identity mapping on_ \(\mathcal{E}_{M_{i}^{\circ}}\)_;_
2. _for every_ \(z\in G_{M_{i}^{\circ}}\)_,_ \(T_{i}\) _is_ \(\alpha\)_-fne on_ \(G_{M_{i}}\bigoplus\{z\}_{M_{i}^{\circ}}\) _with constant_ \(\overline{\alpha}\) _for all_ \(i\)_;_
3. \(C:=\{x\in G\;|\,\mathbb{P}(x=T_{\xi}x)=1\,\}\neq\emptyset.\)__
_Then the mapping \(\Phi:\,G\times\mathbb{I}\to G\,\) given by \(\Phi(x,i)=T_{i}x\) is a paracontraction in expectation:_
\[\mathbb{E}\left[\|\Phi(x,\xi)-\Phi(y,\xi)\|^{2}_{\mathbf{p}}\right]<\|x-y\|^{ 2}_{\mathbf{p}}\quad\forall x\in G\setminus C,\forall y\in C. \tag{44}\]
Proof.: Note that, for \(y\in\mathsf{Fix}\;T_{i}\), \(\psi_{\mathbf{p}}(x,y,T_{i}x,T_{i}y)\) takes the value \(0\) only when \(x\in\mathsf{Fix}\;T_{i}\) as well; hence, applying Theorem 1 with \(\overline{\epsilon}=0\), for all \(y\in C\)
\[\mathbb{E}\left[\|T_{\xi}(x)-T_{\xi}(y)\|^{2}_{\mathbf{p}}\right]<\|x-y\|^{2} _{\mathbf{p}}\quad\forall x\in G\setminus C. \tag{45}\]
To show the analogous result for the Markov operator \(\mathcal{P}\) requires more work. A Markov operator is a paracontraction with respect to the weighted Wasserstein metric \(d_{W_{2,\mathbf{p}}}\) whenever

\[\mathrm{inv}\,\mathcal{P}\neq\emptyset\quad\text{and}\quad d_{W_{2,\mathbf{p}}}(\mu\mathcal{P},\pi)<d_{W_{2,\mathbf{p}}}(\mu,\pi)\quad\forall\pi\in\mathrm{inv}\,\mathcal{P},\forall\mu\in\mathscr{P}(G)\setminus\mathrm{inv}\,\mathcal{P}. \tag{46}\]
In the case of consistent stochastic feasibility, the invariant Markov transport discrepancy reduces to a very simple form. Indeed, note first of all that a \(\delta\)-distribution centered on any point \(x\in C\) is invariant with respect to \(\mathcal{P}\) so the set of invariant measures supported on \(C\),
\[\mathscr{C}:=\left\{\mu\in\mathrm{inv}\,\mathcal{P}\;|\,\mathrm{supp}\,\mu \subset C\,\right\}, \tag{47}\]
is nonempty whenever \(C\) is. Now suppose \(\pi\in\mathscr{C}\). Then \(y=T_{\xi}y\) almost surely whenever \(y\in\operatorname{supp}\pi\) and (35) yields
\[\inf_{\gamma\in\mathscr{C}_{*}(\mu,\pi)}\left(\int_{G\times G} \mathbb{E}\left[\psi_{\mathbf{p}}(x,y,T_{\xi}x,T_{\xi}y)\right]\;\gamma(dx,dy) \right)^{1/2} =\inf_{\gamma\in\mathscr{C}_{*}(\mu,\pi)}\left(\int_{G\times G} \mathbb{E}\left[\|x-T_{\xi}x\|_{\mathbf{p}}^{2}\right]\;\gamma(dx,dy)\right)^{1 /2}\] \[=\left(\int_{G}\mathbb{E}\left[\|x-T_{\xi}x\|_{\mathbf{p}}^{2} \right]\;\mu(dx)\right)^{1/2}\] \[=\left(\int_{G}\|x-T_{1}x\|^{2}\,\mu(dx)\right)^{1/2}\quad\forall \pi\in\mathscr{C}. \tag{48}\]
Thus the invariant Markov transport discrepancy defined in (41) has the following simple upper bound:
\[\Psi(\mu) :=\inf_{\pi\in\operatorname{inv}\mathcal{P}}\inf_{\gamma\in\mathcal{C}_{*}(\mu,\pi)}\left(\int_{G\times G}\mathbb{E}\left[\psi_{\mathbf{p}}(x,y,T_{\xi}x,T_{\xi}y)\right]\;\gamma(dx,dy)\right)^{1/2}\] \[\leq\inf_{\pi\in\mathscr{C}}\inf_{\gamma\in\mathcal{C}_{*}(\mu,\pi)}\left(\int_{G\times G}\mathbb{E}\left[\psi_{\mathbf{p}}(x,y,T_{\xi}x,T_{\xi}y)\right]\;\gamma(dx,dy)\right)^{1/2}\] \[=\,\left(\int_{G}\|x-T_{1}x\|^{2}\,\mu(dx)\right)^{1/2}, \tag{49}\]
where the last equality follows from (48). Inequality (49) is tight for all \(\mu\) supported on \(C\), so clearly \(\mu\in\mathscr{C}\) implies that \(\Psi(\mu)=0\). On the other hand, if \(\Psi(\mu)=0\) implies that \(\operatorname{supp}\mu\subset C\), then \(\mathscr{C}=\operatorname{inv}\mathcal{P}\) and (49) holds with equality for all \(\mu\). This holds, in particular, when \(T_{i}\) is a paracontraction in expectation (see [18, Lemma 3.3] and Theorem 5 below).
Let's assume, then, that \(\Psi(\mu)=0\) if and only if \(\operatorname{supp}\mu\subset C\). Then
\[d_{W_{2,\mathbf{p}}}(\mu,\operatorname{inv}\mathcal{P})=\left(\int_{G}\inf_{z \in C}\|x-z\|_{\mathbf{p}}^{2}\mu(dx)\right)^{1/2},\]
and (49) holds with equality, so
\[d_{W_{2,\mathbf{p}}}(\mu,\operatorname{inv}\mathcal{P})=d_{W_{2,\mathbf{p}}}( \mu,\Psi^{-1}(0))=\left(\int_{G}\inf_{z\in C}\|x-z\|_{\mathbf{p}}^{2}\mu(dx) \right)^{1/2}. \tag{50}\]
**Theorem 5** (Markov operators of paracontractions in expectation).: _Let \(G\subset\mathcal{E}\) be closed. If the continuous self-mappings \(T_{i}:\,G\to G\) (\(i\in\mathbb{I}\)) defined by (3) are paracontractions in expectation on \(G\) with respect to the weighted norm \(\|\cdot\|_{\mathbf{p}}\) defined by (31), then_
1. _the associated Markov operator_ \(\mathcal{P}\) _is a paracontraction with respect to_ \(d_{W_{2,\mathbf{p}}}\)_;_
2. _if_ \(G\) _is bounded, the set of invariant measures for_ \(\mathcal{P}\) _is_ \(\{\pi\in\mathscr{P}(G)\;|\operatorname{supp}\pi\subset C\,\}\)_;_
3. _if_ \(G\) _is bounded,_ \[(\forall x\in G)\quad\Psi(\delta_{x}) =\|x-T_{1}(x)\|\qquad\text{and}\] (51a) \[\frac{1}{\sqrt{\overline{p}}}\inf_{z\in C}\|x-z\| \leq\inf_{z\in C}\|x-z\|_{\mathbf{p}}=d_{W_{2,\mathbf{p}}}(\delta_{x},\operatorname{inv}\mathcal{P})=d_{W_{2,\mathbf{p}}}\left(\delta_{x},\Psi^{-1}(0)\right).\] (51b)
Proof.: (i). For a random variable \(X\sim\mu\), we have \(T_{\xi}X=\Phi(X,\xi)\sim\mu\mathcal{P}\), and for a random variable \(Y\sim\pi\in\operatorname{inv}\mathcal{P}\) we have \(T_{\xi}Y=\Phi(Y,\xi)\sim\pi\mathcal{P}=\pi\), so
\[d_{W_{2,\mathbf{p}}}(\mu\mathcal{P},\pi) =\left(\inf_{\gamma\in\mathscr{C}(\mu\mathcal{P},\pi)}\int_{G\times G }\|x^{+}-y\|_{\mathbf{p}}^{2}\gamma(dx^{+},dy)\right)^{1/2}\] \[\leq\left(\inf_{\gamma\in\mathscr{C}(\mu,\pi)}\int_{G\times G} \mathbb{E}\left[\|T_{\xi}x-y\|_{\mathbf{p}}^{2}\right]\gamma(dx,dy)\right)^{1/2}\] \[<\left(\inf_{\gamma\in\mathscr{C}(\mu,\pi)}\int_{G\times G}\|x-y \|_{\mathbf{p}}^{2}\gamma(dx,dy)\right)^{1/2}\] \[=d_{W_{2,\mathbf{p}}}(\mu,\pi)\qquad\forall\pi\in\operatorname{ inv}\mathcal{P},\forall\mu\in\mathscr{P}_{2}(G)\setminus\operatorname{inv} \mathcal{P}, \tag{52}\]
where the last inequality follows from the assumption that the \(T_{i}\) defined by (3) are paracontractions in expectation with respect to the weighted norm \(\|\cdot\|_{\mathbf{p}}\). This establishes that \(\mathcal{P}\) is a paracontraction in the \(d_{W_{2,\mathbf{p}}}\) metric as claimed.
(ii). Our proof follows the proof of [18, Lemma 3.3]. It is clear that \(\pi\in\mathscr{P}(G)\) with \(\operatorname{supp}\pi\subset C\subset G\) is invariant, since \(p(x,\{x\})=\mathbb{P}(T_{\xi}x\in\{x\})=\mathbb{P}(x\in\operatorname{Fix}T_{ \xi})=1\) for all \(x\in C\) and hence \(\pi\mathcal{P}(A)=\int_{C}p(x,A)\pi(\operatorname{d}x)=\pi(A)\) for all \(A\in\mathcal{B}(G)\).
Suppose, on the other hand, that \(\operatorname{supp}\pi\setminus C\neq\emptyset\) for some \(\pi\in\operatorname{inv}\mathcal{P}\) with \(\operatorname{supp}\pi\subset G\). Then due to compactness of \(\operatorname{supp}\pi\) (it is closed in the compact set \(G\)) we can find \(s\in\operatorname{supp}\pi\) maximizing the continuous function \(d(\cdot,C):=\inf_{z\in C}\|\cdot-z\|\) on \(G\). So \(d_{\max}=d(s,C)>0\). We show that this leads only to contradictions, so the assumption of the existence of such a \(\pi\) must be false.
Define the set of points being more than \(d_{\max}-\epsilon\) away from \(C\):
\[K(\epsilon):=\{x\in G\mid d(x,C)>d_{\max}-\epsilon\,\}\,,\qquad\epsilon\in(0,d _{\max}).\]
This set is measurable, i.e. \(K(\epsilon)\in\mathcal{B}(G)\), because it is open. Let \(M(\epsilon)\) be the event in the sigma algebra \(\mathcal{F}\), that \(T_{\xi}s\) is at least \(\epsilon\) closer to \(C\) than \(s\), i.e.
\[M(\epsilon):=\big{\{}\omega\in\Omega\,\big{|}\,d(T_{\xi(\omega)}s,C)\leq d_{ \max}-\epsilon\,\big{\}}\,.\]
There are two possibilities, either there is an \(\epsilon\in(0,d_{\max})\) with \(\mathbb{P}(M(\epsilon))>0\) or no such \(\epsilon\) exists. In the latter case we have \(\mathbb{E}\left[d(T_{\xi}s,C)\right]=d_{\max}=d(s,C)\) since \(T_{i}\) is a paracontraction in expectation. By compactness of \(C\) there exists \(c\in C\) such that \(0<d_{\max}=\|s-c\|\). Hence the probability of the set of \(\omega\in\Omega\) such that \(s\not\in\operatorname{Fix}T_{\xi(\omega)}\) is positive and so \(\mathbb{E}\left[d(T_{\xi(\omega)}s,C)\right]\leq\mathbb{E}\left[\|T_{\xi( \omega)}s-c\|\right]<\|s-c\|\) - a contradiction.
Suppose next that there is an \(\epsilon\in(0,d_{\max})\) with \(\mathbb{P}(M(\epsilon))>0\). In view of continuity of the mappings \(T_{i}\) around \(s\), \(i\in\mathbb{I}\), define
\[A_{n}:=\big{\{}\omega\in M(\epsilon)\,\big{|}\,\|T_{\xi(\omega)}x-T_{\xi(\omega)}s\|\leq\tfrac{\epsilon}{2}\quad\forall x\in\mathbb{B}(s,\tfrac{1}{n})\,\big{\}}\quad(n\in\mathbb{N}).\]
It holds that \(A_{n}\subset A_{n+1}\) and \(\mathbb{P}(\bigcup_{n}A_{n})=\mathbb{P}(M(\epsilon))\). So in particular there is an \(m\in\mathbb{N}\), \(m\geq 2/\epsilon\) with \(\mathbb{P}(A_{m})>0\). For all \(x\in\mathbb{B}(s,\tfrac{1}{m})\) and all \(\omega\in A_{m}\) we have
\[d(T_{\xi(\omega)}x,C)\leq\|T_{\xi(\omega)}x-T_{\xi(\omega)}s\|+d(T_{\xi( \omega)}s,C)\leq d_{\max}-\frac{\epsilon}{2},\]
which means \(T_{\xi(\omega)}x\in G\setminus K(\tfrac{\epsilon}{2})\). Hence, in particular we conclude that
\[p(x,K(\tfrac{\epsilon}{2}))<1\quad\forall x\in\mathbb{B}(s,\tfrac{1}{m}).\]
Since \(p(x,K(\epsilon))=0\) for \(x\in G\) with \(d(x,C)\leq d_{\max}-\epsilon\) by the assumption that \(T_{i}\) is a paracontraction in expectation, it holds by invariance of \(\pi\) that
\[\pi(K(\epsilon))=\int_{G}p(x,K(\epsilon))\pi(\operatorname{d}x)=\int_{K( \epsilon)}p(x,K(\epsilon))\pi(\operatorname{d}x).\]
It follows, then, that
\[\pi(K(\tfrac{\epsilon}{2})) =\int_{K(\tfrac{\epsilon}{2})}p(x,K(\tfrac{\epsilon}{2}))\pi( \operatorname{d}x)\] \[=\int_{\mathbb{B}(s,\tfrac{1}{m})}p(x,K(\tfrac{\epsilon}{2}))\pi( \operatorname{d}x)+\int_{K(\tfrac{\epsilon}{2})\setminus\mathbb{B}(s,\tfrac{1}{ m})}p(x,K(\tfrac{\epsilon}{2}))\pi(\operatorname{d}x)\] \[<\pi(\mathbb{B}(s,\tfrac{1}{m}))+\pi(K(\tfrac{\epsilon}{2}) \setminus\mathbb{B}(s,\tfrac{1}{m}))=\pi(K(\tfrac{\epsilon}{2}))\]
which leads again to a contradiction. So the assumption that \(\operatorname{supp}\pi\setminus C\neq\emptyset\) is false, i.e. \(\operatorname{supp}\pi\subset C\) as claimed.
(iii). By part (ii), \(\operatorname{inv}\mathcal{P}=\{\pi\in\mathscr{P}(G)\mid\operatorname{supp}\pi \subset C\,\}\), so (49) holds with equality, and \(\Psi(\mu)=0\) if and only if \(\operatorname{supp}\mu\subset C\), hence writing (50) pointwise (i.e., for \(\mu=\delta_{x}\)) reduces the expression to
\[\inf_{z\in C}\|x-z\|_{\mathbf{p}}=d_{W_{2,\mathbf{p}}}(\delta_{x},\operatorname {inv}\mathcal{P})=d_{W_{2,\mathbf{p}}}(\delta_{x},\Psi^{-1}(0)).\]
The representation (51) then follows from \(\overline{p}:=\max_{j}\{p_{j}\}\).
## 4 Convergence
Contractive Markov operators have been studied extensively, indeed almost exclusively. When the update function \(\varPhi\) is a contraction in expectation, then [20, Theorem 2.12] shows that the corresponding Markov operator \(\mathcal{P}\) is \(\alpha\)-fne, and the sequence of measures \((\mu_{k})\) converges Q-linearly (geometrically) to an invariant measure from any starting measure \(\mu_{0}\in\mathscr{P}(\mathcal{E})\). When the mappings \(T_{i}\) are only \(\alpha\)-firmly nonexpansive on \(\mathcal{E}\), then \(\mu_{k}\) converges in the Prokhorov-Levi metric to an invariant measure from any initial measure [19, Theorem 2.9]. To obtain generic (weak) convergence of the iterates \(\mu_{k}\) one must show that the sequence is _tight_. This has been established for Markov operators with nonexpansive update functions [19, Lemma 3.19]. We skirt a study of whether tightness can be established under the assumption that the update functions \(\varPhi(x,i)\) are only nonexpansive in expectation; we suspect, however, that this is not the case.
### Generic proto-convergence
We establish a few properties that are cornerstones of a generic global convergence analysis. In particular, we show that when the Markov operator is \(\alpha\)-fne in measure (which, as shown above, does not require that all the mappings \(T_{i}\) be \(\alpha\)-fne) this property together with an additional assumption about the decay of the invariant Markov transport discrepancy yields boundedness and _asymptotic regularity_ of the sequence of measures.
**Proposition 6** (asymptotic regularity).: _Let the Markov operator \(\mathcal{P}:\mathscr{P}_{2}(\mathcal{E})\to\mathscr{P}_{2}(\mathcal{E})\) with update functions \(\varPhi(x,i)\) possess at least one invariant measure and be pointwise \(\alpha\)-fne in measure at all \(\pi\in\operatorname{inv}\mathcal{P}\). If the invariant Markov transport discrepancy satisfies_
\[\exists c>0:\quad\varPsi(\mu)\geq cd_{W_{2,\mathfrak{p}}}(\mu,\mu\mathcal{P}) \quad\forall\mu\in\mathscr{P}_{2}(\mathcal{E}), \tag{53}\]
_then the sequence \((\mu_{k})_{k\in\mathbb{N}}\) defined by \(\mu_{k+1}=\mu_{k}\mathcal{P}\) for any \(\mu_{0}\in\mathscr{P}_{2}(\mathcal{E})\) is bounded and asymptotically regular, i.e. satisfies \(d_{W_{2,\mathfrak{p}}}(\mu_{k},\mu_{k+1})\to 0\)._
_Proof_. Note that (53) implies that there is a \(c>0\) such that
\[c^{2}d_{W_{2,\mathfrak{p}}}(\mu,\mu\mathcal{P})^{2}\leq\int_{\mathcal{E}\times \mathcal{E}}\mathbb{E}\left[\psi_{\mathbf{p}}(x,y,\varPhi(x,\xi),\varPhi(y, \xi))\right]\gamma(dx,dy)\quad\forall\pi\in\operatorname{inv}\mathcal{P},\ \forall\gamma\in C_{*}(\pi,\mu).\]
This together with the assumption that \(\mathcal{P}\) is \(\alpha\)-fne yields
\[0\leq d_{W_{2,\mathfrak{p}}}(\mu\mathcal{P},\pi)^{2} \leq d_{W_{2,\mathfrak{p}}}(\mu,\pi)^{2}-\frac{1-\alpha}{\alpha}\int_{\mathcal{E}\times\mathcal{E}}\mathbb{E}\left[\psi_{\mathbf{p}}(x,y,\varPhi(x,\xi),\varPhi(y,\xi))\right]\gamma(dx,dy)\] \[\leq d_{W_{2,\mathfrak{p}}}(\mu,\pi)^{2}-\frac{1-\alpha}{\alpha}c^{2}d_{W_{2,\mathfrak{p}}}(\mu,\mu\mathcal{P})^{2}\quad\forall\pi\in\operatorname{inv}\mathcal{P},\ \forall\gamma\in C_{*}(\pi,\mu). \tag{54}\]
Applying (54) to the sequence of measures generated by \(\mu_{k+1}=\mu_{k}\mathcal{P}\) with \(\mu_{0}\in\mathscr{P}_{2}(\mathcal{E})\) yields
\[\frac{1-\alpha}{\alpha}c^{2}\sum_{k=1}^{N}d_{W_{2,\mathfrak{p}}}(\mu_{k},\mu_ {k+1})^{2}\leq d_{W_{2,\mathfrak{p}}}(\mu_{0},\pi)^{2}\quad\forall\pi\in \operatorname{inv}\mathcal{P},\forall N\in\mathbb{N}.\]
Letting \(N\to\infty\) establishes that the left hand side is summable, hence \(\liminf d_{W_{2,\mathfrak{p}}}(\mu_{k},\mu_{k+1})=0\). But \(\mathcal{P}\) is also pointwise nonexpansive at all \(\pi\in\operatorname{inv}\mathcal{P}\) since it is pointwise \(\alpha\)-fne there, so \(d_{W_{2,\mathfrak{p}}}(\mu_{k},\pi)\leq d_{W_{2,\mathfrak{p}}}(\mu_{0},\pi)\) for all \(k\) and \(d_{W_{2,\mathfrak{p}}}(\mu_{k},\mu_{k+1})\to 0\); i.e. the sequence is bounded and asymptotically regular as claimed.
In the next section we pursue a quantitative local convergence analysis under the assumption of _metric subregularity_ of the invariant Markov transport discrepancy.
### Metric subregularity of the invariant Markov transport discrepancy, convergence and rates
Recall the inverse mapping \(\Psi^{-1}(y):=\{\mu\mid\Psi(\mu)=y\,\}\), which clearly can be set-valued. It is important to keep in mind that an invariant measure need not correspond to a fixed point of any individual mapping \(T_{i}\), unless these have common fixed points. See [19, 20] for instances of this. We require that the invariant Markov transport discrepancy \(\Psi\) takes the value \(0\) at \(\mu\) if and only if \(\mu\in\operatorname{inv}\mathcal{P}\), and is _gauge metrically subregular for \(0\) relative to \(\mathscr{P}_{2}(G)\) on \(\mathscr{P}_{2}(G)\)_:
\[d_{W_{2,\mathfrak{p}}}(\mu,\operatorname{inv}\mathcal{P})=d_{W_{2,\mathfrak{p }}}(\mu,\Psi^{-1}(0))\leq\rho(\Psi(\mu))\quad\forall\mu\in\mathscr{P}_{2}(G). \tag{55}\]
Here \(d_{W_{2,\mathfrak{p}}}(\mu,\operatorname{inv}\mathcal{P})=\inf_{\pi\in \operatorname{inv}\mathcal{P}}d_{W_{2,\mathfrak{p}}}(\mu,\pi)\), and \(\rho:[0,\infty)\to[0,\infty)\) is a _gauge function_: it is continuous, strictly increasing with \(\rho(0)=0\), and \(\lim_{t\to\infty}\rho(t)=\infty\). The gauge of metric subregularity \(\rho\) is constructed implicitly from another nonnegative function \(\theta_{\tau,\epsilon}:\,[0,\infty)\to[0,\infty)\,\) with parameters \(\tau>0\) and \(\epsilon\geq 0\) satisfying
\[(i)\ \theta_{\tau,\epsilon}(0)=0;\quad(ii)\ 0<\theta_{\tau,\epsilon}(t)<t\ \forall t\in(0,\overline{t}]\text{ for some }\overline{t}>0 \tag{56}\]
and
\[\rho\left(\left(\frac{(1+\epsilon)t^{2}-(\theta_{\tau,\epsilon}(t))^{2}}{\tau }\right)^{1/2}\right)=t\quad\Longleftrightarrow\quad\theta_{\tau,\epsilon}(t) =\left((1+\epsilon)t^{2}-\tau\left(\rho^{-1}(t)\right)^{2}\right)^{1/2} \tag{57}\]
for \(\tau>0\) fixed. In the next theorem the parameter \(\epsilon\) is exactly the violation in \(\operatorname{a}\!\alpha\)-fne mappings; the parameter \(\tau\) is directly computed from the constant \(\alpha\).
In preparation for the results that follow, we will require at least one of the following additional assumptions on \(\theta_{\tau,\epsilon}\).
**Assumption 2**.: _The gauge \(\theta_{\tau,\epsilon}\) satisfies (56) and at least one of the following holds._
1. \(\theta_{\tau,\epsilon}\) _satisfies_ \[\theta_{\tau,\epsilon}^{(k)}(t)\to 0\text{ as }k\to\infty\ \forall t\in(0,\overline{t}),\] (58) _and the sequence_ \((\mu_{k})\) _is Fejer monotone with respect to_ \(\operatorname{inv}\mathcal{P}\cap\mathscr{P}_{2}(G)\)_, i.e._ \[d_{W_{2,\mathfrak{p}}}\left(\mu_{k+1},\,\pi\right)\leq d_{W_{2,\mathfrak{p}}}(\mu_{k},\pi)\quad\forall k\in\mathbb{N},\forall\pi\in\operatorname{inv}\mathcal{P}\cap\mathscr{P}_{2}(G);\] (59)
2. \(\theta_{\tau,\epsilon}\) _satisfies_ \[\sum_{j=1}^{\infty}\theta_{\tau,\epsilon}^{(j)}(t)<\infty\ \forall t\in(0, \overline{t})\] (60) _where_ \(\theta_{\tau,\epsilon}^{(j)}\) _denotes the_ \(j\)_-times composition of_ \(\theta_{\tau,\epsilon}\)_._
In the case of linear metric subregularity this becomes
\[\rho(t)=\kappa t\quad\Longleftrightarrow\quad\theta_{\tau,\epsilon}(t)= \left((1+\epsilon)-\frac{\tau}{\kappa^{2}}\right)^{1/2}t\quad(\kappa\geq\sqrt{ \frac{\tau}{(1+\epsilon)}}).\]
The condition \(\kappa\geq\sqrt{\frac{\tau}{(1+\epsilon)}}\) is not a real restriction since, if (55) is satisfied for some \(\kappa^{\prime}>0\), then it is satisfied for all \(\kappa\geq\kappa^{\prime}\). The conditions in (56) in this case simplify to \(\theta_{\tau,\epsilon}(t)=\sqrt{\gamma}\,t\) where
\[0<\gamma:=1+\epsilon-\frac{\tau}{\kappa^{2}}<1\quad\Longleftrightarrow\quad \sqrt{\frac{\tau}{(1+\epsilon)}}\leq\kappa\leq\sqrt{\frac{\tau}{\epsilon}}. \tag{61}\]
In other words, \(\theta_{\tau,\epsilon}(t)\) satisfies Assumption 2(b). The weaker Assumption 2(a) is used to characterize sublinear convergence.
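For orientation (an illustrative specialization, not needed in the sequel): with this linear gauge and \(\theta_{\tau,\epsilon}(t)=\sqrt{\gamma}\,t\), the quantities appearing in Theorem 7 below reduce to

\[\theta_{\tau,\epsilon}^{(k)}(t_{0})=\gamma^{k/2}\,t_{0}\qquad\text{and}\qquad s_{k}(t_{0}):=\sum_{j=k}^{\infty}\theta_{\tau,\epsilon}^{(j)}(t_{0})=\frac{\gamma^{k/2}}{1-\sqrt{\gamma}}\,t_{0},\]

so both rate guarantees in Theorem 7 are R-linear: \(d_{W_{2,\mathbf{p}}}(\mu_{k},\operatorname{inv}\mathcal{P})=O(\gamma^{k/2})\).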
**Theorem 7** (convergence rates).: _Let \(G\subset\mathcal{E}\) be compact. Let \(T_{i}:\,G\to G\,\) satisfy the assumptions of Theorem 1 for all \(i\in\mathbb{I}\). Assume furthermore that there is at least one \(\pi\in\operatorname{inv}\mathcal{P}\cap\mathscr{P}_{2}(G)\) where \(\mathcal{P}\) is the Markov operator associated with \(T_{i}\). If, in addition, \(\Psi\) satisfies (55) with gauge \(\rho\) given implicitly by (57) in terms of \(\theta_{\tau,\epsilon}\) where \(\tau=(1-\overline{\alpha})/\overline{\alpha}\), \(\epsilon=\overline{p}\,\overline{\epsilon}\) as in Theorem 1, then for any \(\mu_{0}\in\mathscr{P}_{2}(G)\) the distributions \(\mu_{k}\) of the iterates of Algorithm 1 satisfy_
\[d_{W_{2,\mathbf{p}}}\left(\mu_{k+1},\operatorname{inv}\mathcal{P}\right)\leq \theta_{\tau,\epsilon}\left(d_{W_{2,\mathbf{p}}}\left(\mu_{k},\operatorname{ inv}\mathcal{P}\right)\right)\quad\forall k\in\mathbb{N}. \tag{62}\]
_In addition, let \(\tau\) and \(\epsilon\) be such that \(\theta_{\tau,\epsilon}\) satisfies (56) where \(t_{0}:=d_{W_{2,\mathbf{p}}}\left(\mu_{0},\operatorname{inv}\mathcal{P}\right)< \overline{t}\) for all \(\mu_{0}\in\mathscr{P}_{2}(G)\), and let at least one of the conditions in Assumption 2 hold. Then \(\mu_{k}\to\pi^{\mu_{0}}\in\operatorname{inv}\mathcal{P}\cap\mathscr{P}_{2}(G)\) in the \(d_{W_{2,\mathbf{p}}}\) metric with rate \(O\left(\theta_{\tau,\epsilon}^{(k)}(t_{0})\right)\) in case Assumption 2(a) and with rate \(O(s_{k}(t_{0}))\) for \(s_{k}(t):=\sum_{j=k}^{\infty}\theta_{\tau,\epsilon}^{(j)}(t)\) in case Assumption 2(b)._
Proof.: First we note that, since \(G\) is assumed to be compact, \(\mathcal{P}\) is a self-mapping on \(\mathscr{P}_{2}(G)\) and \(\mathscr{P}_{2}(G)\) is locally compact ([1, Remark 7.19]). By Theorem 1, the update function \(\varPhi\) is a\(\alpha\)-fne in expectation with respect to the weighted norm \(\|\cdot\|_{\mathbf{p}}\) with constant \(\overline{\alpha}\) and violation \(\overline{p}\,\overline{\epsilon}\). The statement is an extension of [20, Theorem 2.6], which establishes (62) and convergence under Assumption 2(b).
To establish convergence under Assumption 2(a), we show first that \(d_{W_{2,\mathbf{p}}}\left(\mu_{k},S\right)\to 0\) where, to reduce notational clutter, we define \(S:=\operatorname{inv}\mathcal{P}\cap\mathscr{P}_{2}(G)\). Indeed, let \(\pi\in S\) and define \(d_{k}^{\pi}:=d_{W_{2,\mathbf{p}}}\left(\mu_{k},\pi\right)\). Since, by the Fejer monotonicity assumption (59), \(d_{k+1}^{\pi}\leq d_{k}^{\pi}\) for all \(k\), the sequence \((d_{k}^{\pi})_{k\in\mathbb{N}}\) is bounded and monotone non-increasing, therefore convergent. Noting that \(d_{W_{2,\mathbf{p}}}\left(\mu_{k},S\right)\leq d_{k}^{\pi}\) for all \(k\) and any fixed \(\pi\in S\), this also shows that \(d_{W_{2,\mathbf{p}}}\left(\mu_{k},S\right)\) converges. The inequality (62) only requires assumption (56), and this together with assumption (58) yields
\[d_{W_{2,\mathbf{p}}}\left(\mu_{k},S\right)\leq\theta^{(k)}(t_{0})\to 0\text{ as }k\to\infty.\]
Since \(\mathscr{P}_{2}(G)\) is locally compact and \(\mathcal{P}\) is Feller since \(T_{i}\) is continuous for all \(i\), \(\operatorname{inv}\mathcal{P}\) is closed [16]; so for every \(k\in\mathbb{N}\) the infimum in \(d_{W_{2,\mathbf{p}}}\left(\mu_{k},S\right)\) is attained at some \(\pi_{k}\). Now, for such a \(\pi_{k}\) we have, again by Fejer monotonicity, that
\[d_{W_{2,\mathbf{p}}}(\mu_{l},\mu_{k})\leq d_{W_{2,\mathbf{p}}}(\mu_{l},\pi_{k})+d_{W_{2,\mathbf{p}}}(\mu_{k},\pi_{k})\leq d_{W_{2,\mathbf{p}}}(\mu_{l-1},\pi_{k})+d_{W_{2,\mathbf{p}}}(\mu_{k},\pi_{k})\leq\cdots\leq 2\,d_{W_{2,\mathbf{p}}}(\mu_{k},S)\quad(l\geq k).\]
Since the right hand side converges to \(0\) as \(k\to\infty\) this shows that the sequence is a Cauchy sequence on \((\mathscr{P}_{2}(G),W_{2})\) - a separable complete metric space [42, Theorem 6.9] - and therefore convergent to some probability measure \(\pi^{\mu_{0}}\in\mathscr{P}_{2}(G)\). The Markov operator \(\mathcal{P}\) is Feller and when a Feller Markov chain converges in distribution, it does so to an invariant measure: \(\pi^{\mu_{0}}\in\operatorname{inv}\mathcal{P}\) (see [16, Theorem 1.10]).
Note that
\[\forall\overline{\epsilon}>\epsilon,\forall t\in[0,t_{0}],\quad\theta_{\tau, \overline{\epsilon}}(t)>\theta_{\tau,\epsilon}(t).\]
It is common in optimization algorithms to encounter mappings whose violation \(\epsilon\) can be controlled by choosing a step length parameter small enough; the gradient descent operator is just such a mapping. This means that, if condition (56) and at least one of (a) or (b) in Assumption 2 is satisfied for _some_\(\epsilon\), and the violation of the fixed point mappings \(T_{i}\) can be made arbitrarily small, then Theorem 7 guarantees convergence with rate given by either \(O(\theta_{\tau,\epsilon}^{(k)})\) in case (a) or \(O(s_{k}(t_{0}))\) in case (b) for small enough step sizes on small enough neighborhoods of a fixed point.
#### 4.2.1 Special Case: consistent stochastic feasibility
Recall that, when \(\operatorname{inv}\mathcal{P}=\mathscr{C}\) defined by (47) (which, by Theorem 5 holds when \(\mathcal{P}\) is a paracontraction in measure) the relation (49) holds with equality, so condition (55) simplifies to
\[d_{W_{2,\mathbf{p}}}(\mu,\operatorname{inv}\mathcal{P})=d_{W_{2, \mathbf{p}}}(\mu,\Psi^{-1}(0)) \leq \rho(\Psi(\mu))\quad\forall\mu\in\mathscr{P}_{2}(G)\] \[\Longleftrightarrow\] \[\left(\int_{G}\inf_{z\in C}\|x-z\|_{\mathbf{p}}^{2}\mu(dx) \right)^{1/2} \leq \rho\left(\left(\int_{G}\|x-T_{1}x\|^{2}\,\mu(dx)\right)^{1/2} \right)\qquad\forall\mu\in\mathscr{P}_{2}(G). \tag{63}\]
Writing this pointwise (i.e., for \(\mu=\delta_{x}\)) reduces the expression to
\[\inf_{z\in C}\|x-z\|_{\mathbf{p}}\leq\rho\left(\|x-T_{1}x\|\right)\qquad\forall x \in G, \tag{64}\]
whereby, recalling that \(\overline{p}=\max_{j}\{p_{j}\}\), (55) yields
\[\tfrac{1}{\sqrt{\overline{p}}}d(x,C)\leq\inf_{z\in C}\|x-z\|_{\mathbf{p}}\leq\rho\left(\|x-T_{1}(x)\|\right)\qquad\forall x\in G. \tag{65}\]
This is recognizable as a slight generalization of the error bound studied by Luo and Tseng [28].
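For instance (an illustrative specialization), with the linear gauge \(\rho(t)=\kappa t\) of (61), inequality (65) reads

\[d(x,C)\leq\sqrt{\overline{p}}\,\kappa\,\|x-T_{1}(x)\|\qquad\forall x\in G,\]

which is the usual form of a linear error bound relative to the fixed point residual \(\|x-T_{1}(x)\|\).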
The next result shows that, for paracontractions, metric subregularity is _automatically_ satisfied by Markov chains that are _gauge monotone_ with respect to \(\operatorname{inv}\mathcal{P}\). Let \((X_{k})_{k\in\mathbb{N}}\) be a sequence of random variables on the closed subset \(G\subset\mathcal{E}\) generated by Algorithm 1, and let \((\mu_{k})_{k\in\mathbb{N}}\) be the corresponding sequence of distributions. Let \(\operatorname{inv}\mathcal{P}\) be nonempty and let the continuous mapping \(\theta:\,\mathbb{R}_{+}\to\mathbb{R}_{+}\,\) satisfy
\[(i)\ \theta(0)=0;\quad(ii)\ 0<\theta(t)\leq t\ \forall t\in(0,\overline{t}) \text{ for some }\overline{t}>0. \tag{66}\]
This is obviously the same as (56) but without the parameters since in this case \(\epsilon=0\) and \(\tau\) is just some scaling. For \(t_{0}:=d_{W_{2,\mathbf{p}}}\left(\mu_{0},\operatorname{inv}\mathcal{P}\right)\), the sequence \((\mu_{k})_{k\in\mathbb{N}}\) is said to be _gauge monotone relative to \(\operatorname{inv}\mathcal{P}\) with rate \(\theta\)_ whenever
\[d_{W_{2,\mathbf{p}}}(\mu_{k+1},\operatorname{inv}\mathcal{P})\leq\theta\left(d _{W_{2,\mathbf{p}}}(\mu_{k},\operatorname{inv}\mathcal{P})\right)\,\forall k \in\mathbb{N} \tag{67}\]
where \(\theta\) satisfies (66) with \(t_{0}<\overline{t}\). The sequence \((\mu_{k})_{k\in\mathbb{N}}\) is said to be _linearly monotone relative to \(\operatorname{inv}\mathcal{P}\)_ with rate \(c\) if (67) is satisfied for \(\theta(t)\leq c\cdot t\) for all \(t\in[0,t_{0}]\) and some constant \(c\in[0,1]\).
A Markov chain \((X_{k})_{k\in\mathbb{N}}\) that converges to some law \(\pi^{\mu_{0}}\in\mathscr{P}_{2}(G)\) is said to converge _gauge monotonically_ in distribution whenever the corresponding sequence of distributions \((\mu_{k})_{k\in\mathbb{N}}\) is gauge monotone with gauge \(\theta\) satisfying (66) with \(d_{W_{2,\mathbf{p}}}(\mu_{0},\operatorname{inv}\mathcal{P})\leq\overline{t}\).
**Proposition 8** (gauge monotonic paracontractions in measure converge to invariant measures).: _Let \(G\subset\mathcal{E}\) be compact. Let the Markov operator corresponding to Algorithm (1), \(\mathcal{P}:\,\mathscr{P}_{2}(G)\to\mathscr{P}_{2}(G)\,\), be a paracontraction with respect to the metric \(d_{W_{2,\mathbf{p}}}\). For a fixed \(\mu_{0}\in\mathscr{P}_{2}(G)\), let the sequence of measures \((\mu_{k})_{k\in\mathbb{N}}\) corresponding to the iterates of Algorithm 1 be gauge monotone relative to \(\operatorname{inv}\mathcal{P}\) with rate \(\theta\) satisfying (66) where \(t_{0}:=d_{W_{2,\mathbf{p}}}\left(\mu_{0},\operatorname{inv}\mathcal{P}\right)<\overline{t}\). Suppose furthermore that at least one of the conditions_ (a) _or_ (b) _of Assumption 2 is satisfied (replacing \(\theta_{\tau,\epsilon}\) with \(\theta\)). Then \((\mu_{k})_{k\in\mathbb{N}}\) converges gauge monotonically with respect to \(d_{W_{2,\mathbf{p}}}\) to some \(\pi^{\mu_{0}}\in\operatorname{inv}\mathcal{P}\cap\mathscr{P}_{2}(G)\) with rate \(O(\theta^{(k)}(t_{0}))\) if Assumption 2_(a) _holds, and in the case of Assumption 2_(b) _with rate \(O(s_{k}(t_{0}))\) for \(s_{k}(t):=\sum_{j=k}^{\infty}\theta^{(j)}(t)\) and \(t_{0}:=d_{W_{2,\mathbf{p}}}(\mu_{0},\operatorname{inv}\mathcal{P})\). Moreover, \(\operatorname{supp}\pi^{\mu_{0}}\subset C\) for \(C\) defined by (42)._
Proof.: In both cases, the proof of convergence with the respective rates follows exactly the proof of convergence in Theorem 7. For the last statement, Theorem 5(ii) establishes that \(\operatorname{supp}\pi^{\mu_{0}}\subset C\), which completes the proof.
The following is a generalization of [18, Theorem 3.15].
**Theorem 9** (necessity of metric subregularity for monotone sequences).: _Let \(G\subset\mathcal{E}\) be compact. Let the Markov operator corresponding to Algorithm (1), \(\mathcal{P}:\,\mathscr{P}_{2}(G)\to\mathscr{P}_{2}(G)\,\), be a paracontraction with respect to the weighted Wasserstein metric \(d_{W_{2,\mathbf{p}}}\). Suppose all sequences \((\mu_{k})_{k\in\mathbb{N}}\) corresponding to Algorithm 1 and initialized in \(\mathscr{P}_{2}(G)\) are gauge monotone relative to \(\operatorname{inv}\mathcal{P}\) with rate \(\theta\) satisfying (66) and at least one of the conditions in Assumption 2. Suppose, in addition, that \((\operatorname{Id}-\theta)^{-1}(\cdot)\) is continuous on \(\mathbb{R}_{+}\), strictly increasing, and \((\operatorname{Id}-\theta)^{-1}(0)=0\). Then \(\Psi\) defined by (41) is gauge metrically subregular for \(0\) relative to \(\mathscr{P}_{2}(G)\) on \(\mathscr{P}_{2}(G)\) with gauge \(\rho(\cdot)=(\operatorname{Id}-\theta)^{-1}(\cdot)\), i.e. \(\Psi\) satisfies (55)._
Proof.: If the sequence \((\mu_{k})_{k\in\mathbb{N}}\) is gauge monotone relative to \(\operatorname{inv}\mathcal{P}\) with rate \(\theta\) satisfying (66) and at least one of the conditions in Assumption 2, then by the triangle inequality
\[d_{W_{2,\mathbf{p}}}(\mu_{k+1},\mu_{k}) \geq d_{W_{2,\mathbf{p}}}(\mu_{k},\overline{\mu}_{k+1})-d_{W_{2, \mathbf{p}}}(\mu_{k+1},\overline{\mu}_{k+1})\] \[\geq d_{W_{2,\mathbf{p}}}(\mu_{k},\operatorname{inv}\mathcal{P})-d_{W_ {2,\mathbf{p}}}(\mu_{k+1},\operatorname{inv}\mathcal{P})\] \[\geq d_{W_{2,\mathbf{p}}}(\mu_{k},\operatorname{inv}\mathcal{P})- \theta\left(d_{W_{2,\mathbf{p}}}(\mu_{k},\operatorname{inv}\mathcal{P})\right) \geq 0\quad\forall k\in\mathbb{N}, \tag{68}\]
where \(\overline{\mu}_{k+1}\) is a metric projection of \(\mu_{k+1}\) onto \(\operatorname{inv}\mathcal{P}\) (exists since \(\operatorname{inv}\mathcal{P}\) is closed in \(\mathscr{P}_{2}(G)\)). On the other hand, by Theorem 5(ii), inequality (49) is tight, so \(\Psi^{-1}(0)=\operatorname{inv}\mathcal{P}\) and
\[\Psi(\mu_{k})=\left(\int_{\mathcal{G}}\|x-T_{1}x\|^{2}\mu_{k}(dx) \right)^{1/2} \geq\inf_{\gamma\in C(\mu_{k}\mathcal{P},\mu_{k})}\left(\int_{G \times G}\|x-y\|_{\mathbf{p}}^{2}\gamma(dx,dy)\right)^{1/2}\] \[=d_{W_{2,\mathbf{p}}}(\mu_{k+1},\mu_{k})\quad\forall k\in\mathbb{ N}. \tag{69}\]
Combining (68) and (69) yields
\[d(0,\Psi(\mu_{k}))=\Psi(\mu_{k})\geq d_{W_{2,\mathbf{p}}}(\mu_{k},\Psi^{-1}(0) )-\theta\left(d_{W_{2,\mathbf{p}}}(\mu_{k},\Psi^{-1}(0))\right)\quad\forall k \in\mathbb{N}. \tag{70}\]
By assumption \((\operatorname{Id}-\theta)^{-1}(\cdot)\) is continuous on \(\mathbb{R}_{+}\), strictly increasing, and \((\operatorname{Id}-\theta)^{-1}(0)=0\), so
\[(\operatorname{Id}-\theta)^{-1}\left(d(0,\Psi(\mu_{k}))\right)\geq d_{W_{2, \mathbf{p}}}(\mu_{k},\Psi^{-1}(0))\quad\forall k\in\mathbb{N}. \tag{71}\]
Since this holds for _any_ sequence \((\mu_{k})_{k\in\mathbb{N}}\) initialized in \(\mathscr{P}_{2}(G)\) and these converge by Proposition 8 to points in \(\operatorname{inv}\mathcal{P}\cap\mathscr{P}_{2}(G)\), we conclude that \(\Psi\) is metrically subregular for \(0\) on \(\mathscr{P}_{2}(G)\) with gauge \(\rho=(\operatorname{Id}-\theta)^{-1}\).
## 5 Block-Stochastic Splitting for Composite Optimization
We return now to stochastic blockwise methods for solving (1). It is already understood that the critical points of \(f+\sum_{j=1}^{m}g_{j}\), denoted \(\operatorname{crit}\left(f+\sum_{j=1}^{m}g_{j}\right)\), are fixed points of the deterministic, non-block versions of Algorithms 2 and 3; and the \(\delta\)-distributions centered on fixed points of the deterministic, non-block versions of these algorithms are invariant distributions of these same stochastic blockwise algorithms. When \(\operatorname{inv}\mathcal{P}=\mathscr{C}\) defined by (47), then in fact any \(\overline{x}\in C:=\left\{x\ |\ \mathbb{P}(x\in\mathsf{Fix}\ T_{\xi})=1\right\}\) is almost surely at least a stationary point. This leads to the following elementary observations.
**Lemma 10**.: _Let \(T_{i}\) defined by either (17) (if \(f\) is differentiable) or (19) be single-valued on \(\mathcal{E}\) and let \(\mathcal{P}\) be the Markov operator with update function \(T_{i}\). Then \(\operatorname{crit}\left(f+\sum_{j=1}^{m}g_{j}\right)\subset S:=\bigcup_{ \pi\in\operatorname{inv}\mathcal{P}}\operatorname{supp}\pi\), and if \(f\) and \(g_{j}\) (\(j=1,\ldots,m\)) are convex, then \(\overline{x}\in C\) if and only if \(\overline{x}\in\operatorname{crit}\left(f+\sum_{j=1}^{m}g_{j}\right)\) almost surely._
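To fix ideas, here is a minimal numerical sketch of the kind of stochastic blockwise forward-backward iteration analyzed in this section (in the spirit of Algorithm 2): at each step a random collection of blocks is updated by a gradient step on \(f\) followed by the prox of the corresponding \(g_{j}\), while the remaining blocks are left untouched. The quadratic \(f\), the \(\ell_{1}\) terms \(g_{j}\), the block structure, and the step size below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative problem: f(x) = 0.5 * ||A x - b||^2, g_j(x_j) = lam * ||x_j||_1.
n, d = 30, 8
A = rng.normal(size=(n, d))
b = rng.normal(size=n)
lam = 0.1

block_slices = [slice(0, 4), slice(4, 8)]       # m = 2 blocks
collections = [[0, 1], [0], [1]]                # index collections M_i
eta = [0.5, 0.25, 0.25]                         # P(xi = i)
t = 1.0 / np.linalg.norm(A, 2) ** 2             # illustrative step size

def grad_f(x):
    return A.T @ (A @ x - b)

def prox_l1(v, s):
    """Proximal map of s * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - s, 0.0)

def T(x, i):
    """Partial blockwise forward-backward update: blocks in M_i move,
    the remaining blocks are held fixed (identity)."""
    x_new = x.copy()
    g = grad_f(x)
    for j in collections[i]:
        s = block_slices[j]
        x_new[s] = prox_l1(x[s] - t * g[s], t * lam)
    return x_new

x = np.zeros(d)
for _ in range(2000):
    i = rng.choice(len(eta), p=eta)
    x = T(x, i)
print("approximate minimizer:", np.round(x, 3))
```

In this convex toy instance the minimizers of \(f+\sum_{j}g_{j}\) are common fixed points of all the \(T_{i}\), consistent with Proposition 16 below.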
### Regularity
In this section we determine the regularity of the blockwise mappings \(T_{i}\) for the two cases (17) and (19). In Theorem 1, the regularity constants \(\epsilon_{i}\) and \(\alpha_{i}\) are bounded above by the constants of \(T_{1}\), which is the mapping including all of the blocks. It suffices, then, to determine the regularity of \(T_{1}\) for the two cases (17) and (19).
**Proposition 11** (regularity of partial resolvents).: _For \(j=1,2,\ldots,m\), for each vector of parameters \(x\in G\subset\mathcal{E}\), let \(f_{j}(\cdot;x):G_{j}\subset\mathcal{E}_{j}\to(-\infty,+\infty]\) defined by (16) be subdifferentially regular with subdifferentials satisfying_
\[\exists\tau_{f_{j}}\geq 0:\quad\forall x\in G,\ \forall u_{j},v_{j}\in G_{j},\ \forall z_{j}\in t_{j}\partial f_{j}(u_{j};x),\ \forall w_{j}\in t_{j}\partial f_{j}(v_{j};x),\] \[-\tfrac{\tau_{f_{j}}}{2}\left\|(u_{j}+z_{j})-(v_{j}+w_{j})\right\|^{2}\leq\left\langle z_{j}-w_{j},\ u_{j}-v_{j}\right\rangle. \tag{72}\]
_For \(f_{t}(u;x):=\sum_{j=1}^{m}t_{j}f_{j}(u_{j};x)\), the resolvent \(J_{\partial f_{t},1}\) is \(a\alpha\)-fne with constant \(\alpha_{f}=1/2\) and violation \(\tau_{f}=\max_{j}\{\tau_{f_{j}}\}\) on \(G\). If \(f_{j}\) is convex on \(\mathcal{E}_{j}\) for each \(j=1,2,\ldots,m\), then \(J_{\partial f_{t},1}\) is \(\alpha\)-fne with constant \(\alpha_{f}=1/2\) and no violation on \(\mathcal{E}\)._
Condition (72) generalizes the notion of hypomonotonicity [38] and is satisfied by any _prox-regular_ function.
Proof.: By [27, Proposition 2.3(iv)], condition (72) is equivalent to \(J_{\partial f_{j},t_{j}}\) being a\(\alpha\)-fne on \(G_{j}\) with constant \(\alpha_{f_{j}}=1/2\) and violation \(\tau_{f_{j}}\). Extending this, for \(f_{t}(x):=\sum_{j=1}^{m}t_{j}f_{j}(u_{j};x)\) we have \(\partial f_{t}(u;x)=\left[\left[t_{1}\partial_{u_{1}}f_{1}(u_{1};x)\right]^{T},\ldots,\left[t_{m}\partial_{u_{m}}f_{m}(u_{m};x)\right]^{T}\right]^{T}\) and
\[\forall v,u\in G,\text{ for }z\in\partial f_{t}(u;x),\ w\in\partial f_{t}(v;x),\] \[\left\langle z-w,\ u-v\right\rangle =\sum_{j=1}^{m}\left\langle z_{j}-w_{j},\ u_{j}-v_{j}\right\rangle_{G_{j}}\] \[\geq\sum_{j=1}^{m}\frac{-\tau_{f_{j}}}{2}\left\|(u_{j}+z_{j})-(v_{j}+w_{j})\right\|_{G_{j}}^{2}\] \[\geq\frac{-\max_{j}\left\{\tau_{f_{j}}\right\}}{2}\sum_{j=1}^{m}\left\|(u_{j}+z_{j})-(v_{j}+w_{j})\right\|_{G_{j}}^{2}\] \[=\frac{-\tau_{f}}{2}\left\|(u+z)-(v+w)\right\|^{2}.\]
Application of [27, Proposition 2.3(iv)] to \(f_{t}\) establishes the claim. The convex statement follows from monotonicity of the subdifferential.
The following corollary is just the specialization of Proposition 11 to the case that \(f_{j}(\cdot;x)\) is independent of the parameter \(x\).
**Corollary 12** (regularity of resolvents of block separable functions).: _In the setting of Proposition 11 let \(h_{j}(\cdot):\,G_{j}\subset\mathcal{E}_{j}\to(-\infty,+\infty]\) satisfy_
\[\exists\tau_{h_{j}}\geq 0: \forall x_{j},y_{j}\in\mathcal{E}_{j},\ \forall z_{j}\in t_{j}\partial h_{j}(x_{j}),w_{j}\in t_{j} \partial h_{j}(y_{j}),\] \[-\frac{\tau_{h_{j}}}{2}\left\|(x_{j}+z_{j})-(y_{j}+w_{j})\right\| ^{2}\leq\left\langle z_{j}-w_{j},\ x_{j}-y_{j}\right\rangle. \tag{73}\]
_Then for \(h_{t}(x):=\sum_{j=1}^{m}t_{j}h_{j}(x_{j})\), the resolvent \(J_{\partial h_{t},1}\) is \(a\alpha\)-fne with constant \(\alpha_{h}=1/2\) and violation \(\tau_{h}=\max_{j}\left\{\tau_{h_{j}}\right\}\) on \(G\). If \(h_{j}\) is convex on \(\mathcal{E}_{j}\) for each \(j=1,2,\ldots,m\), then \(J_{\partial h_{t},1}\) is \(\alpha\)-fne with constant \(\alpha_{h}=1/2\) and no violation on \(\mathcal{E}\)._
**Proposition 13** (regularity of gradient descent).: _Let \(f:\,\mathcal{E}\to\mathbb{R}\) be continuously differentiable with blockwise Lipschitz and hypomonotone gradient, that is \(f\) satisfies_
\[\forall j=1,2,\ldots,m,\ \exists L_{j}>0: \sum_{j=1}^{m}\left\|\nabla_{x_{j}}f(x)-\nabla_{x_{j}}f(y)\right\| ^{2}\leq\sum_{j=1}^{m}L_{j}^{2}\|x_{j}-y_{j}\|^{2}\] \[\forall x,y\in\mathcal{E},\] (74a) _and_ \[\forall j=1,2,\ldots,m,\ \exists\tau_{f_{j}}\geq 0:\sum_{j=1}^{m}- \tau_{f_{j}}\left\|x_{j}-y_{j}\right\|^{2}\leq\sum_{j=1}^{m}\left\langle\nabla _{x_{j}}f(x)-\nabla_{x_{j}}f(y),\ x_{j}-y_{j}\right\rangle\] \[\forall x,y\in\mathcal{E}. \tag{74b}\]
_Then the gradient descent mapping with blockwise heterogeneous step lengths defined by \(T_{GD}:=\mathrm{Id}-\bigoplus_{j=1}^{m}t_{j}\nabla_{x_{j}}f\) is \(a\alpha\)-fne on \(\mathcal{E}\) with violation at most_
\[\epsilon_{GD}=\max_{j}\left\{2t_{j}\tau_{j}+\frac{t_{j}^{2}L_{j}^{2}}{\overline {\alpha}}\right\}<1,\quad\text{ with constant }\quad\overline{\alpha}=\max_{j}\left\{\alpha_{j}\right\}\] (75a) _whenever the blockwise steps \[t_{j}\in\left(0,\frac{\overline{\alpha}\sqrt{\tau_{j}^{2}+L_{j}^{2}}-\overline {\alpha}\tau_{j}}{L_{j}^{2}}\right). \tag{75b}\]
_If \(f\) is convex then, with global step size \(t_{j}=t<\frac{2\overline{\alpha}}{\overline{L}}\) (\(j=1,2,\ldots,m\)) for \(\overline{\alpha}\in(0,1)\) with \(\overline{L}=\max_{j}\left\{L_{j}\right\}\), the gradient descent mapping \(T_{GD}\) is \(\alpha\)-fne with constant \(\overline{\alpha}\) (no violation)._
Proof.: By [27, Proposition 2.1], the claim holds if and only if \(\mathrm{Id}-\frac{1}{\overline{\alpha}}\bigoplus_{j=1}^{m}t_{j}\nabla_{x_{j}}f\) is almost nonexpansive on \(\mathcal{E}\) with violation at most
\[\epsilon^{\prime}=\epsilon_{GD}/\overline{\alpha}=\tfrac{1}{\overline{\alpha} }\max_{j}\left\{2t_{j}\tau_{j}+\tfrac{t_{j}^{2}L_{j}^{2}}{\overline{\alpha}} \right\}. \tag{76}\]
To see this latter property, since \(f\) satisfies (74) we have
\[\left\|\left(x-\tfrac{1}{\overline{\alpha}}\bigoplus_{j=1}^{m}t_{j}\nabla_{x_{j}}f(x)\right)-\left(y-\tfrac{1}{\overline{\alpha}}\bigoplus_{j=1}^{m}t_{j}\nabla_{x_{j}}f\left(y\right)\right)\right\|^{2}\] \[\qquad=\left\|x-y\right\|^{2}-\tfrac{2}{\overline{\alpha}}\sum_{j=1}^{m}t_{j}\left\langle x_{j}-y_{j},\ \nabla_{x_{j}}f(x)-\nabla_{x_{j}}f\left(y\right)\right\rangle+\tfrac{1}{\overline{\alpha}^{2}}\sum_{j=1}^{m}t_{j}^{2}\left\|\nabla_{x_{j}}f(x)-\nabla_{x_{j}}f\left(y\right)\right\|^{2}\] \[\qquad\leq\left\|x-y\right\|^{2}+\tfrac{2}{\overline{\alpha}}\sum_{j=1}^{m}t_{j}\tau_{j}\left\|x_{j}-y_{j}\right\|^{2}+\tfrac{1}{\overline{\alpha}^{2}}\sum_{j=1}^{m}t_{j}^{2}L_{j}^{2}\left\|x_{j}-y_{j}\right\|^{2}\] \[\qquad\leq\left(1+\tfrac{1}{\overline{\alpha}}\max_{j}\left\{2t_{j}\tau_{j}+\tfrac{t_{j}^{2}L_{j}^{2}}{\overline{\alpha}}\right\}\right)\left\|x-y\right\|^{2} \tag{77}\]
for all \(x,y\in\mathcal{E}\). A simple calculation shows that the violation does not exceed \(1\) whenever the step \(t_{j}\) is bounded by (75b). This proves the result for the nonconvex setting.
If \(f\) is convex, then [21, Proposition 3.4] shows that a different bound on the steps is possible. Note that by [3, Corollaire 10]
\[\tfrac{1}{\overline{L}}\|\nabla f(x)-\nabla f(y)\|^{2}\leq\langle\nabla f(x)-\nabla f(y),\ x-y\rangle\quad\forall x,y\in\mathcal{E}.\]
Let \(\overline{\alpha}=\max_{j}\{\alpha_{j}\}\) with \(\alpha_{j}\in(0,1)\) and \(\overline{L}=\max_{j}\{L_{j}\}\). For \(t=\tfrac{2\overline{\alpha}}{\overline{L}}\) we have \(2t=\tfrac{t^{2}\overline{L}}{\overline{\alpha}}\) and
\[\tfrac{t^{2}\overline{L}}{\overline{\alpha}}\tfrac{1}{\overline{L}}\|\nabla f(x)-\nabla f(y)\|^{2}\leq 2t\,\langle\nabla f(x)-\nabla f(y),\ x-y\rangle\quad\forall x,y\in\mathcal{E},\]

that is, \(\tfrac{t^{2}}{\overline{\alpha}^{2}}\|\nabla f(x)-\nabla f(y)\|^{2}\leq\tfrac{2t}{\overline{\alpha}}\langle\nabla f(x)-\nabla f(y),\ x-y\rangle\), and hence

\[\left\|\left(x-\tfrac{t}{\overline{\alpha}}\nabla f(x)\right)-\left(y-\tfrac{t}{\overline{\alpha}}\nabla f(y)\right)\right\|^{2}=\|x-y\|^{2}-\tfrac{2t}{\overline{\alpha}}\left\langle\nabla f(x)-\nabla f(y),\ x-y\right\rangle+\tfrac{t^{2}}{\overline{\alpha}^{2}}\|\nabla f(x)-\nabla f(y)\|^{2}\leq\|x-y\|^{2}\]

for all \(x,y\in\mathcal{E}\). In other words, \(\operatorname{Id}-\tfrac{t}{\overline{\alpha}}\nabla f\) is nonexpansive on \(\mathcal{E}\), so by [27, Proposition 2.1] the mapping \(T_{GD}=\operatorname{Id}-t\nabla f\) is \(\alpha\)-fne with constant \(\overline{\alpha}\) and no violation; by convexity of \(f\), the same holds for any step \(t\leq\tfrac{2\overline{\alpha}}{\overline{L}}\), which establishes the claim in the convex case.
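As a quick sanity check of the step-size condition (75b), with purely illustrative constants (none of the numbers below come from the paper): taking \(\tau_{j}=1\), \(L_{j}=10\) and \(\overline{\alpha}=1/2\), (75b) allows \(t_{j}<\big(\tfrac12\sqrt{101}-\tfrac12\big)/100\approx 0.045\), and the choice \(t_{j}=0.01\) gives a violation \(\epsilon_{GD}\leq 2t_{j}\tau_{j}+t_{j}^{2}L_{j}^{2}/\overline{\alpha}=0.04\) in (75a). The hedged sketch below evaluates these quantities and applies the blockwise gradient mapping \(T_{GD}\) with heterogeneous step lengths to a toy quadratic.

```python
import numpy as np

# Illustrative constants (not taken from the paper): blockwise hypomonotonicity
# and Lipschitz constants tau_j, L_j, and the target constant alpha_bar.
tau = np.array([1.0, 1.0])            # tau_j as in (74b)
L = np.array([10.0, 10.0])            # L_j as in (74a)
alpha_bar = 0.5

# Step-size bound (75b) and resulting violation bound (75a).
t_max = (alpha_bar * np.sqrt(tau ** 2 + L ** 2) - alpha_bar * tau) / L ** 2
t = np.full_like(tau, 0.01)           # any choice with 0 < t_j < t_max works
assert np.all(t < t_max)
eps_GD = np.max(2 * t * tau + t ** 2 * L ** 2 / alpha_bar)
print("t_max =", t_max, "eps_GD =", eps_GD)   # t_max ~ 0.045, eps_GD = 0.04

# Blockwise gradient descent mapping T_GD = Id - (+)_j t_j grad_{x_j} f
# for an illustrative smooth quadratic f(x) = 0.5 * x^T Q x.
block_slices = [slice(0, 2), slice(2, 4)]
Q = np.diag([1.0, 2.0, 5.0, 10.0])

def T_GD(x):
    g = Q @ x                          # gradient of f at x
    x_new = x.copy()
    for j, s in enumerate(block_slices):
        x_new[s] = x[s] - t[j] * g[s]  # heterogeneous blockwise steps
    return x_new

x = np.ones(4)
for _ in range(500):
    x = T_GD(x)
print("x after 500 blockwise GD steps:", np.round(x, 4))
```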
**Proposition 14** (regularity of the partial blockwise splitting mappings).: _Let \(T_{i}^{FB}\) and \(T_{i}^{DR}\) (\(i\in\mathbb{I}\)) be the partial blockwise forward-backward and Douglas-Rachford mappings defined by (17) and (19), respectively. The following hold._

1. _Fully nonconvex__._ _For all_ \(j\in\{1,2,\ldots,m\}\) _let_ \(f:\,G\to\mathbb{R}\) _be subdifferentially regular with subdifferential satisfying (_72_) and let_ \(h_{j}:\,G_{j}\to(-\infty,+\infty]\,\) _be proper, l.s.c., and subdifferentially regular satisfying (_73_)._ 1. _The partial blockwise Douglas-Rachford mapping_ \(T_{i}^{DR}\) _defined by (_19_) (_\(j\in M_{i}\)_) is_ \(a\alpha\)_-fne on_ \(G_{M_{i}}\bigoplus\{z\}_{M_{i}^{o}}\) _for any fixed_ \(z_{M_{i}^{o}}\in G_{M_{i}^{o}}\) _with respective constant and violation_ \[\alpha_{DR}=\frac{2}{3},\text{ and }\quad\epsilon_{DR}\leq\tau_{f}+\tau_{h}+\tau_{f}\tau_{h}\] (78) _where_ \(\tau_{h}:=\max_{j}\{\tau_{h_{j}}\}\) _and_ \(\tau_{f}:=\max_{j}\{\tau_{f_{j}}\}\)_._ 2. _If_ \(f\) _is continuously differentiable on_ \(\mathcal{E}\) _and satisfies (_74_), the partial blockwise forward-backward mapping_ \(T_{i}^{FB}\) _defined by (_17_) with step lengths_ \(t_{j}\) _satisfying (_75b_) (_\(j\in M_{i}\)_) is_ \(a\alpha\)_-fne on affine subspaces_ \(G_{M_{i}}\bigoplus\{z\}_{M_{i}^{o}}\) _for any fixed_ \(z_{M_{i}^{o}}\in G_{M_{i}^{o}}\) _with respective constant and violation_ \[\alpha_{FB}:=\frac{2}{1+\frac{1}{\max\{\frac{1}{2},\,\overline{\alpha}\}}},\text{ and }\quad\epsilon_{FB}\leq\epsilon_{GD}+\tau_{h}+\epsilon_{GD}\tau_{h}\] (79) _where_ \(\overline{\alpha}:=\max_{j}\{\alpha_{j}\},\,\tau_{h}:=\max_{j}\{\tau_{h_{j}}\}\) _and_ \(\epsilon_{GD}\) _is no larger than (_75a_)._
2. _Partially nonconvex__._ _For all_ \(j\in\{1,2,\ldots,m\}\) _let_ \(f:\,\mathcal{E}\to\mathbb{R}\) _be continuously differentiable with gradient satisfying (_74_) and let the functions_ \(h_{j}\) _be convex on_ \(G_{j}\) _(_\(j=1,2,\ldots,m\)_). Then for all_ \(i\in\mathbb{I}\)_,_ \(T_{i}^{FB}\) _is_ \(a\alpha\)_-fne on_ \(G_{M_{i}}\bigoplus\{z\}_{M_{i}^{o}}\) _for any_ \(z_{M_{i}^{o}}\in G_{M_{i}^{o}}\) _with constant_ \(\alpha_{FB}\) _given by (_79_), violation_ \(\epsilon_{FB}\) _at most_ \(\epsilon_{GD}\)_, and this can be made arbitrarily small by choosing the step lengths_ \(t_{i}\) _small enough._
3. _Convex__._ _If_ \(f\) _and_ \(h_{j}\) _are convex on_ \(\mathcal{E}\) _(_\(j=1,2,\ldots,m\)_), then_ 1. \(T_{i}^{DR}\) _is_ \(\alpha\)_-fne on_ \(\mathcal{E}_{M_{i}}\bigoplus\{z\}_{M_{i}^{o}}\) _with constant_ \(\alpha_{DR}=2/3\) _and no violation;_ 2. _if_ \(f\) _is continuously differentiable and_ \(\nabla f\) _satisfies (_74a_),_ \(T_{i}^{FB}\) _with global step size_ \(t<\frac{2\overline{\alpha}}{\overline{L}}\) _for_ \(\overline{L}=\max_{j}\{L_{j}\}\) _is_ \(\alpha\)_-fne on_ \(\mathcal{E}_{M_{i}}\bigoplus\{z\}_{M_{i}^{o}}\) _with constant_ \(\alpha_{FB}\) _given by (_79_) and no violation._
Proof.: Part (i). By Theorem 1, the respective regularity constants \(\epsilon_{i}\) and \(\alpha_{i}\) are bounded above by the respective constants of \(T_{1}^{FB}\) and \(T_{1}^{DR}\), which are the mappings including all of the blocks. It suffices, then, to determine the regularity of \(T_{1}^{FB}\) and \(T_{1}^{DR}\). Part (ia). By Proposition 11 and Corollary 12, \(J_{\partial f_{t},1}\) and \(J_{\partial h_{t},1}\) are a\(\alpha\)-fne with constant \(1/2\) and violation \(\tau_{f}=\max_{j}\{\tau_{f_{j}}\}\) (respectively \(\tau_{h}=\max_{j}\{\tau_{h_{j}}\}\)) on \(G\). Then by [27, Proposition 2.4], \(T_{1}^{DR}\) is a\(\alpha\)-fne with constant \(\alpha_{DR}=2/3\) and (maximal) violation given by (78) on \(G\).
Part (ib). By Proposition 13, \(T_{GD}\) is a\(\alpha\)-fne on \(G\) with violation \(\epsilon_{GD}\) no larger than (75a) and constant \(\overline{\alpha}=\max_{j}\{\alpha_{j}\}\). By Corollary 12, \(J_{\partial h_{t},1}\) is a\(\alpha\)-fne with constant \(1/2\) and violation \(\tau_{h}=\max_{j}\{\tau_{h_{j}}\}\) on \(\mathcal{E}\). Then by [27, Proposition 2.4/Proposition 3.7], \(T_{1}^{FB}\) is a\(\alpha\)-fne with constant \(\alpha_{FB}\) and (maximal) violation given by (79) on \(G\).
Parts (ii)-(iii) follow immediately from part (i) and Propositions 11-13.
**Corollary 15**.: _For \(G\subset\mathcal{E}\), let \(\Phi:\,G\times\mathbb{I}\to G\,\) be the update function given by \(\Phi(x,i)=T_{i}x\) where \(T_{i}\) is either \(T_{i}^{FB}\) or \(T_{i}^{DR}\) defined respectively by (17) and (19)._
1. _Fully nonconvex__._ _Under the assumptions of Proposition_ 14_\((\mathrm{i})\)_, that is both_ \(f\) _and_ \(h_{j}\) _in (_17_) and (_19_) are nonconvex, the corresponding update function_ \(\Phi(x,i)\) _is_ \(a\alpha\)_-fne in expectation with respect to the weighted norm_ \(\|\cdot\|_{\mathbf{p}}\) _with regularity constants_ \(\overline{p}\,\epsilon_{DR}\) _and_ \(\alpha_{DR}=2/3\) _(respectively_ \(\overline{p}\,\epsilon_{FB}\) _and_ \(\alpha_{FB}\)_) corresponding to (_78_) (respectively (_79_))._
2. _Partially nonconvex__._ _Under the assumptions of Proposition_ 14_\((\mathrm{ii})\)_, that is_ \(f\) _smooth nonconvex with Lipschitz and hypomonotone gradient and_ \(h_{j}\) _convex in (_17_),_ \(\Phi(x,i)=T_{i}^{FB}(x)\) _is_ \(a\alpha\)_-fine in expectation with respect to the weighted norm_ \(\|\cdot\|_{\mathbf{p}}\) _with constant_ \(\alpha_{FB}\) _as above and violation at most_ \(\overline{p}\epsilon_{GD}\) _with_ \(\epsilon_{GD}\) _given by (_75a_); this violation can be made arbitrarily small by choosing the step lengths_ \(t_{i}\) _small enough._
3. _Convex__._ _If both_ \(f\) _and_ \(h_{j}\) _are convex on_ \(\mathcal{E}\) _(_\(j=1,2,\ldots,m\)_), then_ \(T_{i}^{DR}(x)\) _is_ \(\alpha\)_-fne in expectation with respect to the weighted norm_ \(\|\cdot\|_{\mathbf{p}}\) _(no violation) and constant_ \(\alpha_{DR}=2/3\) _on_ \(\mathcal{E}\)_. In the case of_ \(T^{FB}\)_, if_ \(\nabla f\) _satisfies (_74a_) and the global step size is bounded by_ \(t<\frac{2\overline{\alpha}}{\overline{L}}\) _for_ \(\overline{L}=\max_{j}\{L_{j}\}\)_,_ \(T_{i}^{FB}(x)\) _is_ \(\alpha\)_-fne in expectation with respect to the weighted norm_ \(\|\cdot\|_{\mathbf{p}}\) _(no violation) and constant_ \(\alpha_{FB}\) _on_ \(\mathcal{E}\)_._
Proof.: This is an immediate consequence of Proposition 14 and Theorem 1.
Before presenting the convergence results it is worthwhile pointing out that the partial blockwise forward-backward and Douglas-Rachford mappings \(T_{i}^{FB}\) and \(T_{i}^{DR}\) have common fixed points, and these are critical points of (1). In other words, the stochastic fixed point problem is consistent. As shown in Section 4.2.1, in this case the metric subregularity condition (55) simplifies to (65) when \(\varPsi(\mu)=0\) if and only if \(\mu\in\operatorname{inv}\mathcal{P}\) and \(\operatorname{supp}\mu\subset C\). In the convex setting we have the following correspondence between invariant measures of the stochastic block iterations and minima of (1).
**Proposition 16**.: _Let \(\mathcal{P}\) be the Markov operator associated with either Algorithm 2 or 3. In the setting of Lemma 10, if \(f\) and \(g_{j}\) (for all \(j=1,\ldots,m\)) are convex, then \(\operatorname{inv}\mathcal{P}=\{\pi\mid\operatorname{supp}\pi\subset C\,\}\) and whenever \(x\in C\) then almost surely \(x\in\operatorname{argmin}\left\{f+\sum_{j=1}^{m}g_{j}\right\}\)._
Proof.: When \(f\) and \(g_{j}\) (for all \(j=1,\ldots,m\)) are convex, the corresponding mappings \(T_{i}\) defined by either (17) or (19) are single-valued self-mappings on \(\mathcal{E}\) and \(\alpha\)-fne on \(\mathcal{E}_{M_{i}}\oplus\{z\}_{M_{i}^{\circ}}\) for every \(z\in M_{i}^{\circ}\) as long as the step size \(t_{i}\) is small enough ([27, Propositions 3.7 and 3.10] specialized to the convex case). Then by Corollary 4 the mappings \(T_{i}\) are paracontractions in expectation on \(\mathcal{E}\). The claim then follows from Theorem 5(ii) and Lemma 10 since in this case \(\operatorname{crit}\left(f+\sum_{j=1}^{m}g_{j}\right)=\operatorname{argmin} \left\{f+\sum_{j=1}^{m}g_{j}\right\}\).
The final result of this study collects all of these facts in the context of the Markov chain underlying Algorithms 2 and 3.
**Proposition 17**.: _Let \(\mathcal{P}\) be the Markov operator associated with the S-BFBS Algorithm 2 or the S-BDRS Algorithm 3 and let \(\left(\mu_{k}\right)_{k\in\mathbb{N}}\) be the corresponding sequence of measures initialized by any \(\mu_{0}\in\mathscr{P}_{2}(G)\), where \(G\) is a closed subset of \(\mathcal{E}\). Assume that \(G\supset\operatorname{crit}\left(f+\sum_{j=1}^{m}g_{j}\right)\neq\emptyset\) and \(\operatorname{inv}\mathcal{P}=\mathscr{C}\) defined by (47). Let \(\varPsi\) given by (41) be such that \(\varPsi(\mu)=0\) if and only if \(\mu\in\operatorname{inv}\mathcal{P}\). Additionally, let the mappings \(T_{i}\) be self-mappings on \(G\) where \(\varPsi\) satisfies (65) with gauge \(\rho\) given by (57) with \(\tau=(1-\alpha_{*})/\alpha_{*}\), \(\epsilon=\overline{\nu}\epsilon_{*}\) for constants \(\alpha_{*}\) and violation \(\epsilon_{*}\) given by either (79) or (78) (depending on the algorithm), and \(\theta_{\tau,\epsilon}\) satisfying (56)._
1. _Fully nonconvex._ _Under the assumptions of Proposition_ 14_(i), _the sequence_ \((\mu_{k})\) _satisfies_ \[d_{W_{2,\mathbf{p}}}\left(\mu_{k+1},\operatorname{inv}\mathcal{P}\right)\leq \theta_{\tau,\epsilon}\left(d_{W_{2,\mathbf{p}}}\left(\mu_{k},\operatorname{ inv}\mathcal{P}\right)\right)\quad\forall k\in\mathbb{N}.\] _If_ \(\tau\) _and_ \(\epsilon\) _are such that at least one of the conditions in Assumption_ 2 _holds, then_ \(\mu_{k}\to\pi^{\mu_{0}}\in\operatorname{inv}\mathcal{P}\cap\mathscr{P}_{2}(G)\) _in the_ \(d_{W_{2,\mathbf{p}}}\) _metric with rate_ \(O\left(\theta_{\tau,\epsilon}^{(k)}(t_{0})\right)\) _in the case of Assumption_ 2_(a) _where_ \(t_{0}=d_{W_{2,\mathbf{p}}}\left(\mu^{0},\operatorname{inv}\mathcal{P}\right)\)_, and with rate_ \(O(s_{k}(t_{0}))\) _for_ \(s_{k}(t):=\sum_{j=k}^{\infty}\theta_{\tau,\epsilon}^{(j)}(t)\) _in the case of Assumption_ 2_(b)_. Moreover,_ \(\operatorname{supp}\pi^{\mu_{0}}\subset C:=\{x\in G\mid\mathbb{P}\left(T_{ \xi}x=x\right)=1\}\)_._
2. _Partially nonconvex._ _Under the assumptions of Proposition_ 14_(ii), _that is_ \(f\) _smooth nonconvex with Lipschitz and hypomonotone gradient and_ \(h_{j}\) _convex in (_17_), if there exist_ \(\tau\) _and_ \(\epsilon\) _such that at least one of the conditions in Assumption_ 2 _holds, then for all step lengths_ \(t_{i}\) _small enough in (_17_) and any initial distribution_ \(\mu_{0}\) _close enough to_ \(\operatorname{inv}\mathcal{P}\)_, the sequence_ \(\mu_{k}\to\pi^{\mu_{0}}\in\operatorname{inv}\mathcal{P}\cap\mathscr{P}_{2}(G)\) _in the_ \(d_{W_{2,\mathbf{p}}}\) _metric with rate at least_ \(O\left(\theta_{\tau,\epsilon}^{(k)}(t_{0})\right)\) _in the case of Assumption_ 2_(a)_, and with rate at least_ \(O(s_{k}(t_{0}))\) _in the case of Assumption_ 2_(b)_; moreover,_ \(\operatorname{supp}\pi^{\mu_{0}}\subset C\)_._
3. _Convex._ _If_ \(f\) _and_ \(h_{j}\) _are convex on_ \(\mathcal{E}\)_, and there exists_ \(\tau\) _such that at least one of the conditions in Assumption_ 2 _holds when_ \(\epsilon=0\)_, then the sequence_ \((\mu_{k})\) _corresponding to Algorithm_ 3 _initialized from any_ \(\mu_{0}\in\mathscr{P}_{2}(\mathcal{E})\)_, converges in the metric_ \(d_{W_{2,\mathbf{p}}}\) _to an invariant distribution with rate at least_ \(O\left(\theta_{\tau,\epsilon}^{(k)}(t_{0})\right)\) _in the case of Assumption_ 2_(a)_, and with rate at least_ \(O(s_{k}(t_{0}))\) _in the case of Assumption_ 2_(b)_. Moreover_ \(\operatorname{supp}\pi^{\mu_{0}}\subset C:=\operatorname{argmin}\left(f+\sum_{j= 1}^{m}g_{j}\right)\)_. If_ \(f\) _is continuously differentiable and satisfies (_74_a), then the stated convergence in the case of Algorithm_ 2 _holds for the global step length_ \(t<\frac{2\overline{\alpha}}{L}\)_._
## 6 Final Remarks
There are several open technicalities lurking between the lines above, and one rather obvious challenge hiding in plain sight. To the hidden technicalities belongs the question of whether metric subregularity
is necessary for quantitative convergence in some appropriate metric of Markov operators that are not paracontractions in measure. We conjecture that this is true. Another open technical issue concerns the statement of asymptotic regularity in Proposition 6. This result is incomplete without some extension to a weak type of convergence in distribution. For consistent stochastic fixed point problems, if each of the update functions \(T_{i}\) were \(\alpha\)-fne, then almost sure weak convergence of the iterates is guaranteed [18, Theorem 3.9]; at issue here is whether this holds when \(T_{i}\) is pointwise \(\alpha\)-fne _in expectation_ at invariant measures of the corresponding Markov operator. We expect that there should be a counterexample to this claim. Characterization of the supports of invariant measures in the inconsistent case is quite challenging and essential for meaningfully connecting the limiting distributions of the algorithms to solutions to the underlying optimization problem. Finally, the restriction of the study to single-valued mappings does not allow one to capture the full extent of behavior one sees with nonconvex problems. Projection methods for sparse affine feasibility, for instance, have the property that the projection onto a sparsity constraint can be multi-valued on all neighborhoods of a solution (see [17, Lemma III.2]). An extension of the analysis presented here to multi-valued mappings is required.
The most difficult challenge to all of this is the task of numerically monitoring convergence in distribution of random variables. To do this completely one needs first of all efficient means for computing the Wasserstein distance between measures; in other words, one needs to solve optimal transport problems efficiently. Again, for consistent stochastic feasibility when convergence of the iterates can be guaranteed almost surely, optimal transport is not needed; more generally, however, this machinery is essential. Secondly, one needs to numerically estimate the distributions whose distances are to be computed. These are significant challenges worthy of attention.
## 7 Funding and/or Conflicts of interests/Competing interests
This work was supported in part by a grant from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 432680300 - SFB 1456. The manuscript has not been submitted to any other journal for simultaneous consideration. The author has no financial or non-financial interests that are directly or indirectly related to the work submitted for publication.
|
2305.00596 | Investigating Vision Foundational Models for Tactile Representation
Learning | Tactile representation learning (TRL) equips robots with the ability to
leverage touch information, boosting performance in tasks such as environment
perception and object manipulation. However, the heterogeneity of tactile
sensors results in many sensor- and task-specific learning approaches. This
limits the efficacy of existing tactile datasets, and the subsequent
generalisability of any learning outcome. In this work, we investigate the
applicability of vision foundational models to sensor-agnostic TRL, via a
simple yet effective transformation technique to feed the heterogeneous sensor
readouts into the model. Our approach recasts TRL as a computer vision (CV)
problem, which permits the application of various CV techniques for tackling
TRL-specific challenges. We evaluate our approach on multiple benchmark tasks,
using datasets collected from four different tactile sensors. Empirically, we
demonstrate significant improvements in task performance, model robustness, as
well as cross-sensor and cross-task knowledge transferability with limited data
requirements. | Ben Zandonati, Ruohan Wang, Ruihan Gao, Yan Wu | 2023-04-30T22:38:44Z | http://arxiv.org/abs/2305.00596v1 | # Investigating Vision Foundational Models
###### Abstract
Tactile representation learning (TRL) equips robots with the ability to leverage touch information, boosting performance in tasks such as environment perception and object manipulation. However, the heterogeneity of tactile sensors results in many sensor- and task-specific learning approaches. This limits the efficacy of existing tactile datasets, and the subsequent generalisability of any learning outcome. In this work, we investigate the applicability of vision foundational models to sensor-agnostic TRL, via a simple yet effective transformation technique to feed the heterogeneous sensor readouts into the model. Our approach recasts TRL as a computer vision (CV) problem, which permits the application of various CV techniques for tackling TRL-specific challenges. We evaluate our approach on multiple benchmark tasks, using datasets collected from four different tactile sensors. Empirically, we demonstrate significant improvements in task performance, model robustness, as well as cross-sensor and cross-task knowledge transferability with limited data requirements.
## I Introduction
The sense of touch allows humans to feel, understand and ultimately manipulate through physical interaction. It is vital for exploration, object discrimination and fine-grained control, especially where visual perception lacks the resolution to detect surface changes, or is denied entirely. Inspired by the human sense of touch, robotic tactile learning has improved performance in tasks such as object/environment recognition [1, 2], pick-and-place [3] and in-hand manipulation [4].
Tactile representation learning (TRL) leverages machine learning (ML) to make sense of the rich data generated by specialized tactile sensors. Design choices such as sampling resolution, operating conditions and cost result in different tactile sensors adopting distinct sensing mechanisms (e.g. visual signals [5] and barometric signals [6]). Ideally, TRL should be sensor-agnostic, accommodating various data formats of different sensors and able to construct consistent representations of objects and environments. In practice, however, most methods developed are sensor-specific with tailored architectures and data processing routines [e.g. 5, 7, 8, 9].
This siloed approach has multiple limitations. First, individual tactile datasets are usually small due to the high cost of data collection. The tactile representations derived from such small datasets often generalize less well, especially for out-of-distribution data [e.g., 8, 10]. Even calibration differences and expected wear from regular usage present domain shifts detrimental to model performance. Furthermore, the lack of a unifying data format for different tactile sensors makes it difficult to reuse knowledge captured in learned representations. For a new sensor design, the accompanying tactile representation model has to be learned from scratch, along with expensive data collection. All these limit the effectiveness and efficiency of TRL.
The above limitations are further highlighted when we contrast TRL with other application domains like computer vision (CV) and natural language processing (NLP). Both CV and NLP benefit from a unifying input format (images and text respectively), which permits fully shared model architectures for convenient knowledge transfer. In particular, foundational models [11] are trained on massive datasets such as ImageNet [12] and CommonCrawl [13] to derive general representational knowledge, which can be specialized to diverse downstream tasks, such as semantic segmentation [14] in CV, and sentiment analysis [15] in NLP. Foundational models improve learning efficiency and model robustness of downstream tasks, especially for limited training data [15].
Biologically, the human somatosensory system shares similar neural mechanisms with the visual cortex responsible for processing spatial features [16]. This implies that tactile properties such as texture are largely descriptions of surface spatial properties [17], motivating the question of whether _a vision foundational model could be exploited to tackle the aforementioned challenges in TRL_. Specifically, we investigate the following:
* Can vision models be agnostic to data from heterogeneous tactile sensors?
* Can vision foundational models improve model performance and robustness for TRL?
* Can vision architecture facilitate efficient knowledge transfer between downstream learning tasks and models trained on different sensor data?
In this work, we present a unified approach to address the above questions. We first present the use of _tactile images_ as a simple unifying data format for heterogeneous tactile sensory outputs, to encode them as spatial features. This recasts TRL as a vision task, but with different input image sizes for different sensors. We adopt convolutional models [18] as the fully shared architecture for all sensors, exploiting convolution's agnosticity to image sizes.
The above construct enables efficient knowledge transfer in multiple ways. First, we show that a foundational vision model pre-trained on natural images can be directly applied to tactile
learning tasks by simply performing least square regression to the last layer, providing evidence on the connection between visual and tactile perception in a non-biological system. Second, the foundational model can also be fine-tuned into tactile representation models with improved performance and robustness. In particular, we leverage data augmentation to counteract the limited tactile data during fine-tuning. Lastly, we demonstrate that the fine-tuned tactile representation model retains general features to allow cross-task and cross-sensor transfer.
To evaluate our proposed approach, we consider multiple benchmark tasks including standard material classification, continual learning for material classification and detection of fabric composition. We specifically test on data collected from four different sensors, with different data collection procedures, to demonstrate the general applicability of our approach.
**Contributions.** Our key contributions are summarized below:
* We extensively investigate the feasibility, effectiveness, efficiency and robustness of using a vision foundational model for TRL. We use tactile images as a unified model input transformed from any tactile sensor.
* We introduce a new evaluation benchmark for tactile learning, namely fabric composition detection.
* We contribute two new tactile datasets, including a material classification dataset using the GelSight sensor and a fabric composition dataset using the Contactile sensor.
* Empirically, we demonstrate that our proposed approach learns robust models for all sensors evaluated and outperforms baseline models tailored to specific sensors.
## II Preliminaries and Related Work
We present three task settings to support the comprehensive evaluation of our proposed approach. The first two tasks are standard benchmarks for TRL while the third is a novel fabric composition detection task. We also review relevant works.
### _Tactile Representation Learning Tasks_
**Material Classification.** This is a common benchmark for TRL [e.g. 8, 19, 20, 21, 22, 23, 24]. Similar to image classification, material classification determines the source material measured by a tactile sensor, from a finite number of classes. For example, early research involved classification of the textural information gathered via sliding an electret microphone across the surface of materials [25]. The task remains a standard benchmark amid the rapid development of different sensor designs.
A natural extension to standard material classification investigates the learned model's robustness to out-of-distribution data. This includes varying data length and the moving speed of the tactile sensor (as controlled by a robot). For example, [26] achieved improved robustness to the sensor's movement speed via additional sensing modalities. [8] also proposed a customized spiking neural network to reduce the data length needed for classification.
**Continual Learning for New Materials.** For real-world applications, robots are expected to continuously learn and adapt to novel environments. This also applies to TRL and was investigated in [27, 28], where robots learn new objects continuously by touch. In this work, we similarly extend material classification to the continual learning (CL) [29] setting. Formally, let \(\mathrm{D}=\{B_{1},B_{2},\ldots,B_{T}\}\) be a data sequence with \(B_{t}\) denoting the data for material \(t\). We wish to design a CL algorithm \(\text{Alg}(\cdot)\) in the form of
\[(f_{t},\ M_{t})=\text{Alg}(B_{t},f_{t-1},M_{t-1}), \tag{1}\]
where \(f_{t}\) is the current classification model after learning the novel material \(t\). \(f_{t}\) should be capable of classifying all materials observed so far (i.e., \(B_{1}\) through \(B_{t}\)). A small memory buffer \(M\) is allowed to store data about previous materials to mitigate model forgetting. \(M_{t}\) denotes the current content of the memory buffer.
Intuitively, the CL algorithm \(\text{Alg}(\cdot)\) must learn each material sequentially. It also cannot access training data for previous materials except for those stored in the memory buffer. The algorithm is thus forced to learn new materials on the fly without forgetting its existing knowledge. In contrast, standard material classification learns all materials in \(\mathrm{D}\) concurrently and with unlimited access to all data. CL thus represents a more challenging and realistic benchmark.
**Fabric Composition Detection.** We introduce a new evaluation benchmark for TRL. Concretely, we design a fine-grained fabric composition detection task, in which the learned tactile model must predict the constituents of a specific fabric material, instead of simply identifying it. This task serves as a more challenging benchmark compared to standard material classification. It also allows us to investigate knowledge transfer between sensors and tasks (e.g., from material classification to constituents detection). We will describe the new dataset collected for this task in Sec. III.
### _Existing Methods_
There exists a wide range of tactile sensor designs leveraging various sensing modalities, including strain gauges [24], piezo-resistive layers [30], accelerometers [31], capacitive [32], optical [5, 33] and those combining multiple sensing mechanisms [34, 35]. Most tactile learning methods tailor their respective model architectures and learning algorithms to the specific sensors used [e.g., 8, 23, 24, 36]. These existing approaches learn sensor-specific mappings from raw sensor output to some latent representation, and adjust the model size based on the size of the sensor output. These tailored decisions inevitably lead to a siloed state for TRL: the developed models cannot be easily reused for different sensors, even when the desired ML task remains identical.
[10] partially addresses the above issues by learning a shared latent representation for two different sensors. This approach demonstrates improved performance compared to independently learning each sensor's data. However, it must still learn sensor-specific mappings from raw data to the shared representation, thus limiting its reuse potential for additional
sensors. In contrast, our proposed approach standardises the transformation to map any raw sensor data to tactile images, to be processed by a fully shared ML model. As we will demonstrate in our experiments, our approach grants more flexibility towards knowledge transfer.
## III Sensors and Datasets
We present the sensors and the associated datasets considered in this work. They are intended to validate the general applicability of our approach, and to contextualize the challenge posed by heterogeneous sensors. Each dataset is used for one or more learning tasks described in Sec. II-A.
**RoboSkin.** RoboSkin is a capacitive sensor designed for iCub [32]. Taunyazov et al. [36] collected a material classification dataset using the RoboSkin on the iCub robot forearm, sweeping across multiple materials without strict control of velocity and exerted forces. This public dataset contains 20 different materials with 50 samples in each class. Each sample contains 75 sensor readings.
**BioTac.** SynTouch BioTac(r) is a multi-modal tactile sensor using fluid pressure sensor and thermistor [37]. Gao et al. [10] released a material classification dataset using the BioTac sensor fitted as an extended end-effector on a KUKA LBR iiwa 14 robot arm, sliding laterally across different materials with controlled speed and contact force. BioTac-20 dataset contains the same 20 materials as the RoboSkin dataset with 50 samples in each class. Each sample contains 400 readings. A larger BioTac-50 dataset was later released.
We contribute two new datasets using alternative sensors. We will release both datasets publicly to support future research in the community.
**GelSight.** GelSight is a camera-based sensor producing images of the contact surface, showing surface geometry and deformation with a soft elastomer [5]. Each reading is an image of \(480\times 640\). A material classification dataset consists of 45 materials with 50 samples in each class. As the elastomer is vulnerable to abrasion from sliding motion, data is collected by rolling the sensor locally on material surfaces. The sensor, mounted on a KUKA LBR iiwa 14 robot arm, touches the material surface from above with a 1N force threshold. The sensor is then rotated clockwise by 1 degree, anticlockwise by 2 degrees, and finally clockwise by 1 degree back to the centre position (illustrated in Fig. 0(a)).
**Contactile.** The Contactile(r) sensor uses a soft, silicone array based on PapillArray [7]. The sensor measures deflection, force and vibration. We collect the data using two protocols. Protocol 1 is identical to that of the BioTac dataset. In Protocol 2, the sensor is handheld and slid across materials casually with different contact forces, speeds and along different directions, to mimic more realistic and natural movements. The dataset contains samples collected from 32 fabrics, each consisting of up to 6 possible constituent materials: Linen, Viscose, Cotton, Wool, Polyester and Elastane (see Tab. I for examples). 40 and 10 samples per material are collected for Protocols 1 and 2 respectively. The collection setup is illustrated in Figs. 0(b) and 0(c).
## IV Method
We present a unified approach to tackle heterogeneous sensors and efficient knowledge transfer in TRL. Our approach relies on a unifying format for different sensor data, and exploits convolution's agnosticity to input size to enable fully shared models. These fully shared models in turn enable convenient knowledge transfer. We also discuss data augmentations to counteract limited tactile training data. Lastly, we discuss a continual tactile learning approach as a direct application of knowledge transfer.
### _Tactile Images and Convolutional Architectures_
We use simple transformations to convert data generated by various sensors into 2D images, which serve as the unified input format for the subsequent ML models. Specifically, tactile images aim to transform tactile sensory output into an encoding of the global geometry of the contact surface. This transformation is inspired by the processing similarities between the human visual cortex and somatosensory system [16], and captures the intuition that significant tactile properties are fundamentally spatial [17].
Camera-based sensors such as GelSight directly capture _global_ surface geometry as images and can be used as model
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
 & \multicolumn{5}{c}{**\% by mass**} \\
**Material** & Linen & Viscose & Cotton & Wool & Polyester \\ \hline \hline
Cotton-Linen & 45 & 0 & 55 & 0 & 0 \\ \hline \hline
\end{tabular}
\end{table} TABLE I: Examples of fabrics and their constituent compositions.
input directly. However, non-camera-based sensors typically have sparse sensing points that only produce _localized_ signals about the contact surface. To better encode the global surface geometry, we thus require more local samples that span across the contact surface. This could be conveniently achieved by concatenating consecutive vectors from the tactile data stream, as the sensor slides over the contact surface. Formally, let \(S=\{s_{1},s_{2},\ldots,s_{T}\}\) be the data stream produced by a sensor sliding across a surface, where \(s_{t}\in\mathbb{R}^{n}\) is a single reading from the sensor. We define a tactile image as a matrix \(\text{Im}(S)=[s_{j},s_{j+1},\ldots,s_{k}]\) for some constants \(j,k\). Intuitively, \(Im(S)\) leverages the temporal dimension of the tactile data stream to better encode global surface properties (see also Fig. 2 for an illustration).
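A minimal sketch of this construction, assuming each reading \(s_t\) is available as a NumPy vector; the function name and the window indices are illustrative rather than taken from the paper's code.

```python
import numpy as np

def tactile_image(stream, j, k):
    """Build Im(S) = [s_j, s_{j+1}, ..., s_k] by stacking consecutive sensor
    readings as columns, so the second axis encodes the sweep over the surface."""
    return np.stack(stream[j:k + 1], axis=1)  # shape: (sensor_dim, k - j + 1)

# e.g. 400 consecutive 19-dimensional BioTac readings -> a 19 x 400 tactile image
```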
We note that tactile images of different sensors still have different dimensions. To achieve fully shared models for knowledge transfer, we thus adopt convolutional architectures such as ResNet [38], since convolution does not require a fixed input size. ResNet is also a representative state-of-the-art model for processing spatial input, including the surface geometry encoded in tactile images.
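A quick sanity check of this property, assuming torchvision's ResNet-18: because the network ends in global average pooling, the same weights accept tactile images of different sizes.

```python
import torch
from torchvision.models import resnet18

model = resnet18()
biotac = torch.randn(1, 3, 19, 400)    # a BioTac-sized tactile image
roboskin = torch.randn(1, 3, 60, 75)   # a RoboSkin-sized tactile image
# The global average pooling before the classifier makes both inputs valid.
print(model(biotac).shape, model(roboskin).shape)  # torch.Size([1, 1000]) twice
```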
### _Model Training_
With tactile images and our chosen model architecture, we effectively recast TRL as a vision task. For training, we minimise the empirical cross-entropy loss
\[\operatorname*{arg\,min}_{f}\sum_{(x,y)\in\text{D}}\ell_{ce}(f(x),y) \tag{2}\]
where \(f\) is the model and \(\ell_{ce}\) is the cross-entropy loss. D denotes the dataset containing labeled tactile images \((x,y)\).
Crucially, we can initialize \(f\) with a pre-trained model to enable knowledge transfer. In particular, we may interpret TRL as a downstream task for a vision foundational model on general spatial features. In our experiments, we will demonstrate that a foundational model trained on natural images already robustly encodes the general features required for tactile images.
**Data Augmentation.** As discussed earlier, tactile datasets are typically small due to the high cost of data collection due the interactivity of the modality and significant wear and tear. Data augmentation is therefore important to mitigate model overfitting, especially for larger architectures like ResNet. We propose to directly apply standard CV augmentations: _resizing, cropping, flipping_ and _jittering_. We observe that each of these augmentations encodes a meaningful variation to the data collection process, even for non-camera-based sensors. For instance, cropping the tactile images encodes varying the duration of robot motion during data collection. Tab. II lists all chosen augmentations and their interpretation.
The chosen augmentations are readily accessible from common deep learning frameworks [39] and may be directly applied. We will demonstrate empirically that the augmentations is crucial to model robustness.
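One possible realization of these augmentations with torchvision transforms, applied to (3, H, W) tactile-image tensors; the output size shown is BioTac-sized and every parameter value below is an assumption on our part.

```python
from torchvision import transforms

# Tactile-image augmentations mirroring Tab. II (parameter values are assumptions).
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                  # flipping
    transforms.RandomResizedCrop((19, 400), scale=(0.5, 1.0),
                                 ratio=(15.0, 25.0)),         # cropping + resizing (temporal axis)
    transforms.ColorJitter(brightness=0.1, contrast=0.1),     # jittering (sensor noise and drift)
])
```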
### _Continual Tactile Learning_
As robots are increasingly expected to work in unstructured environments, continual learning of unordered new percepts is important. Sec. II-A introduced continual learning (CL) of new materials as a natural extension to standard material classification. The two key challenges for CL are: 1) whether robots could learn about new materials on the fly, and 2) continuous learning does not cause catastrophic forgetting of current knowledge [40, 41].
We adopt schedule-robust online continual learning (SCROLL) [42] to tackle CL of new materials. We choose SCROLL because the method leverages pre-trained models for efficient knowledge transfer, thus allowing new materials to be learned with limited interaction. In addition, SCROLL is robust to the schedule under which the data is presented (e.g., the order in which each material is learned), a crucial property to ensure model reliability in real-world situations.
Using the notations introduced in Eq. (1), we characterize SCROLL as a two-phase process. Given a suitable pre-trained embedding model \(\psi\), we first learn an online linear classifier \(\phi_{t}\) via recursive least squares [43] as novel material data \(B_{t}\) is observed. We then fine-tune the composite model \(f_{t}=\psi\circ\phi_{t}\) using the current memory buffer \(M_{t}\) to yield \(f_{t}^{*}\). Both \(f_{t}\) and \(f_{t}^{*}\) are valid CL models for all data observed so far, with \(f_{t}^{*}\) having a fine-tuned representation based on the observed data.
\begin{table}
\begin{tabular}{l l} \hline \hline
**Augmentation Technique** & **Physical Interpretation** \\ \hline \hline Flipping (along data axis) & Reverses the direction of robot motion. \\ Resizing (along temporal axis) & Varies the speed of robot motion. \\ Cropping (along temporal axis) & Varies the duration of robot motion. \\ Jittering & Simulates sensor noise and drift. \\ \hline \hline \end{tabular}
\end{table} TABLE II: Tactile image augmentations and their physical interpretations
Fig. 3: Tactile image representations for the BioTac, RoboSkin and GelSight sensors for two material classes.
Fig. 2: Tactile Image processing for non-camera-based sensors.
SCROLL uses exemplar selection [44] for updating \(M_{t}\). The overall algorithm is presented in Alg. 1.
```
Initialization: Buffer \(M_{0}=\varnothing\), data statistics \(c^{0}_{y}=0\), \(A_{0}=0\)
Input: Embedding model \(\psi\), next data batch \(B_{t}\), current buffer \(M_{t-1}\),
       current data statistics \(c^{t-1}_{y}\), \(A_{t-1}\)
  \(c^{t}_{y},A_{t}=\text{RecursiveLeastSquare}(c^{t-1}_{y},A_{t-1})\)
  \(\phi_{t}=\text{RidgeRegressor}(c^{t}_{y},A_{t})\)
  \(f_{t}=\phi_{t}\circ\psi\)
  \(M_{t}=\text{SelectExemplar}(M_{t-1},B_{t},\psi)\)
  \(f^{\prime}_{t}=\text{FineTune}(f_{t},M_{t})\)
Return \(c^{t}_{y},A_{t},M_{t},f_{t}\) and \(f^{\prime}_{t}\)
```
**Algorithm 1** SCROLL (incremental)
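A minimal sketch of how the online classifier in Alg. 1 might be maintained, assuming embeddings from the frozen model \(\psi\) are available as NumPy arrays; the class name, the ridge parameter, and the per-class bookkeeping are our assumptions rather than the SCROLL reference implementation.

```python
import numpy as np

class RecursiveRidgeClassifier:
    """One way to realize the RecursiveLeastSquare/RidgeRegressor steps of Alg. 1:
    keep running statistics of frozen embeddings so each new material batch is
    absorbed incrementally, without revisiting earlier data."""

    def __init__(self, feat_dim, num_classes, reg=1e-3):
        self.A = np.zeros((feat_dim, feat_dim))     # A_t: second-moment matrix
        self.C = np.zeros((feat_dim, num_classes))  # stacked per-class sums c_y^t
        self.reg = reg

    def update(self, Z, y):
        """Absorb a batch of embeddings Z (n x d) with integer labels y."""
        self.A += Z.T @ Z
        for cls in np.unique(y):
            self.C[:, cls] += Z[y == cls].sum(axis=0)

    def predict(self, Z):
        """Closed-form ridge solution phi_t, applied to new embeddings."""
        d = self.A.shape[0]
        W = np.linalg.solve(self.A + self.reg * np.eye(d), self.C)
        return (Z @ W).argmax(axis=1)
```

Fine-tuning the composite model on the memory buffer \(M_t\) (the FineTune step) then proceeds with the same cross-entropy objective used for standard training.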
## V Experiments
We evaluate our approach extensively across a wide variety of sensors and tasks, as introduced in Sec. II and III. Our experiments address the following questions:
* Is our approach generally applicable to heterogeneous tasks and sensors? How does our approach compared to sensor-specific methods?
* What are the effects of tactile image augmentation?
* Does our approach allow efficient knowledge transfer? What are the effects of knowledge transfer?
**Data Pre-Processing.** Following Sec. IV-A, we transform BioTac data into \(19\times 400\) images by stacking 400 consecutive vectors. This corresponds to 4 seconds of data. RoboSkin data is transformed into \(60\times 75\) images, corresponding to 1.5 seconds of data. Lastly, Contactile data is transformed into \(27\times 599\) images, which is 6 seconds of data. We note that the exact size for the temporal dimension is not crucial, since we will also leverage random cropping and resizing along the temporal dimension for data augmentation. Since these tactile images only have a single channel, the channel is repeated three times to match the input dimension for the vision foundational model used in the experiments. All tactile images and GelSight data are normalized to the range of \([-1,1]\).
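A minimal sketch of this final pre-processing step, assuming per-image min-max scaling (the exact normalization scheme is not specified above, so this is an assumption) and NumPy arrays.

```python
import numpy as np

def to_network_input(tactile_img):
    """Scale a single-channel tactile image to [-1, 1] and repeat it across
    three channels so it matches the pre-trained model's expected input."""
    lo, hi = tactile_img.min(), tactile_img.max()
    scaled = 2.0 * (tactile_img - lo) / (hi - lo + 1e-8) - 1.0
    return np.repeat(scaled[None, ...], 3, axis=0)  # shape: (3, H, W)

# e.g. a 27 x 599 Contactile tactile image -> an input array of shape (3, 27, 599)
```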
**Model Architecture and Pre-training.** We choose a ResNet-\(18\) pre-trained on MetaDataset [45] as our foundational vision model. It is chosen for its balanced accuracy and computational efficiency. We emphasize that other foundational models may be easily chosen given the trade-off between accuracy and efficiency. We also highlight that all experiments use the _identical_ foundational model without any modification, as our approach allows fully shared models.
### _Standard Material Classification_
We compare our approach with baseline methods on standard material classification using BioTac-20, RoboSkin and GelSight datasets. We highlight that the baselines are specifically tailored to the BioTac or RoboSkin sensors, whilst our model is generic.
**Model Details.** Our model is trained for 100 epochs using stochastic gradient descent (SGD). A validation set is employed to schedule the learning rate, mitigating performance plateaus. An initial learning rate of \(0.01\) is chosen empirically, with a momentum of \(0.9\) and a weight decay of \(0.0001\). 5-fold cross validation is performed for all experiments.
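A minimal PyTorch sketch of this training setup; the data loaders, the plateau-scheduler parameters, and the evaluation loop are our assumptions, not the paper's code.

```python
import torch
from torch import nn, optim
from torchvision.models import resnet18

def train_classifier(train_loader, val_loader, num_classes, pretrained_state=None, epochs=100):
    model = resnet18()
    if pretrained_state is not None:            # start from the foundational vision model
        model.load_state_dict(pretrained_state, strict=False)
    model.fc = nn.Linear(model.fc.in_features, num_classes)

    opt = optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
    # Drop the learning rate when validation accuracy stops improving (values assumed).
    sched = optim.lr_scheduler.ReduceLROnPlateau(opt, mode="max", factor=0.1, patience=5)
    loss_fn = nn.CrossEntropyLoss()             # empirical cross-entropy of Eq. (2)

    for _ in range(epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        model.eval()
        correct = total = 0
        with torch.no_grad():                   # validation accuracy drives the schedule
            for x, y in val_loader:
                correct += (model(x).argmax(1) == y).sum().item()
                total += y.numel()
        sched.step(correct / total)
    return model
```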
**Baseline Methods.** We compare our approach to a diverse set of methods investigated in [8], including a spiking neural network (SNN), LSTM, regular support vector machine (SVM) and spike-encoded SVM (SVM Spike).
Table III reports the classification accuracy for all evaluated methods. Our generic ResNet outperforms the baselines by more than 4%, suggesting the viability of our tactile image approach. In addition, the results clearly show that fine-tuning from the foundational model is more advantageous than random initialization. This indicates positive knowledge transfer from the pre-trained model and improved generalization. This is especially visible for the GelSight dataset owing to the imbalance between the small size of the dataset and the large input dimension.
Pre-training also noticeably improves learning efficiency, as reported in Fig. 4. For both BioTac-20 and RoboSkin datasets, transferring from the foundational model (i.e., with pre-training) achieves higher accuracy with fewer iterations over the training data. Learning efficiency is a desirable property for robots requiring fast adaptation to novel environments.
**Foundational Models and Tactile Images.** To better understand the connection between our foundational model and tactile images, we introduce another baseline in Tab. III denoted by "Least Square". This baseline encodes all tactile images into fixed representations using the pre-trained ResNet, and only learns a least-squares classifier over the fixed representation. The accuracy of this baseline thus directly reflects the usefulness of the pre-trained model towards tactile images. Surprisingly, the results show that the foundational vision model trained from natural images already contains the general features required for tactile texture representation, despite the apparent distributional shift. This provides direct support to the connection between visual and tactile perception, resembling the similarities between the human visual cortex and somatosensory system. The results also provide empirical justification for our choice of tactile images as model input.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Method** & **BioTac-20** & **RoboSkin** & **GelSight** \\ \hline \hline SVM & \(94.2\pm 0.7\) & \(50.5\pm 5.6\) & n.a \\ SVM (spikes) & \(93.5\pm 1.5\) & \(63.3\pm 1.8\) & n.a \\ Conv-LSTM & \(94.5\pm 1.5\) & \(93.5\pm 0.5\) & n.a \\ SNN & \(94.6\pm 1.3\) & \(92.2\pm 0.5\) & n.a \\ \hline Least Square w/ Pre-train & \(93.8\pm 1.2\) & \(84.8\pm 1.3\) & \(67.1\pm 0.8\) \\ ResNet (ours) & \(98.0\pm 0.3\) & \(95.0\pm 0.6\) & \(92.9\pm 0.3\) \\ ResNet w/ Pre-train (ours) & \(\mathbf{98.9\pm 0.2}\) & \(\mathbf{96.0\pm 0.5}\) & \(\mathbf{95.1\pm 0.3}\) \\ \hline \hline \end{tabular}
\end{table} TABLE III: Material Classification Accuracy (%). Numbers for baseline methods are originally reported in [8]. Pre-train denotes initialization with the foundational vision model.
### _Augmentation and Model Robustness_
As noted in Sec. IV-B, data augmentations applied to tactile images may be interpreted as diversifying the conditions of data collection. This is crucial for tactile datasets as they are generally expensive to collect. We investigate the effects of augmentation in the following experiments.
**Robustness to Sampling Length.** For material classification, it is desirable to shorten the sampling length without sacrificing accuracy. This corresponds to classifying randomly cropped tactile images in our formulation. It was also investigated in [8] as a strength of the spiking neural architecture. In Fig. 5, we investigate how random cropping affects classification accuracy over varying data length, and compare our approach to previous methods.
The results clearly show that our model with augmentation outperforms the previous methods, achieving higher test accuracy with less data required. For both datasets, ResNet with augmentation is able to accurately classify the materials with about 0.3 seconds of sensor data. As the data length increases, the test accuracy rapidly increases and remains high, suggesting that our model could efficiently accumulate information over short duration while maintaining robustness over long run. In addition, Fig. 5 shows that augmentation is crucial for robust performance. The same model trained without augmentation performed the worst among all methods, suggesting overfitting to the original data length and less robust features learned.
**Robustness to Movement Speed.** While some tactile datasets are collected under a tightly controlled robot motion, it is preferable that the learned model generalizes to more varied motions. We simulate different speeds of the robot's sliding motion during tactile sensing by sub-sampling the test set data along the temporal axis, and investigate the effects of augmentation on this out-of-distribution test set.
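A sketch of how such speed variation can be simulated, assuming the temporal axis is the last dimension of the tactile image; the function name and the resampling scheme are our assumptions.

```python
import numpy as np

def simulate_speed(tactile_img, speed):
    """Resample the temporal axis as if the robot moved `speed` times faster
    (speed > 1 drops readings; speed < 1 repeats them)."""
    T = tactile_img.shape[-1]
    idx = np.clip(np.round(np.arange(0, T, speed)).astype(int), 0, T - 1)
    return tactile_img[..., idx]

# e.g. simulate_speed(img, 2.0) keeps every other column of a 19 x 400 image.
```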
Fig. 6 shows that the model trained with random resizing augmentation is robust against varying robot speed, achieving consistent accuracy across different movement speeds. In contrast, the model with no augmentation generalized poorly even with slight speed deviation. The figure also shows that random cropping improves model robustness against varying movement speed.
**Robustness to Sensor Noise.** Similar to the previous experiment, we construct another out-of-distribution test set by injecting random sensor noise. Fig. 7 evaluates the effects of augmentations.
Fig. 7 shows that model trained without random jittering augmentation generalizes poorly to noisy data, especially on BioTac dataset. This is due to the BioTac data being collected under a strict condition, including fixed force and movement speed. The model trained on non-augmented BioTac data thus
Fig. 4: Test Accuracy over the first 20 epochs for both BioTac-20 (red) and RoboSkin 20 (blue), with (solid) and without (dashed) pre-training.
Fig. 5: Test accuracies of our approach with and without random cropping augmentations for varying data length. Baseline methods included for comparison.
Fig. 6: The effects of augmentation with respect to varying robot movement speed during tactile sensing. X-axis denotes the multiples of the original robot speed.
Fig. 7: The effect on test accuracy with respect to sensor noise. X-axis denotes maximum noise level added to tactile images.
overfits to the homogeneous data and lacks robustness. In contrast, RoboSkin data contains more diverse samples since it is collected without strict speed or force control. As reflected in Fig. 7, the non-augmented model trained on RoboSkin data is therefore naturally robust to a low level of sensor noise. However, as the noise level increases, the test accuracy of all non-augmented models still deteriorates rapidly.
Fig. 7 also indicates that the model trained with augmentation can tolerate significant sensor noise, with a noise level of 0.5 representing a potential 50% deviation from the intended value range. At this level, the augmented model still retains a test accuracy of 80% for RoboSkin and 73% for BioTac-50. Lastly, we observe that even for the original test set (i.e., noise level = 0), the augmented model still outperforms the non-augmented version, suggesting more robust features learned with augmentation.
Overall, we have demonstrated that standard CV augmentations can be directly applied to tactile images to appreciably boost model robustness in various aspects, including sampling length, movement speed and sensor noise. As several of our experiments relied on simulated test data, we will further demonstrate the usefulness of augmentation with real out-of-distribution data in Sec. V-D.
### _Continual Tactile Representation Learning_
As described in Sec. II-A, we cast material classification in a CL setting, which requires our model to learn each material sequentially. CL enables robots to continuously acquire new tactile experiences, without having to perform expensive retraining from scratch.
**Model Details.** The same foundational vision model is used as the embedding model for Alg. 1. During fine-tuning with memory buffer \(M_{t}\), we adopt data augmentation and a cosine learning schedule [46] to mitigate overfitting. For all experiments, we perform a 5-fold cross-validation.
Fig. 8 shows the CL performance for each dataset over different memory buffer sizes. We report the performance of \(f_{t}\) and the fine-tuned \(f_{t}^{*}\). We also include the test accuracy of standard material classification as a performance reference. Note that \(f_{t}\) obtained via recursive least squares is equivalent to the least-squares baseline discussed in Sec. V-A. Thanks to the foundational vision model, \(f_{t}\) thus guarantees a robust minimum performance level for CL (see red lines in Fig. 8). \(f_{t}^{*}\) is obtained by adapting \(f_{t}\) with the memory buffer. Its performance improves with larger memory buffers, closing the gap with standard material classification. For BioTac and RoboSkin particularly, the CL performance is comparable with standard supervised learning, using a moderate memory buffer of 1500 and 600 respectively. The memory buffer required only represents a fraction of the original datasets, suggesting that our approach also allows efficient and accurate CL of new materials with limited memory requirements.
### _Fabric Composition Detection_
Introduced in Sec. II-A, fabric composition detection involves predicting the presence of six constituent materials, including Linen, Viscose, Cotton, Wool, Polyester and Elastane, in different fabrics. A single model is learned to detect the presence of all constituents concurrently, with one prediction head for each constituent. This task is more challenging than standard material classification, due to the "similar feels" of different fabrics. The physical weave of a fabric also contributes to its feel, adding a potential confounding factor for the task.
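One way to realize such a model, assuming a shared ResNet-18 backbone with a single linear layer producing one logit per constituent and a per-constituent binary cross-entropy loss; the exact head design is not specified above, so this is a sketch under those assumptions.

```python
from torch import nn
from torchvision.models import resnet18

class CompositionDetector(nn.Module):
    """Shared ResNet-18 backbone with one binary logit per constituent material."""
    def __init__(self, num_constituents=6):
        super().__init__()
        self.backbone = resnet18()
        self.backbone.fc = nn.Identity()               # keep the 512-d features
        self.heads = nn.Linear(512, num_constituents)  # one logit per constituent

    def forward(self, x):
        return self.heads(self.backbone(x))            # raw logits

loss_fn = nn.BCEWithLogitsLoss()  # per-constituent binary cross-entropy
```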
For this task, the data is collected using the Contactile sensor. As discussed in Sec. III, we deliberately used two protocols for data collection. The training set is collected using strict force and velocity control while the test set is collected with more natural movements. The test set thus presents a more realistic setting and a clear domain shift with respect to the training data.
**Model Details.** The training procedure is similar to that used for standard material classification. The only change is that the number of training epochs is reduced from 100 to 50. Data augmentations are applied to model training when specified. For evaluation, we consider the average classification score for all constituent materials. For instance, Felt contains Viscose and Wool. The learned model only achieves a score of 1 for
Fig. 8: CL performance across all BioTac-50, RoboSkin-20, and GelSight-45 datasets, for varying buffer sizes. Accuracy from supervised upper bound and ridge regression are shown to illustrate the performance changes associated with adaptation. With increasing memory buffer, CL achieves better test accuracy and narrows the gap against standard supervised learning.
predicting precisely the two constituents. Any false positive or false negative detection will decrease the score by \(\frac{1}{6}\).
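A small helper illustrating this scoring rule; the function name and the set-based encoding of predictions are ours.

```python
CONSTITUENTS = ("Linen", "Viscose", "Cotton", "Wool", "Polyester", "Elastane")

def composition_score(predicted, actual):
    """Each constituent predicted correctly keeps 1/6 of the score;
    every false positive or false negative costs 1/6."""
    errors = sum((c in predicted) != (c in actual) for c in CONSTITUENTS)
    return 1.0 - errors / len(CONSTITUENTS)

# Felt = {Viscose, Wool}; predicting {Viscose, Wool, Polyester} scores 5/6.
print(composition_score({"Viscose", "Wool", "Polyester"}, {"Viscose", "Wool"}))  # ~0.833
```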
Tab. IV shows the average classification score for different model setups. We investigate both knowledge transfer from foundational vision model and model pre-trained on other sensors. We also study the effects of data augmentation.
In Tab. IV, we again leverages least-squares classifier over a fixed representation to quantify the effectiveness of a pre-trained model. We see that directly applying the foundational vision model achieves 74.2%, while applying the BioTac model obtained in Sec. V-A achieves 76.1%. The result is our first demonstration of _successful cross-task and cross-sensor transfer_: the BioTac model trained on standard material classification can be directly applied to Contactile data for fabric composition detection. This result demonstrates the general applicability of our approach, and its ability for robust and flexible knowledge transfer.
Tab. IV further demonstrates the usefulness of data augmentations on real out-of-distribution data, with augmentation contributing over 2% in test accuracy compared to the non-augmented model. The results validate our physical interpretations for the applied augmentations, showing that the augmented model is indeed more robust against more varied motions. From another perspective, we may also leverage the synthetic data produced by augmentation to reduce data collection load. This is important if a robot is only allowed limited (exploratory) interaction with environments. Lastly, we remark that the best model is obtained by combining both knowledge transfer and augmentation, achieving 80.6% in test accuracy.
### _Observations on the Learned Representation_
Results from previous sections suggest robust knowledge transfer across sensors despite the varied sensing mechanisms and data format. We hypothesize that this could be a result of a learned invariant descriptor of the tactile properties of the contact surfaces. Since the processing of texture in the human somatosensory cortex is a relatively lower-level function, we are thus interested in understanding if the lower-level abstraction in the learned model recovers similar latent representation for diverse sensor data.
Fig. 9 shows the feature activations for different sensors using the Deep Dream technique [47]. This qualitative visualization shows the feature activations generated right after the first block for 3 ResNets, each fine-tuned on a separate tactile dataset in standard material classification. All 3 feature activation maps closely resemble one another, suggesting that the learned models indeed recover a consistent representation of tactile properties despite diverse sensing mechanisms. This further supports the knowledge transferability between different sensors and related tasks.
## VI Conclusion
In this work, we presented a foundational model approach to tactile representation learning. In contrast to sensor-specific tactile models, our approach is characterized by a standardized ML pipeline, including a unifying data format for diverse tactile data, fully shared model architecture and learning techniques, all of which are key requirements for foundational models. Further, the experiment results suggest that our approach not only outperforms sensor-specific models, but crucially allows efficient knowledge transfer between models trained on different sensors and tasks, satisfying the remaining property for foundational models. In particular, we demonstrated the connection between visual and tactile perception, showing that foundational vision models trained on natural images can be a readily accessible source of knowledge for tactile representation learning. This also allows us to effectively perform, with the same unified model, downstream tasks which were previously achieved with an array of methods in the literature. We believe that this investigation thus contributes a robust and general approach to tactile representation learning and provides a strong baseline for future research.
\begin{table}
\begin{tabular}{l c} \hline \hline
**Model** & **Test Accuracy Score** \\ \hline \hline Least Squares w/ vision Pre-train & \(74.2\) \\ Least Squares w/ BioTac Pre-Train & \(76.1\) \\ \hline ResNet & \(76.3\) \\ ResNet + Augmentation & \(78.9\) \\ ResNet + Augmentation (BioTac Pre-train) & \(80.6\) \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Fabric Composition Detection Accuracy (%)
Fig. 9: Feature activation after block 1 of ResNet. **(a)** Original feature activation from foundational CV model. **(b)**, **(c)**, **(d)** Feature activation after fine-tuning with specific sensor data. |
2309.16021 | HuntGPT: Integrating Machine Learning-Based Anomaly Detection and
Explainable AI with Large Language Models (LLMs) | Machine learning (ML) is crucial in network anomaly detection for proactive
threat hunting, reducing detection and response times significantly. However,
challenges in model training, maintenance, and frequent false positives impact
its acceptance and reliability. Explainable AI (XAI) attempts to mitigate these
issues, allowing cybersecurity teams to assess AI-generated alerts with
confidence, but has seen limited acceptance from incident responders. Large
Language Models (LLMs) present a solution through discerning patterns in
extensive information and adapting to different functional requirements. We
present HuntGPT, a specialized intrusion detection dashboard applying a Random
Forest classifier using the KDD99 dataset, integrating XAI frameworks like SHAP
and Lime for user-friendly and intuitive model interaction, and combined with a
GPT-3.5 Turbo, it delivers threats in an understandable format. The paper
delves into the system's architecture, components, and technical accuracy,
assessed through Certified Information Security Manager (CISM) Practice Exams,
evaluating response quality across six metrics. The results demonstrate that
conversational agents, supported by LLM and integrated with XAI, provide
robust, explainable, and actionable AI solutions in intrusion detection,
enhancing user understanding and interactive experience. | Tarek Ali, Panos Kostakos | 2023-09-27T20:58:13Z | http://arxiv.org/abs/2309.16021v1 | HuntGPT: Integrating Machine Learning-Based Anomaly Detection and Explainable AI with Large Language Models (LLMs)
###### Abstract
Machine learning (ML) methods for network anomaly detection are emerging as effective proactive strategies in threat hunting, substantially reducing the time required for threat detection and response. However, the challenges in training and maintaining ML models, coupled with frequent false positives, diminish their acceptance and trustworthiness. In response, Explainable AI (XAI) techniques have been introduced to enable cybersecurity operations teams to assess alerts generated by AI systems more confidently. Despite these advancements, XAI tools have encountered limited acceptance from incident responders and have struggled to meet the decision-making needs of both analysts and model maintainers. Large Language Models (LLMs) offer a unique approach to tackling these challenges. Through tuning, LLMs have the ability to discern patterns across vast amounts of information and meet varying functional requirements. In this research, we introduce the development of HuntGPT, a specialized intrusion detection dashboard created to implement a Random Forest classifier trained utilizing the KDD99 dataset. The tool incorporates XAI frameworks like SHAP and Lime, enhancing user-friendliness and intuitiveness of the model. When combined with a GPT-3.5 Turbo conversational agent, HuntGPT aims to deliver detected threats in an easily explainable format, emphasizing user understanding and offering a smooth interactive experience. We investigate the system's comprehensive architecture and its diverse components, assess the prototype's technical accuracy using the Certified Information Security Manager (CISM) Practice Exams, and analyze the quality of response readability across six unique metrics. Our results indicate that conversational agents, underpinned by LLM technology and integrated with XAI, can enable a robust mechanism for generating explainable and actionable AI solutions, especially within the realm of intrusion detection systems.
Intrusion Detection, Security, ChatGPT, XAI, Chatbots, OpenAI, Explainable AI, Security Awareness
## I Introduction
In recent decades, there has been a substantial escalation in cyber-attacks targeting critical and enterprise infrastructure. By 2025, anticipated annual financial damages from these cyber-attacks are projected to reach $10.5 trillion USD, a substantial leap from $3 trillion USD recorded in 2015 [1]. To counteract the evolving cyber threats, the National Institute of Standards and Technology (NIST) introduced a Cybersecurity Framework in 2014. This framework prescribes iterative cybersecurity policies for identification, protection, detection, response, and recovery processes related to cyber incidents [2].
Against this backdrop, human experts play a vital role in analysing extensive telemetry data and Indicators of Compromise (IoC) to isolate real threats [3, 4]. Consequently, building on the foundation laid by the NIST Framework, an extensive ecosystem--comprising tools, methodologies, and techniques--has been established to enable the proactive identification of threats, a process referred to as Cyber Threat Hunting (CTH) [5, 6, 7]. Threat hunting tools enable analysts to apply their specialized knowledge to formulate and test threat hypotheses by analysing system telemetry as well as threat intelligence from external sources [3].
Machine learning-based anomaly detection tools are particularly noteworthy, designed to uncover both known and unknown threats. Generally, network anomalies are categorized into performance-related, such as file server failures and transient congestion, and security-related anomalies like Denial of Service attacks, spoofing, and network intrusions [8]. Evidently, the infusion of machine learning into CTH tools has notably increased the incidence of false positives in real-world operational environments [9].
Explainable Artificial Intelligence (XAI) is the focus of several proposed conceptual enhancements to existing cybersecurity frameworks, aiming to address the challenges brought forth by the integration of machine learning [10]. A pivotal advancement in this realm is the evolution of _Cybertrust_ frameworks, accentuating the necessity to integrate explainable, interpretable, and actionable AI in cybersecurity operations [11]. Nevertheless, the swift advancements in the domain may lead to information overload for incident responders and ML model maintainers, potentially resulting in a sluggish adoption rate [12].
Large Language Models (LLMs), driving the rapid development of autonomous agents, show significant potential to transform the landscape of cybersecurity. Their capability to seamlessly integrate diverse AI tasks and adapt to various use cases positions them as versatile solutions that could boost the adoption of XAI as well as drive down operational
costs. Specifically, Large Language Models, and conversational agents in particular, have showcased outstanding capabilities in promoting applications for actionable AI, which are vital in providing response suggestions to threat responders.
In this paper, we introduce a novel prototype, HuntGPT, aimed at integrating actionable, interpretable, and explainable AI in cybersecurity operations. The prototype is designed to perform analysis on network traffic, utilizing a Random Forest classifier as the anomaly detection model. This model, trained utilizing the KDD99 dataset [13], is deployed to classify the acquired packets systematically. We utilize explainability frameworks like SHAP and LIME, in tandem with a conversational agent powered by the Large Language Model API from OpenAI, predominantly known to users as ChatGPT. The prototype undergoes evaluation for technical accuracy using the Certified Information Security Manager (CISM) practice exams [14] and is appraised across six different metrics to gauge the quality of response readability.
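A minimal sketch of such a pipeline, assuming scikit-learn, shap, and the legacy openai Python client; the feature handling, prompt wording, and helper functions are ours, not the prototype's implementation (which is described in Section III).

```python
import shap
import openai
from sklearn.ensemble import RandomForestClassifier

def build_detector(X_train, y_train):
    """Train the anomaly classifier and attach a SHAP explainer to it."""
    clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    explainer = shap.TreeExplainer(clf)
    return clf, explainer

def explain_alert(clf, explainer, x, feature_names):
    """Turn one flagged connection record into an analyst-friendly explanation."""
    label = clf.predict([x])[0]
    cls_idx = list(clf.classes_).index(label)
    # Classic shap returns one attribution array per class for tree ensembles.
    contrib = explainer.shap_values([x])[cls_idx][0]
    top = sorted(zip(feature_names, contrib), key=lambda t: -abs(t[1]))[:5]
    prompt = (f"An IDS flagged a connection as '{label}'. "
              f"Top contributing features (SHAP values): {top}. "
              "Explain this alert in plain language and suggest response steps.")
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo",
                                         messages=[{"role": "user", "content": prompt}])
    return reply["choices"][0]["message"]["content"]
```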
The remainder of the paper is structured as follows: Section II reviews relevant literature, summarizing findings from related studies. Section III details the system's architecture and development process. Section IV presents the results and findings of our research. Finally, in Section V, we provide summaries, draw conclusions, and offer suggestions for future research directions.
## II Background
Adopting efficient cybersecurity strategies is challenging for small and medium-sized enterprises (SMEs) due to constraints such as limited budgets, a lack of skilled personnel, and insufficient time allocated to cybersecurity planning [15]. Table I illustrates the typical expenses, totaling $1,635,000, required to sustain a medium-sized Security Operations Center (SOC) team, accounting for both personnel and infrastructure. Effectively, this cost analysis sheds light on the challenges faced by smaller organizations and emphasizes the importance of accessible and cost-efficient cybersecurity solutions. The remainder of the section explores three key enabling technology areas contributing to the advancements in cybersecurity operations.
#### II-1 Network anomaly detection
The purpose of an anomaly detection mechanism is to analyze, understand, and characterize network traffic behavior, as well as to identify or classify abnormal traffic instances, such as malicious attempts, apart from normal instances. Thus, from a machine learning perspective, the anomaly detection problem is a classification problem [17]. Over the years, detection systems have experienced considerable evolution, resulting in the development of diverse approaches and deployment methods, including those in fifth-generation (5G) communication networks and decentralised architectures.
Several Machine Learning (ML) techniques have been extensively applied in the domain of network anomaly detection, encompassing both supervised and unsupervised algorithms. Yihunie et al. [18] reviewed five representative algorithms: Stochastic Gradient Descent, Random Forests, Logistic Regression, Support Vector Machine, and Sequential Model, applying them to the NSL-KDD dataset. The empirical results from their study indicated that the Random Forest Classifier surpassed the other examined classifiers in terms of performance.
Eltanbouly et al. [19] introduced a hybrid system that combines the Random Forest and K-means algorithms. The proposed system is bifurcated into two distinct phases. The initial phase, known as the online phase, focuses on misuse detection by leveraging the Random Forest algorithm, followed by the offline phase, which categorizes random attacks through the use of the weighted K-means algorithm. Similarly, Zhao et al. [20] proposed a Multi-Task Deep Neural Network in Federated Learning (MT-DNN-FL) to simultaneously detect network anomalies, recognize VPN (Tor) traffic, and classify traffic, while preserving data confidentiality. Experimental results on representative datasets demonstrated superior performance in detection and classification compared to several baseline methods in a centralized training architecture.
Preuveneers et al. [21] proposed a blockchain-based federated learning method, allowing for the auditing of model updates without centralizing training data, thus providing enhanced transparency and accountability in detecting malicious behavior. The experiments show that while integrating blockchain increases complexity, the impact on performance is minimal (varying between 5 and 15%), and the method is adaptable to more sophisticated neural network architectures and diverse use cases. Notably, the adoption of federated self-learning for anomaly detection and threat hunting is a rising trend in IoT devices, focusing on enhancing detection accuracy while prioritizing data privacy [22, 23, 24].
Lately, the widespread adoption of 5G networks has increased the demand for the development of automated Intrusion Detection Systems, leading to a boost in specialized research in this domain [25]. Sheikhi et al. [26] focused on employing a federated learning-based method to identify DDoS attacks on the GTP protocol within a 5G core network. This approach capitalizes on the collective intelligence of various devices to proficiently and confidentially recognize DDoS attacks. While ML models exhibit strong performance in controlled settings, their efficacy in real-world environments is perceived as a significant barrier to their adoption [9]. In the
following section, we will discuss various explainability methods that have been proposed as potential solutions to overcome the barriers associated with the real-world performance of ML models.
#### II-2 Explainable AI
Explainable artificial intelligence (XAI) refers to advanced techniques that aim to make the results of machine learning models understandable and more transparent to users. These techniques enable the practical deployment of ML models: they help ensure that a trained model is trustworthy by detecting biases in the model or in the corresponding training data, and they increase transparency by providing explanations of which input features had the most impact on the output of the model. For many critical applications in defense, medicine, finance, and law, explanations are essential for users to understand, trust, and effectively manage these new, artificially intelligent partners [27].
Recent research in Explainable Artificial Intelligence (XAI) has been actively applied to cybersecurity, particularly in specialized use cases like intrusion detection and malware identification [28]. Nguyen et al. developed GEE [29], a framework for detecting and explaining anomalies in network flow traffic. GEE comprises two components. The first one consists of an unsupervised Variational Autoencoder (VAE) model for detecting network anomalies. The second one is a gradient-based fingerprinting technique for explaining the detected anomalies in the VAE. The evaluation shows that their approach is effective in detecting different anomalies as well as identifying fingerprints that are good representations of these attacks.
Han et al. developed the DeepAID framework [30] to interpret unsupervised DL-based anomaly detection systems for cybersecurity. The approach helps security analysts understand why a certain sample is considered anomalous by searching for the difference between the anomaly and a normal reference data point. Additionally, they propose a model distiller that serves as an extension to the black-box DL model using a simpler and easier-to-understand finite-state machine model that allows analysts to get involved in the model decision-making process. While XAI holds promise in enhancing the adoption of ML models within existing cybersecurity frameworks, there are still several challenges and considerations to be addressed.
Nyre-Yu et al. [12] conducted a pilot study within an operational environment to assess an Explainable Artificial Intelligence (XAI) tool, focusing on insights gleaned from real-time interactions between cybersecurity analysts and XAI. The initial findings disclosed that, contrary to the goal of fostering trust and improving efficiency through XAI tools, their actual deployment was restricted and failed to considerably improve the accuracy of the decisions made by the analysts. Similarly, a recent systematic review of XAI research for cybersecurity [31] highlighted that many XAI applications are crafted without a thorough understanding of their integration into analyst workflows. Moreover, security literature frequently fails to differentiate between diverse use cases or clearly separate the roles of model users and designers, which could potentially lead to diminished adoption.
#### II-3 Chatbots for security & ChatGPT
Conversational agents, often referred to as chatbots, have gained attention for their role in supporting cybersecurity within businesses through sharing network and security information with non-technical staff [32].
The security-focused chatbot introduced in [33], named SecBot, demonstrates the role of a conversational agent for the support of cybersecurity planning and management. SecBot applies concepts of neural networks and Natural Language Processing (NLP) to interact and extract information from a conversation to (a) identify cyberattacks, (b) indicate solutions and configurations, and (c) provide insightful information for decisions on cybersecurity investments and risks.
Another notable advancement in Natural Language Understanding (NLU) with proven success in cybersecurity is the development of Generative Pre-trained Transformers (GPT) language models.
These models can operate as standalone tools; for example, the application of GPT in formulating cybersecurity policies, as demonstrated in McIntosh et al. [34], helps deter and mitigate the impact of ransomware attacks involving data exfiltration. The results of the study indicated that, in specific scenarios, policies generated by GPT could surpass those created by humans, especially when supplied with customized input prompts. Similarly, Setianto et al. [35] developed a run-time system, GPT-2C, that utilizes a fine-tuned GPT-2 model to parse logs from a live Cowrie SSH honeypot effectively, achieving 89% inference accuracy in parsing Unix commands with minimal execution latency.
Furthermore, research emphasis has been placed on exploring the potential threats posed by Large Language Models (LLMs) like OpenAI's ChatGPT and Bard, particularly regarding their abilities to facilitate cyberattacks [36, 37]. Nonetheless, limited research has been conducted on the potential integration of LLMs into cyber hunting interfaces. Bringing together conversational agents and GPT offers new opportunities for delivering knowledge and insights to non-professionals. Preliminary research and trials from other domains provide insight into this potential. For instance, [38] examines the viability of leveraging Explainable AI (XAI) and language models like ChatGPT to transform how financial knowledge is conveyed to those outside the financial sector. The findings suggest that ChatGPT holds significant promise in demystifying intricate financial principles for a broader audience.
Finally, prompt engineering [39], an emerging research area, is showing promise as a potential solution for some of the shortcomings of LLMs since simple and effective prompts have been proven to improve GPT-3's reliability. With appropriate prompts, GPT-3 is more reliable than smaller-scale supervised models on all these facets: generalizability, social biases, calibration, and factuality [40]. However, evaluating the performance of Large Language Models across diverse domains presents an ongoing challenge. Several research studies have examined ChatGPT's performance in diverse
fields. Gilson et al. [41] investigated its performance on the United States Medical Licensing Examination, highlighting the ability of ChatGPT to provide logical and informative context in the majority of its responses. Notably, when ChatGPT delivers accurate and up-to-date information, users report higher satisfaction levels and make more informed decisions [42]. Therefore, understanding and quantifying the accuracy of the data provided in the cybersecurity domain is of high importance.
## III Architecture
Our system is organised into three distinct layers, each designed to fulfill specific functions and ensure optimal performance, as illustrated in Figure 1. Below, we outline the elements and functions of each layer, clarifying their roles and interactions within the overall system architecture.
1. Analytics engine: This layer is responsible for performing the network packet analysis: it examines network data, identifying and processing anomalies and irregularities within the network flow.
2. Data Storage: We leverage Elasticsearch as our primary document storage, prized for its near real-time search capabilities, scalability, and reliability. It houses all detected network anomalies and the corresponding original flow data. For storing plots and images, we use Amazon S3 buckets, which guarantee security and accessibility.
3. User Interface (UI): The dashboard UI, constructed with Gradio, is the interactive front-end of our system, presenting the analytic engine's outcomes to human analysts in a user-friendly manner. It is integrated with OpenAI's Language Model API, facilitating seamless interactions between analysts and the system for ongoing discussions and analysis.
We chose this configuration due to its modular design, placing a premium on the segregation of components. This allows each layer to be developed, maintained, and enhanced independently, providing flexibility and promoting efficient scalability. The autonomous nature of each module ensures streamlined adaptability to evolving requirements, reinforcing the robustness and responsiveness of the system.
### _Component Diagram_
In this section, we narrow our focus to provide more detailed insights into the architecture of our system, engaging in a concise discussion on each primary component. Figure 2 depicts a comprehensive illustration of the diverse components comprising our system, which include i) the Anomaly Detection Application Server and ii) the Intrusion Detection System Dashboard.
#### III-A1 Anomaly Detection Application Server
The Anomaly Detection Application Server serves as the central orchestrator of the entire anomaly detection process. This component integrates several sub-modules, listed below and illustrated afterwards with a short code sketch, to facilitate the efficient and accurate identification of anomalous network behavior.
1. **ML Model Loader:** The first step in our anomaly detection pipeline is the loading of our pre-trained machine learning model. The model was extensively trained on the KDD99 dataset [13], which is a widely used benchmark dataset in the field of intrusion detection and network security. We use the model to assess incoming data points against learned patterns. We also load the data into our engine, where it can be used by the explainability frameworks (SHAP and LIME) to provide interpretable explanations for predictions.
2. **Elasticsearch Connector:** Enabling seamless communication with Elasticsearch, the connector module handles authentication and index management, ensuring secure access to the Elasticsearch cluster. This component handles establishing a secure connection and creating and verifying indices, ultimately enabling efficient storage of information pertaining to detected and original packets.
3. **Prediction**: The prediction component analyzes individual network flows to determine the presence of anomalies.
4. **Explainer**: This component generates prediction factors, plots, and JSON documents. The explainer indexes the generated data into Elasticsearch, constructing a structured repository that facilitates efficient querying and exploration of detected and original network flows. To augment the interpretability of our findings, the explainer component uploads plot images to an AWS S3 bucket. These plots enhance our understanding of the model dynamics.
5. **Elasticsearch**: Elasticsearch plays a pivotal role in storing and organizing information. Our system leverages Elasticsearch to manage both the "Detected Packets Index" and the "Original Packets Index," optimizing data accessibility and analysis.
6. **AWS S3 Bucket:** Serving as a centralized repository for our visual resources housing the uploaded plots.
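Taken together, these sub-modules form a load-predict-explain-index pipeline. The following minimal Python sketch illustrates one way such a pipeline could be wired; the model path, index name, credentials, and stored field names are illustrative assumptions rather than HuntGPT's exact configuration.

```python
import joblib
import numpy as np
import shap
from elasticsearch import Elasticsearch

# Placeholder paths/endpoints -- illustrative assumptions, not the prototype's actual settings.
MODEL_PATH = "models/rf_kdd99.joblib"
DETECTED_INDEX = "detected-packets"

# ML Model Loader: load the pre-trained Random Forest and build a SHAP explainer for it.
model = joblib.load(MODEL_PATH)
explainer = shap.TreeExplainer(model)

# Elasticsearch Connector: authenticate and make sure the target index exists (client v8 style).
es = Elasticsearch("https://localhost:9200", basic_auth=("elastic", "changeme"))
if not es.indices.exists(index=DETECTED_INDEX):
    es.indices.create(index=DETECTED_INDEX)

def detect_and_explain(flow_features, flow_id):
    """Prediction + Explainer: classify one network flow and index an explainable record."""
    x = np.asarray(flow_features, dtype=float).reshape(1, -1)
    label = model.predict(x)[0]

    # SHAP contributions show which features drove the prediction; for multi-class models
    # shap_values() returns one array per class, so keep the predicted class only.
    shap_values = explainer.shap_values(x)
    if isinstance(shap_values, list):
        contrib = shap_values[list(model.classes_).index(label)]
    else:
        contrib = shap_values
    doc = {
        "flow_id": flow_id,
        "prediction": str(label),
        "feature_contributions": np.ravel(contrib)[: x.shape[1]].tolist(),
    }
    es.index(index=DETECTED_INDEX, document=doc)
    return label
```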
#### III-A2 Intrusion Detection System Dashboard
Our IDS Dashboard enhances trust in the AI system by revealing model insights and contributing features. It lets users examine original data packets for manual anomaly inspection. AI-generated explanations further clarify predictions and suggest an appropriate course of action. The dashboard also facilitates interactive
Fig. 1: High level diagram of dashboard integration.
discussions with the AI assistant, offering custom insights into the model's operations. This component integrates several sub-modules to facilitate its work in presenting explainability and fairness to the end user; the sub-modules are listed below, followed by a short sketch of the assistant's request flow.
1. **OpenAI Connector:** The OpenAI Connector is mainly used for authentication with the OpenAI API and also initializes the prerecorded message history. It also keeps track of user conversations.
2. **Anomaly Packet Data Fetching:** This module looks through the documents in the Elasticsearch index and extracts the relevant information from each document.
3. **OpenAI API Unit:** Integrates the detected packet flow data with the curated fine-tuning prompts and feeds them to the OpenAI API.
4. **AI Assistant Analysis:** The AI Assistant receives all the data preserved in the document, accompanied by the refined prompts from the unit, and it generates a comprehensive analysis for the human agent. This analysis not only reveals the details but also facilitates interactive communication directly with the human agent, enabling a seamless exchange of information and insights.
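A condensed sketch of the request flow across these sub-modules is shown below. The index name, prompt wording, and helper names are illustrative assumptions, and the call uses the chat-completions interface of the pre-1.0 `openai` Python package, which was current for `gpt-3.5-turbo` at the time of writing.

```python
import openai
from elasticsearch import Elasticsearch

# Placeholder credentials/endpoints; openai<1.0 ChatCompletion interface assumed.
openai.api_key = "sk-..."
es = Elasticsearch("https://localhost:9200", basic_auth=("elastic", "changeme"))

SYSTEM_PROMPT = ("You are a SOC assistant. Explain the detected network anomaly in plain "
                 "language and suggest an appropriate course of action.")

def fetch_anomaly(flow_id):
    """Anomaly Packet Data Fetching: pull one detected-packet document from Elasticsearch."""
    hits = es.search(index="detected-packets",
                     query={"term": {"flow_id": flow_id}})["hits"]["hits"]
    return hits[0]["_source"] if hits else None

def assistant_analysis(flow_id, history=None):
    """OpenAI API Unit + AI Assistant Analysis: combine the document with curated prompts."""
    doc = fetch_anomaly(flow_id)
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history or []                       # OpenAI Connector keeps the conversation
    messages.append({"role": "user", "content": f"Detected anomaly record: {doc}"})
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    return response["choices"][0]["message"]["content"]
```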
### _Elasticsearch data schemas_
For telemetry storage, we principally utilize two Elasticsearch indices: detected-packets and original-packets. The "detected-packets" index serves as a structured repository for crucial information related to detected network anomalies. This schema is designed to store the attributes extracted using the two explainability frameworks, SHAP and LIME, each contributing to the comprehensive understanding of identified anomalies.
The schema for the "detected-packets" index is designed to align with the objective of offering insights into the workings of the anomaly detection model, ensuring transparency and a deeper understanding of its mechanisms. It offers a structured means of storing textual predictions, influential factors, and other data, enhancing the dashboard's capabilities to explain anomalies effectively. Table II provides an overview of the "Detected Packets" index, detailing each label and its corresponding description to ensure clarity and a comprehensive understanding of the data structure. The table emphasizes key attributes pivotal for anomaly detection and subsequent analysis.
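As an illustration, a mapping of this shape could be registered with the Elasticsearch Python client. The field names below are representative assumptions based on the description above, not the exact schema of Table II.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", basic_auth=("elastic", "changeme"))

# Hypothetical mapping for the "detected-packets" index: textual prediction, influential
# factors from SHAP/LIME, and a link to the uploaded explanation plot.
detected_packets_mapping = {
    "mappings": {
        "properties": {
            "flow_id":      {"type": "keyword"},
            "prediction":   {"type": "keyword"},
            "confidence":   {"type": "float"},
            "shap_factors": {"type": "object"},
            "lime_factors": {"type": "object"},
            "plot_url":     {"type": "keyword"},
            "detected_at":  {"type": "date"},
        }
    }
}

es.indices.create(index="detected-packets", body=detected_packets_mapping)
```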
The "original-packets" index requires a more complex and dynamic mapping. An in-depth review of each field in the document is superfluous since the KDD99 dataset description [13] already offers comprehensive coverage. The schema for the original packets is crucial, aiding anomaly detection through manual reviews and enhancing trust between the human agent and the Intrusion Detection System (IDS).
### _User Experience & Use Cases_
Collectively, all the elements of the IDS Dashboard we previously discussed assist the user in making well-informed decisions regarding network anomalies by incorporating explanatory visualizations, manual inspection, AI-generated explanations, and interactive conversations. The system is intended to be versatile and can aid the user in multiple scenarios.
First, the Detection Engine offers updated threat identification and classification of incidents, swiftly classifying network threats and allowing security analysts to make informed responses through insightful visualizations and AI-generated explanations available on the Dashboard. Second, the dashboard enables ML model Interpretability, serving as a crucial tool for developers and data scientists to comprehend model functionality and its shortcomings, aiding the creation of more
Fig. 2: System Components Diagram
advanced models. Third, addressing the prevalent concern of soaring security operation costs, our dashboard promotes collaborative analysis and reporting by providing a user-friendly interface suitable for individuals with varied security expertise and equipping users with essential tools, thereby optimizing operational effectiveness and cost efficiency.
## IV Evaluation and Results
### _Prototype Functionality_
The dashboard shown in Figure 3 features a neatly organized layout that seamlessly integrates the multiple components of HuntGPT. In the upper section, the dashboard provides a visual representation of the ML model's inner workings. Moving down, the middle area presents the original data for manual inspection when needed. Lastly, the lower part is the AI assistant that provides context and explainability to the end user and also enables ongoing conversations. The system user can download a complete report of the incident, including all the graphs and data regarding the incident, using the generate-report function. This setup ensures user-friendliness while uniting various functions.
### _Response Quality Analysis_
In this section, we examine how well our prototype functions in explaining detected anomalies and assisting users via its chatbot feature. Assessing the responses of the chatbot, a common practice with conversational agents, posed distinctive challenges in quantifying the requisite metrics for performance appraisal. Our analysis can be segmented into two key components: i) Technical Knowledge in Cybersecurity and ii) Response Evaluation.
We first examine whether ChatGPT (i.e., GPT-3.5 Turbo) possesses the requisite technical knowledge in the field of cybersecurity to effectively assist the user. A critical aspect of this analysis involves comparing ChatGPT's knowledge to that of a certified IT professional. Following this knowledge assessment, we evaluate the responses provided by ChatGPT in terms of their quality and appropriateness. Special attention is paid to the level of difficulty in the answers since our goal is to provide knowledge to users with minimal cybersecurity experience.
#### IV-B1 Technical Knowledge in Cybersecurity
In this assessment, our primary focus is to measure the accuracy and precision of answers generated by GPT-3.5-turbo, the model currently powering the HuntGPT system. To evaluate the model's performance, we conducted tests using a set of standardized certification exams in the field of cybersecurity. The detailed list of the standardized exams used for our assessment is presented below:
1. CISM Certified Information Security Manager Practice Exams: an updated self-study guide written by Peter H. Gregory, featuring hundreds of practice exam questions that match those on the live test [14].
2. ISACA official CISM practice quiz: a free practice quiz including questions from ISACA's test prep solutions that are at the same level of difficulty as ISACA's official CISM exam [43].
3. ISACA official cybersecurity fundamentals practice quiz: a practice quiz including questions from ISACA's test prep solutions that are at the same level of difficulty as ISACA's official Cybersecurity Fundamentals exam [43].
Based on the results presented in Table III, GPT-3.5-turbo demonstrates substantial proficiency in cybersecurity knowledge, with success rates ranging from 72% to 82.5% across varied and reputable exams. However, there is room for improvement, particularly in the ISACA official cybersecurity fundamentals practice quiz where the model achieved a lower success rate of 72%. The challenging CISM certification demands profound knowledge and experience in risk management, illustrated by the fact that only 50-60% of first-time examinees attain success [44]. Nevertheless, preliminary results from our evaluation are highly promising and indicate that the model possesses the capacity to provide well-informed security decisions.
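The scoring procedure amounts to sending each multiple-choice question to the model and comparing its selected option with the answer key. A hedged sketch of such a loop is shown below; the question format, parsing of the model's reply, and the `questions` structure are simplifying assumptions.

```python
import openai

openai.api_key = "sk-..."  # placeholder credential

def ask_multiple_choice(question, options):
    """Ask gpt-3.5-turbo for the letter of the best answer to one exam question."""
    prompt = question + "\n" + "\n".join(f"{letter}. {text}" for letter, text in options.items())
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer with the single letter of the best option."},
            {"role": "user", "content": prompt},
        ],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"].strip()[:1].upper()

def score_exam(questions):
    """questions: list of (question, options_dict, correct_letter); returns accuracy."""
    correct = sum(ask_multiple_choice(q, opts) == answer for q, opts, answer in questions)
    return correct / len(questions)
```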
#### IV-B2 Response Evaluation
We evaluated the responses provided by GPT-3.5-turbo in terms of their quality and appropriateness. Table IV presents the comprehensive findings of our study, where we evaluated 20 different generated explanations
Figure 3: Sections of Detection and Explainability in the Dashboard.
and 20 chat logs. Each chat log included approximately six questions regarding the detected anomaly. An example of a chat log is illustrated in Table V.
We evaluated the responses using six unique readability formulas from the Python _py-readability-metrics_ library [45], each applicable to different fields due to their distinct characteristics. The Flesch-Kincaid Grade Level, adopted by the U.S. Army, is used for assessing the complexity of technical manuals. The Flesch Reading Ease serves as a standard test for the U.S. Department of Defense, employed to determine the readability of its documents and forms. The Dale-Chall readability formula is widely adopted in schools and educational institutes to assess the difficulty level of texts. The Automated Readability Index was initially designed for real-time monitoring of readability on electric typewriters. The Coleman-Liau Index has been specifically used to evaluate the readability of medical documents. Finally, the Linsear Write formula was purportedly developed for the United States Air Force to assess the readability of their technical manuals.
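For reference, the six scores can be computed as in the sketch below; the method names follow the py-readability-metrics API as we understand it, and the input text is a placeholder.

```python
from readability import Readability  # py-readability-metrics

def readability_report(text):
    """Compute the six readability scores used in the evaluation for one generated response.

    Note: py-readability-metrics expects passages of at least 100 words.
    """
    r = Readability(text)
    return {
        "flesch_kincaid_grade": r.flesch_kincaid().score,
        "flesch_reading_ease": r.flesch().score,
        "dale_chall": r.dale_chall().score,
        "automated_readability_index": r.ari().score,
        "coleman_liau": r.coleman_liau().score,
        "linsear_write": r.linsear_write().score,
    }

# Example usage on one chatbot explanation (placeholder variable):
# print(readability_report(generated_explanation))
```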
The readability results in Table IV for both AI explainability and Chatbot logs were evaluated by most formulas as being at the graduate level or equivalent, revealing the presence of complex linguistic structures in the generated texts. While chat conversations can display different readability levels depending on question types and user-required detail, the generated text is usually readable, typically necessitating only a basic, non-specialized college education level for comprehension.
## V Conclusion and Future Work
In conclusion, the study illustrates the efficacy of integrating LLM-based conversational agents with Explainable AI (XAI) within intrusion detection systems. Our prototype, HuntGPT, combines the advanced capabilities of GPT-3.5-turbo with a user-friendly dashboard and adeptly elucidates the latent details of detected anomalies. The model demonstrated substantial proficiency in cybersecurity knowledge, as evident from its success rates ranging between 72% and 82.5% on various reputable standardized certification exams in cybersecurity. However, these results also highlight areas for enhancement, mainly focusing on improving proficiency in fundamental cybersecurity concepts where the model achieved a lower success rate.
The extensive readability analysis of generated responses indicates that the content produced is generally comprehensible for individuals with a basic college education level, fostering user understanding and interaction. The implemented conversational agent effectively generates actionable responses and promotes user engagement by communicating complex concepts, thereby making a substantial contribution to enhancing user comprehension in cybersecurity.
This research contributes valuable insights into integrating advanced AI models within interactive, user-focused applications in cybersecurity. The achieved success rates and the readability of generated responses emphasize the potential of implementing such integrated models in real-world applications, providing a solid foundation for developing more sophisticated, explainable, interpretable, actionable, and user-friendly cybersecurity solutions. This study serves as a stepping stone for further research and development in creating LLM-driven security tools that integrate XAI in response to the evolving landscape of cybersecurity threats and user needs.
In our future work, we aim to advance in two simultaneous directions, which are crucial for meeting the real-time detection and response requirements. Our first approach involves refining our Machine Learning model and incorporating real-time detection capabilities by adapting existing frameworks, thereby enabling immediate and proactive responses to threats.
Concurrently, we focus on embedding actionable AI within the chatbot, enabling it to issue active commands directly to the Security Information and Event Management (SIEM) component, facilitating instantaneous and intelligent responses to security events.
## Acknowledgment
This work was funded by the European Commission grants IDUNN (grant no. 101021911) and the Academy of Finland 6Genesis Flagship program (grant no. 318927).
|
2309.13159 | Estimating a k-modal nonparametric mixed logit model with market-level
data | We propose a group-level agent-based mixed (GLAM) logit model that is
estimated using market-level choice share data. The model non-parametrically
represents taste heterogeneity through market-specific parameters by solving a
multiagent inverse utility maximization problem, addressing the limitations of
existing market-level choice models with parametric taste heterogeneity. A case
study of mode choice in New York State is conducted using synthetic population
data of 53.55 million trips made by 19.53 million residents in 2019. These
trips are aggregated based on population segments and census block group-level
origin-destination (OD) pairs, resulting in 120,740 markets/agents. We
benchmark in-sample and out-of-sample predictive performance of the GLAM logit
model against multinomial logit, nested logit, inverse product differentiation
logit, and random coefficient logit (RCL) models. The results show that GLAM
logit outperforms benchmark models, improving the overall in-sample predictive
accuracy from 78.7% to 96.71% and out-of-sample accuracy from 65.30% to 81.78%.
The price elasticities and diversion ratios retrieved from GLAM logit and
benchmark models exhibit similar substitution patterns among the six travel
modes. GLAM logit is scalable and computationally efficient, taking less than
one-tenth of the time taken to estimate the RCL model. The agent-specific
parameters in GLAM logit provide additional insights such as value-of-time
(VOT) across segments and regions, which has been further utilized to
demonstrate its application in analyzing NYS travelers' mode choice response to
the congestion pricing. The agent-specific parameters in GLAM logit facilitate
their seamless integration into supply-side optimization models for revenue
management and system design. | Xiyuan Ren, Joseph Y. J. Chow, Prateek Bansal | 2023-09-22T19:50:55Z | http://arxiv.org/abs/2309.13159v2 | # Nonparametric estimation of k-modal taste heterogeneity for group level agent-based mixed logit
###### Abstract
Estimating agent-specific taste heterogeneity with a large information and communication technology (ICT) dataset requires both model flexibility and computational efficiency. We propose a group-level agent-based mixed (GLAM) logit approach that is estimated with inverse optimization (IO) and group-level market share. The model is theoretically consistent with the RUM model framework, while the estimation method is a nonparametric approach that fits to market-level datasets, which overcomes the limitations of existing approaches. A case study of New York statewide travel mode choice is conducted with a synthetic population dataset provided by Replica Inc., which contains mode choices of 19.53 million residents on two typical weekdays, one in Fall 2019 and another in Fall 2021. Individual mode choices are grouped into market-level market shares per census block-group OD pair and four population segments, resulting in 120,740 group-level agents. We calibrate the GLAM logit model with the 2019 dataset and compare to several benchmark models: mixed logit (MXL), conditional mixed logit (CMXL), and individual parameter logit (IPL). The results show that empirical taste distribution estimated by GLAM logit can be either unimodal or multimodal, which is infeasible for MXL/CMXL and hard to fulfill in IPL. The GLAM logit model outperforms benchmark models on the 2021 dataset, improving the overall accuracy from 82.35% to 89.04% and improving the pseudo R-square from 0.4165 to 0.5788. Moreover, the value-of-time (VOT) and mode preferences retrieved from GLAM logit aligns with our empirical knowledge (e.g., VOT of NotLowIncome population in NYC is $28.05/hour; public transit and walking is preferred in NYC). The agent-specific taste parameters are essential for the policymaking of statewide transportation projects.
**Keywords:** random utility maximization, nonparametric estimation, ubiquitous dataset, machine learning, New York State, discrete choice model
## 1 Introduction
Choice modeling is a critical step of demand analysis and revenue management, in which the value that a customer places on each attribute of the products is reflected by a set of taste parameters. These parameters can be estimated by the random utility maximization (RUM) model class developed since the 1970s, which assumes that individuals make choices by maximizing the overall utility they can expect to gain (McFadden, 1977; Train, 2009). Understanding taste heterogeneity in the population allows firms or government institutions to identify population groups with similar tastes, predict demand in new scenarios, and improve customer satisfaction (Becker et al., 2018; Bowman and Ben-Akiva, 2001; Lemp and Kockelman, 2012; Sarrias, 2020).
The availability of large-scale information and communication technology (ICT) data makes it possible to consider the heterogeneity among a wide range of population segments living in different regions. For instance, by differentiating the travel preferences of students living in Manhattan or seniors living in Utica, NY, new mobility scenarios can be analyzed to help New York State (NYS) policymakers forecast ridership, estimate revenue, and assess consumer surplus changes for targeted groups (Chow et al., 2020; He et al., 2020). However, this requires the estimation of individual- or agent-level taste parameters from ubiquitous datasets, which is challenging for conventional RUM models and for the nonparametric logit models in recent studies (Jagabathula et al., 2020; Swait, 2022). On the one hand, mixed logit (MXL) with conditional taste density requires previous choice observations to estimate posterior individual parameters, which does not work if panel data is unavailable (Train, 2009). MXL assumes parametric distributions of taste parameters, imposing restrictions on ground-truth data that result in biased parameter estimates and inaccurate demand predictions (Farias et al., 2013). On the other hand, recent studies provide nonparametric estimation of individual-level parameters, such as individual parameter logit (Swait, 2022), but most of them depend on solving a large-scale optimization problem that is computationally demanding and makes it hard to keep parameters within a logical boundary. Given ubiquitous datasets with choice observations of millions of individuals, estimation algorithms are required to be both theoretically flexible and computationally efficient.
These challenges can be addressed if taste parameters can be specified deterministically with bottom-up logic, i.e., a heterogeneous set of parameters is estimated for each individual or agent within the RUM framework and specific constraints. Considering the large sample size of ubiquitous datasets, individual choice observations can be grouped into market-level data and represented by an agent (individuals in an agent are assumed to have similar tastes). We call this "group level agent-based mixed logit" (GLAM logit), which is an extension of agent-based mixed logit (AMXL) proposed by Ren and Chow (Ren and Chow, 2022).
GLAM logit is suitable for large-scale ICT data (smartphone data) or synthetic population data (He et al., 2020; Replica Inc., 2021) where attributes from almost the whole population can be obtained or monitored instead of just from a sample, and the model is used to predict the outcome of new scenarios for the same population. We assume such a setting, where data for a population is sufficient to estimate agent-level tastes, resulting in a RUM model with heterogeneous and deterministic parameters. As for the proposed methodology, we formulate an inverse optimization (IO) problem for each agent in each iteration, in which a unique set of parameters is solved with constraints regarding inverted market share (Berry et al., 1995) and logical parameter signs. This insight keeps the theoretical consistency with RUM framework and allows modelers to capture heterogeneous tastes with great flexibility. Agent-specific parameters are then classified into several modes, or "taste clusters", using a k-means algorithm (Hartigan and Wong, 1979) at the end of each iteration. A Method of Successive Average (MSA) (Sheffi and Powell, 1982) is applied to smooth the iterative convergence and obtain a stable fixed point.
A case study of New York statewide travel mode choice is conducted with a synthetic population dataset provided by Replica Inc., which contains trip details of 19.57 million residents on two typical weekdays, one in Fall 2019 and another in Fall 2021. According to Replica's data quality report, demographic attributes of the synthetic population are 95% accurate (compared with census data), and the commute mode share by census tract is 90% accurate (compared with Census Transportation Planning Products (CTPP) data). These trips are grouped into 120,740 agents based on population segments and census block-group OD pairs. We calibrate the GLAM logit model with the 2019 dataset and benchmark its performance against a mixed logit (MXL), conditional mixed logit (CMXL), and
individual parameter logit (IPL) models. To facilitate future research, we uploaded the algorithms and examples to a GitHub repository: [https://github.com/BUILTNYU/GLAM-Logit](https://github.com/BUILTNYU/GLAM-Logit).
The remainder of the paper is organized as follows. The literature review section reviews studies on estimating individual-level taste heterogeneities. The methods section describes the general framework of GLAM logit, including the general framework, inverse optimization algorithm, and an illustrative example. The case study section presents a study case of New York statewide travel mode choice that compares the performance of GLAM logit and benchmark models. The final section concludes the findings and points out future work.
## 2 Literature review
Though the mixed logit framework allows the distribution of taste parameters to be arbitrary, it is rarely estimated as such in practice. For purposes of tractability and computational efficiency, the mixing distribution is restricted to parametric distributions (e.g., normal distribution, uniform distribution, etc.). A few studies have applied conditional distributions, genetic algorithms, and gradient-descent approaches to capture taste heterogeneity in a more flexible manner. Given a ubiquitous data set, however, modeling with individual-level data is sometimes unstable and requires huge computing resources. In that case, estimation approaches using market-level data are more feasible. This section reviews the related studies.
### Mixed logit (MXL) with conditional taste density
The RUM model class assumes that individuals or agents make decisions by maximizing the overall utility they can expect to gain (Becker, 1965; Bowman and Ben-Akiva, 2001). McFadden and Train (2000) formulated a general framework that can approximate the discrete choice probabilities of any RUM model. They called this mixed multinomial logit (MMNL) or mixed logit (MXL), which is a multinomial logit (MNL) model with random parameters \(\beta\) drawn from a probability distribution function \(g(\beta|\,\theta)\), as shown in Equations (1)-(2).
\[U_{jt}=\beta^{T}X_{jt}+\varepsilon_{jt},\qquad\forall j\in\{1,...,J\},t\in\{1,...,T\} \tag{1}\]
\[P(j,t|\,\theta)=\int\frac{e^{\beta^{T}X_{jt}}}{\sum_{jt}e^{\beta^{T}X_{jt}}}g( \beta|\,\theta)d\beta\,\quad\forall j\in\{1,...,J\},t\in\{1,...,T\} \tag{2}\]
where \(U_{jt}\) is the overall utility of choosing alternative \(j\) in situation \(t\), which consists of a systematic utility \(\beta^{T}X_{jt}\) and a random utility \(\varepsilon_{jt}\). \(X_{jt}\) denotes a set of observed variables of alternative \(j\) in situation \(t\). \(\beta\) is a vector of taste parameters assumed to vary randomly across individuals with density \(g(\beta|\,\theta)\), where \(\theta\) represents the parameters of this distribution. For instance, if \(\beta\) is normally distributed in the population, \(\theta\) would contain the mean and covariance. \(\varepsilon_{jt}\) is the random utility usually assumed to be independent and identically distributed (i.i.d.) following a Gumbel distribution. Accordingly, the probability of choosing alternative \(j\) in situation \(t\) given \(\theta\), \(P(j,t|\theta)\), can be defined as Equation (2).
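Since the integral in Equation (2) rarely has a closed form, it is typically approximated by simulation: draw \(\beta\) from \(g(\beta|\theta)\), compute the logit probability for each draw, and average. A minimal NumPy sketch, assuming independently normally distributed parameters and toy attribute values, is given below.

```python
import numpy as np

def simulated_mixed_logit_probs(X, mu, sigma, n_draws=1000, seed=0):
    """Approximate Eq. (2): average MNL probabilities over draws of beta ~ N(mu, diag(sigma^2)).

    X: (J, K) attribute matrix for the J alternatives in one choice situation.
    """
    rng = np.random.default_rng(seed)
    betas = mu + sigma * rng.standard_normal((n_draws, len(mu)))   # (R, K) draws of beta
    utilities = betas @ X.T                                        # (R, J) systematic utilities
    expu = np.exp(utilities - utilities.max(axis=1, keepdims=True))
    probs = expu / expu.sum(axis=1, keepdims=True)                 # logit probability per draw
    return probs.mean(axis=0)                                      # Eq. (2) approximation

# Example: 3 alternatives, 2 attributes (travel time, cost), heterogeneous tastes.
X = np.array([[10.0, 2.5], [20.0, 1.0], [30.0, 0.5]])
print(simulated_mixed_logit_probs(X, mu=np.array([-0.10, -0.50]), sigma=np.array([0.05, 0.20])))
```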
To determine where each individual's taste parameter lies in the general distribution, Revelt & Train (Revelt & Train, 2000) proposed a conditional approach, in which the taste density of each sampled individual is derived conditional on the choice observations for that individual and the population distribution estimated by MXL. Let \(\mathcal{y}_{nt}\) denote the observed sequence of choice made by individual \(n\) in situation \(t\), and let \(h(\beta_{nt}|\mathcal{y}_{nt},\theta)\) denote individual \(n\)'s taste density conditional on the observed choice in situation \(t\), \(\mathcal{y}_{nt}\), and population distribution parameters \(\theta\). By Bayes' rule, individual-specific taste density can be defined as Equation (3).
\[h(\beta_{nt}|\mathcal{y}_{nt},\theta)=\frac{P(\mathcal{y}_{nt}|\,\beta_{nt}).g( \beta|\,\theta)}{P(\mathcal{y}_{nt}|\theta)},\qquad\forall n\in\{1,...,N\},t \in\{1,...,T\} \tag{3}\]
where \(P(\mathcal{y}_{nt}|\theta)\) denotes the probability of choice sequence \(\mathcal{y}_{nt}\) given \(\theta\). \(P(\mathcal{y}_{nt}|\beta_{nt})\) denotes the probability of choice sequence \(\mathcal{y}_{nt}\) given individual-specific taste parameters. \(g(\beta|\,\theta)\) and \(h(\beta_{nt}|\mathcal{y}_{nt},\theta)\) are treated as prior and posterior taste density, respectively. To estimate MXL with conditional taste density, a simulated approximation to this expectation is obtained by taking \(R\) draws of the random parameters.
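In that simulated approximation, each draw is weighted by the probability it assigns to the individual's observed choice sequence, so the posterior mean of the individual's taste is a likelihood-weighted average of the draws. A small sketch of this calculation, reusing the normal-draw assumption of the previous snippet, is shown below.

```python
import numpy as np

def posterior_taste_mean(X_seq, chosen_seq, mu, sigma, n_draws=2000, seed=0):
    """Simulated posterior mean of beta_n given a sequence of observed choices (cf. Eq. 3).

    X_seq: list of (J, K) attribute matrices, one per choice situation.
    chosen_seq: list of chosen alternative indices y_nt.
    """
    rng = np.random.default_rng(seed)
    betas = mu + sigma * rng.standard_normal((n_draws, len(mu)))
    weights = np.ones(n_draws)
    for X, chosen in zip(X_seq, chosen_seq):
        u = betas @ X.T
        p = np.exp(u - u.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        weights *= p[:, chosen]            # P(y_nt | beta_r) accumulated over situations
    weights /= weights.sum()
    return weights @ betas                 # posterior mean of h(beta | y, theta)
```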
However, there are at least three limitations in this approach. First, the conditional taste density requires previous choice observations of the same individual. In Revelt & Train (2000)'s study, the posterior taste distribution differs from the population distribution after fitting the MXL with five to ten observed choices per individual. This does not work if panel data is unavailable. Second, this approach assumes parametric distribution of individual tastes. Many studies have stated that restrictions on ground-truth distributions might result in biased parameter estimates and inaccurate demand predictions, especially when the sample size is large (Fox & Gandhi, 2016; Gautier & Kitamura, 2013; Train, 2008). Third, probabilistic parameters for each individual fit poorly within large-scale optimization models. The demand response from a mixed logit model is possible but would require simulation to predict the response of individuals (Pacheco et al., 2021; Ren & Chow, 2022).
### Nonparametric logit models
Nonparametric logit models do not restrict the joint taste distribution and allow it to be described by any member of the RUM model class (Jagabathula & Rusmevichientong, 2017). Estimation methods for these models are typically not maximum loglikelihood estimation (MLE). Instead, sophisticated mathematical techniques are used to search for individual-level parameters that have the best fit to the observed data (Fox & Gandhi, 2016; Jagabathula et al., 2020; Swait, 2022). Therefore, the population taste distribution \(g(\beta\mid\theta)\) is replaced by a vector set, \(\theta=(\theta^{1},...,\theta^{-})\), referring to individual- or agent-specific parameters. These hybrid RUM/machine learning approaches generally work best when the volume of data is large where behavioral elasticity is desired more than model transferability (as the parameters for a new individual in the population would be unknown), which has increasingly been the case with the availability of large-scale ICT data. We briefly introduce Swait's (Swait, 2022) work that provides a state-of-the-art approach. For more applications, we refer readers to (Berry & Haile, 2014; Fox et al., 2011; Jagabathula et al., 2020; Van Ryzin & Vulcano, 2015).
The nonparametric approach proposed by Swait (Swait, 2022) refers to the individual parameter logit (IPL). Individual taste parameters are estimated in a distribution-free manner under the objective of maximizing sample posterior loglikelihood. Estimation of the IPL model contains two stages. In the first stage, the generation index \(g\) and candidate index \(m\) are introduced. Let \(\Pi_{g}\) be the candidate pool for generation \(g\), \(\Pi_{g}=\{\pi_{1g},...,\pi_{mg},...,\pi_{Mg}\}\), where \(\pi_{mg}\) is a point in the k-dimensional parameter space referring to candidate \(m\) in generation \(g\). For a given pool member in one generation, the likelihood of individual \(n\)'s observed choice over \(O\) choice observations is defined in Equation (4).
\[l_{n|mg}=\prod_{o=1}^{O}\prod_{j=1}^{J}\left[P_{nj}\big{(}\pi_{mg}\big{|}x_{njo }\big{)}\right]^{\delta_{njo}} \tag{4}\]
where \(P_{nj}(.)\) is the choice probability model (typically MNL) evaluated at pool member \(\pi_{mg}\). \(\delta_{njo}\) are choice indicators, equal to 1 for the single chosen alternative and 0 otherwise. The posterior preference \(\hat{\theta}_{ng}\) for individual \(n\) are calculated by applying Bayes Theorem over the \(M\) pool members in \(\Pi_{g}\). Accordingly, the posterior loglikelihood for the population can be calculated in Equations. (5)-(7). The updating of one generation to another occurs according to an update candidate pool (UCP) logic that consists of sorting, pairing, recombining, and cloning steps to ensure the general posterior loglikelihood improves from generation to generation. We refer interested readers to Swait (2022)'s work.
\[\hat{\theta}_{ng}=\frac{\sum_{m}\pi_{mg}\cdot l_{n|mg}}{\sum_{m}l_{ n|mg}},\qquad\forall n\in\{1,...,N\} \tag{5}\] \[l_{n}\big{(}\hat{\theta}_{ng}\big{)}=\prod_{o=1}^{O}\prod_{j=1}^{ J}\left[P_{nj}\big{(}\hat{\theta}_{ng}\big{|}x_{njo}\big{)}\right]^{\delta_{njo}}, \qquad\forall n\in\{1,...,N\},g\in\{1,...,G\}\] (6) \[L_{Ng}=\sum_{n=1}^{N}ln\left[l_{n}\big{(}\hat{\theta}_{ng}\big{)} \right],\qquad\forall g\in\{1,...,G\} \tag{7}\]
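Equations (4)-(5) reduce to a likelihood-weighted average over the candidate pool. A compact NumPy sketch of this first-stage update, with a generic MNL probability helper and hypothetical variable names, is shown below.

```python
import numpy as np

def mnl_prob(beta, X):
    """MNL choice probabilities for one observation with attribute matrix X (J, K)."""
    u = X @ beta
    e = np.exp(u - u.max())
    return e / e.sum()

def ipl_posterior_preference(pool, X_obs, chosen):
    """Eqs. (4)-(5): posterior taste for one individual given a candidate pool.

    pool: (M, K) array of candidate parameter vectors pi_mg.
    X_obs: list of (J, K) attribute matrices; chosen: list of chosen indices.
    """
    likelihoods = np.array([
        np.prod([mnl_prob(pi, X)[j] for X, j in zip(X_obs, chosen)])   # l_{n|mg}, Eq. (4)
        for pi in pool
    ])
    return likelihoods @ pool / likelihoods.sum()                       # theta_hat_ng, Eq. (5)
```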
In the second stage, a gradient descent approach is applied to determine the final parameters per individual. The search direction is determined at the general level and the step size is assumed to be the same for all individuals. In detail, the updating of the posterior individual parameters proceeds by simply applying the same step size to the individual preference of all individuals, as shown in Equations (8) - (9).
\[\theta_{n,s+1}=\theta_{ns}+\varphi\cdot\Delta_{s},\qquad\forall n\in\{1,...,N\},s\in\{1,...,S\} \tag{8}\] \[\Delta_{s}=\sum_{n=1}^{N}\nabla_{\theta}\ln[l_{n}(\theta_{ns})]\,,\qquad\forall\ s\in\{1,...,S\} \tag{9}\]
where \(s\) is an iteration of the algorithm, \(\Delta_{s}\) is the multidimensional direction of movement for all \(n\) at iteration \(s\), and \(\varphi\) is the scalar step-size to be taken. Equation (9) defines \(\Delta_{s}\) for the gradient descent search method.
Despite the innovative ideas, limitations still exist in current nonparametric logit models. First, it remains unclear how to define an effective algorithm to find taste candidates that shape the individual parameter distribution. The sorting, pairing, recombining, and cloning steps in Swait (2022)'s evolutionary algorithm introduce many hyperparameters. Second, it is hard for these kinds of approaches to estimate parameters within a logical boundary. For instance, if we want some parameters logically to be negative for all individuals (e.g., preference for travel time), we have to filter out illogical taste candidates in searching algorithms (Swait, 2022), or to assume a reversed-sign lognormal distribution that makes the estimation semiparametric (Berry & Haile, 2014; Train, 2009). Third, most nonparametric estimation approaches are designed for individual choice observations, which are computationally demanding when the sample size is large. Given ubiquitous datasets with choice observations of millions of individuals (e.g., statewide mode choice observations enabled by ICT data), it is almost impossible for individual parameter logit models to converge.
To this end, individual observations should be grouped back into market-level data, and the large optimization problem should be split into smaller ones that can be solved in parallel. Given that modeling with ubiquitous datasets has increasingly been the case in the digital era, an innovative approach is necessary to estimate group or agent-level taste parameters from large-scale market-level data.
### Inverse optimization (IO) for choice modeling
Inverse optimization (IO) has been widely applied to traffic assignment, route choice, and activity scheduling problems (Chow and Recker, 2012; Hong et al., 2017; Chow, 2018; Xu et al., 2018). Given an optimization problem, an IO problem can be formulated to impute missing parameters from observed data that represents sub-optimal solutions of the original optimization problem (Ahuja and Orlin, 2001; Burton and Toint, 1992). A typical IO problem is defined as follows: for a given prior \(\theta_{0}\) of missing parameters and observed decision variables \(\mathbf{x}^{*}\), determine a posterior parameter set \(\theta\) such that \(\mathbf{x}^{*}\) is optimal while minimizing its \(L_{1}\) norm from the prior, as shown in Equation (10).
\[\min_{\theta}\ \left\|\theta_{0}-\theta\right\|_{1}\quad\text{s.t.}\quad x^{*}=\arg\min_{x\in X}\left\{\theta^{T}x\right\} \tag{10}\]
Applying this idea within the RUM framework, a posterior parameter set can be recovered for each individual by requiring the observed choice \(j^{*}\) to yield the highest utility, as shown in Equations (11)-(12).
\[\min_{\theta_{n}}\sum_{n\in\{1,...,N\}}(\theta_{0}-\theta_{n})^{2} \tag{11}\] subject to: \[\vec{\theta_{n}}X_{nj^{*}}+\varepsilon_{nj^{*}}\geq\vec{\theta_{n}}X_{nj}+\varepsilon_{nj}+b,\qquad\forall n\in\{1,...,N\},j\in\{1,...,J\},j\neq j^{*} \tag{12}\]
where \(\theta_{0}\) is a vector of prior taste parameters; \(\theta_{n}\) is a vector of posterior taste parameters for individual \(n\); \(X_{nj}\) is a vector of alternative attributes; \(\vec{\theta_{n}}X_{nj}\) is the systematic utility derived from individual \(n\) choosing alternative \(j\); \(\varepsilon_{nj}\) is the random utility that is Gumbel distributed; \(b\) is a safe boundary in case \(\varepsilon_{nj^{*}}\) is much larger than \(\varepsilon_{nj}\) in a single draw (making the comparison of systematic utilities meaningless).
We propose a hybrid machine learning/econometric approach to estimate agent-specific parameters from a ubiquitous data set. The approach is based on the IO method of estimating a RUM model and is an extension of the AMXL model (Ren and Chow, 2022) to deal with multiple choice observations of individual groups. Considering the large sample size of ubiquitous datasets, a group of individual choice observations are aggregated into market-level data and represented by an agent, in which individual tastes are assumed to be the same. The utility function is still linear, in order to ensure its compatibility with large-scale optimization models.
## 3 Proposed model
We propose a random utility model with agent-specific coefficients estimated from market-level data resulting in a heterogeneous distribution of taste parameters. We call this group level agent-based mixed (GLAM) logit. GLAM logit is designed for ubiquitous datasets where attributes from almost the whole population can be obtained or monitored instead of just from a sample, and the model is used to predict the outcome of new scenarios for the same population. A typical use case of GLAM logit is transportation behavior modeling with large-scale ICT data. Notations used in this section are shown in Table 1.
### Architecture of GLAM logit model
Our study formalizes the multiagent inverse utility maximization (MIUM) problem to estimate agent-level coefficients. Consider that within each agent \(i\in I\), there is a group \(N\) of individuals behaviorally seeking to maximize their overall utilities. In line with random utility theory, the total utility derived from an individual \(n\in N\) choosing mode \(j\in J\) is defined in Equation (13).
\[U_{nj}=V_{nj}+\varepsilon_{nj}=\theta_{i}X_{nj}+\varepsilon_{nj},\qquad\forall n \in N,i\in I,j\in J \tag{13}\]
\begin{table}
\begin{tabular}{l l}
\hline
\(N\) & The set of all individuals \\
\(I\) & The set of all agents \\
\(J\) & The set of all alternatives \\
\(U_{nj}\) & Total utility derived from individual \(n\) choosing alternative \(j\) \\
\(V_{nj}\) & Systematic utility derived from individual \(n\) choosing alternative \(j\) \\
\(\varepsilon_{nj}\) & Random utility derived from individual \(n\) choosing alternative \(j\) \\
\(S_{ij}\) & Market share of alternative \(j\) in agent \(i\) \\
\(X_{ij}\) & Market-level attributes of alternative \(j\) to agent \(i\), \(X_{ij}=\frac{1}{|N|}\sum_{n\in N}X_{nj}\) \\
\(\theta_{i}\) & A vector of taste parameters for agent \(i\) \\
\(\tilde{\theta}_{i}\) & A shorter vector of taste parameters for agent \(i\) requiring specific boundaries \\
\(\theta_{0}^{k}\) & The fixed-point prior of cluster \(k\) identified using the k-means algorithm \\
\(I^{k}\) & The agent set of the \(k^{th}\) cluster \\
\(tol\) & A hyperparameter to balance goodness-of-fit and feasible solutions \\
\hline
\end{tabular}
\end{table}
Table 1: Notations used in the proposed model
where \(U_{nj}\) is the total utility, which is composed of a systematic utility \(V_{nj}\) and a Gumbel-distributed random utility \(\varepsilon_{nj}\). \(X_{nj}\) is a set of observed variables related to individual \(n\) choosing alternative \(j\). \(\theta_{i}\) is a vector of taste parameters of individuals in agent \(i\). By doing this, we capture individual heterogeneities within agent \(i\) with \(\varepsilon_{nj}\) and we assume individuals in the same agent share the same set of parameters \(\theta_{i}\). Here, we introduce the market-level nonparametric approach in the spirit of Berry, Levinsohn, and Pakes (BLP) (1995). In such a setting, the relationship between the utility function and market share can be defined as Equations (14) - (15).
\[P_{ij}=\frac{\exp\bigl{(}V_{ij}\bigr{)}}{\sum_{q\in J}\exp\bigl{(}V_{iq}\bigr{)}},\qquad\forall i\in I,j\in J \tag{14}\] \[\ln\left(\frac{P_{ij}}{P_{ij^{*}}}\right)=\ln\bigl{(}P_{ij}\bigr{)}-\ln\bigl{(}P_{ij^{*}}\bigr{)}=\,\theta_{i}\bigl{(}X_{ij}-X_{ij^{*}}\bigr{)},\qquad\forall i\in I,j,j^{*}\in J,j\neq j^{*} \tag{15}\]
where \(P_{ij}\) is the predicted market share of alternative \(j\) in agent \(i\); \(\ln\left(\frac{P_{ij}}{P_{ij^{*}}}\right)\) is called the inverted market share, which can be expressed in terms of observed market-level attributes \(X_{ij}=\frac{1}{|N|}\sum_{n\in N}X_{nj}\) and agent-specific parameters \(\theta_{i}\).
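Concretely, with observed shares the right-hand side of Equation (15) is just a difference of log shares; for example, with toy values:

```python
import numpy as np

# Observed market shares of three alternatives in one agent (e.g., drive, transit, walk).
S_i = np.array([0.60, 0.30, 0.10])

# Inverted shares relative to the reference alternative j* = 0 (left-hand side of Eq. 15).
inverted = np.log(S_i) - np.log(S_i[0])
print(inverted)   # [ 0.   -0.693... -1.791...] = theta_i (X_ij - X_ij*) in the model
```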
The agent-level parameter set \(\theta_{i}\) can be estimated by solving a multiagent inverse utility maximization (MIUM) problem under \(L_{2}\)-norm. Since the distribution of taste parameters can be multimodal, we use \(|\mathrm{K}|\) fixed-point priors referring to \(|\mathrm{K}|\) peaks in the multimodal distribution (Friston et al., 2015). \(\delta_{ik}\) are introduced as binary allocation variables with \(\delta_{ik}=1\) indicating that parameters of agent \(i\) belong to peak \(k\). Based on \(\delta_{ik}\), the agents can then be categorized into \(|\mathrm{K}|\) taste clusters. Equations (16) - (23) illustrate the model formulation.
\[\min_{\delta_{ik},\theta_{i}}\sum_{k\in K}\sum_{i\in I}\delta_{ik}(\theta_{0}^{k}-\theta_{i})^{2} \tag{16}\] subject to: \[\theta_{i}\bigl{(}X_{ij}-X_{ij^{*}}\bigr{)}\geq\ln\bigl{(}S_{ij}\bigr{)}-\ln\bigl{(}S_{ij^{*}}\bigr{)}-tol,\qquad\forall i\in I,j,j^{*}\in J,j\neq j^{*} \tag{17}\] \[\theta_{i}\bigl{(}X_{ij}-X_{ij^{*}}\bigr{)}\leq\ln\bigl{(}S_{ij}\bigr{)}-\ln\bigl{(}S_{ij^{*}}\bigr{)}+tol,\qquad\forall i\in I,j,j^{*}\in J,j\neq j^{*} \tag{18}\] \[\tilde{\theta}_{i}\geq lb,\qquad\forall i\in I \tag{19}\] \[\tilde{\theta}_{i}\leq ub,\qquad\forall i\in I \tag{20}\] \[\theta_{0}^{k}=\frac{\sum_{i\in I}\theta_{i}\delta_{ik}}{\sum_{i\in I}\delta_{ik}},\qquad\forall k\in K \tag{21}\] \[\sum_{k\in K}\delta_{ik}=1,\qquad\forall i\in I \tag{22}\] \[\delta_{ik}\in\{0,1\},\qquad\forall i\in I,k\in K \tag{23}\]
where \(\theta_{0}^{k}\) is the \(k^{th}\) fixed-point prior corresponding to a peak of the multimodal distribution; \(K\) is the set of peaks or taste clusters. If we set \(|K|=1\), the model collapses into the original form used in (Ren & Chow, 2022). \(S_{ij}\) is the observed market share of alternative \(j\) in agent \(i\); and \(tol\) is a manually set tolerance that balances goodness-of-fit and solution feasibility (a larger value of \(tol\) leads to a lower goodness-of-fit but a higher proportion of feasible solutions). A recommended range of \(tol\) is [0.1,1.5], which leads to 60%-95% consistency between predicted and observed market share in our experiments. In our study, we set \(tol=0.5\). In Equations (19) - (20), \(lb\) and \(ub\) are the lower and upper boundaries determining the parameter space, and \(\tilde{\theta}_{i}\) is a shorter vector of parameters requiring specific
boundaries. Equation (21) ensures that the estimated agent parameters are consistent with one of the fixed-point priors.
### Estimation framework for GLAM logit
Solving the model in Equations (16) - (23) directly would be computationally costly, as it would lead to a highly sparse diagonal matrix and nonlinear constraints. Instead, we propose a decomposition method that initializes \(\theta_{0}^{k}\) and \(\delta_{ik}\) and updates them iteration by iteration. In each iteration, we solve Equations (16) - (20) \(|I|\) times with \(\theta_{i}\) as the decision variables and \(\delta_{ik}\) fixed, which are much smaller QP problems. At the end of each iteration, we apply a k-means algorithm to \(\theta_{i}\) to identify \(|K|\) taste clusters and update \(\delta_{ik}\) using the label of each agent (if agent \(i\) belongs to cluster \(k\), then \(\delta_{ik}\)=1). Fixed-point priors \(\theta_{0}^{k}\) are then updated using Equation (21). We set a stopping criterion (percentage change of \(\theta_{0}^{k}\) smaller than 0.5%) to check if the algorithm has converged. If so, we output the estimated agent-specific parameters \(\theta_{i}\). Otherwise, we use the updated \(\theta_{0}^{k}\) and \(\delta_{ik}\) for the next iteration. The iterations continue until all priors \(\theta_{0}^{k}\) stabilize (see (Xu et al., 2018) for an example of this kind of decomposition for the \(|K|=1\) case). The subproblem with fixed \(\delta_{ik}\) can be solved using any optimization software or package that can handle QPs, such as Gurobi, CVXPY, etc.
The iterative updating method used in our study is the Method of Successive Averages (MSA). The MSA ensures that the decomposition algorithm converges to a fixed point (Powell & Sheffi, 1982). The whole estimation approach is summarized in Algorithm 1. The reason we use MSA instead of more advanced algorithms like (Liu et al., 2007) is that we found our algorithm with MSA converged after 10-20 iterations. Though MSA is generally slow, its convergence rate is acceptable at least in this study. The computational time is proportional to the total number of iterations and the time spent at each iteration. In each iteration, \(|I|\) QP inverse optimization problems are estimated in parallel for all agents. For each QP problem, the computational time is proportional to the number of constraints, which is determined by the size of the choice set \(|J|\). Hence, the computational time of our proposed algorithm increases proportionally with \(|I|\times|J|\).
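For illustration, a minimal sketch of one inverse-optimization subproblem (Equations (16)-(20) with \(\delta_{ik}\) fixed) is given below using cvxpy; the toy data, the bounds, and the assumption that all parameters are box-constrained are ours and do not reproduce the exact specification used in the case study.

```python
import numpy as np
import cvxpy as cp

def solve_agent_qp(X_i, S_i, theta0_k, tol=0.5, lb=-5.0, ub=0.0):
    """Inverse-optimization subproblem for one agent: find theta_i closest to
    the fixed-point prior theta0_k while reproducing the observed shares S_i
    within +/- tol (Eqs. (16)-(20) with delta_ik fixed)."""
    n_alt, n_par = X_i.shape
    theta = cp.Variable(n_par)
    cons = []
    for j in range(n_alt):
        for js in range(n_alt):
            if j == js:
                continue
            gap = np.log(S_i[j]) - np.log(S_i[js])
            diff = X_i[j] - X_i[js]
            cons += [diff @ theta >= gap - tol, diff @ theta <= gap + tol]
    # for simplicity every parameter is box-constrained here; in the paper only
    # the sub-vector tilde(theta)_i carries bounds
    cons += [theta >= lb, theta <= ub]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(theta - theta0_k)), cons)
    prob.solve()
    return theta.value

# toy data: two alternatives, two attributes, observed shares 70%/30%
X_i = np.array([[10.0, 8.0], [30.0, 3.0]])
S_i = np.array([0.7, 0.3])
print(solve_agent_qp(X_i, S_i, theta0_k=np.zeros(2)))
```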
```
1. Given observed variables and market shares \(X_{ij}\), \(S_{ij}\), initialize with \(s=0\), \(tol=0.5\), fixed-point priors \(\theta_{0}^{k(s)}=[0,...,0],\forall k\in K\), and randomly assign each agent \(i\) to one of the \(|K|\) peaks.
2. For each \(i\in I\) and \(k\in K\), solve a QP if \(\delta_{ik}^{(s)}\)=1 to get \(\theta_{i}^{(s)}\): \(\underset{\theta_{i}^{(s)}}{min}\;\delta_{ik}^{(s)}(\theta_{0}^{k(s)}-\;\theta_{i}^{(s)})^{2}\;\) subject to the constraints in Equations (17) - (20)
3. Apply the k-means algorithm to \(\theta_{i}^{(s)}\) to identify \(|K|\) taste clusters, and update to get \(\delta_{ik}^{(s+1)}\) using the cluster labels.
4. Set the cluster average to \(y^{k(s)}\)=\(\sum_{i\in I}\;\theta_{i}^{(s)}\;\delta_{ik}^{(s+1)}\;/\sum_{i\in I}\;\delta_{ik}^{(s+1)}\;,\;\forall k\in K\) as in Equation (21).
5. Use MSA to update and get \(\theta_{0}^{k(s+1)}\): \(\theta_{0}^{k(s+1)}=\dfrac{s}{s+1}\;\theta_{0}^{k(s)}+\dfrac{1}{s+1}y^{k(s)}\)
6. If the stopping criterion for \(\theta_{0}^{k}\) is reached, stop and output \(\theta_{0}^{k(s)}\), \(\theta_{i}^{(s)}\), \(\delta_{ik}^{(s+1)}\); else, set \(s=s+1\) and go back to Step 2
```
**Algorithm 1** Parameter estimation in GLAM logit
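A compact sketch of the outer loop of Algorithm 1 is shown below; it assumes the `solve_agent_qp` helper from the previous sketch and scikit-learn's KMeans, and is meant only to illustrate the control flow (per-agent QPs, re-clustering, MSA averaging, stopping test), not the production code used for the case study.

```python
import numpy as np
from sklearn.cluster import KMeans
# assumes solve_agent_qp(X_i, S_i, theta0_k, tol) from the previous sketch

def glam_estimate(X, S, n_clusters=2, tol=0.5, max_iter=50, stop_tol=0.005):
    """X: (n_agents, n_alt, n_par) attributes; S: (n_agents, n_alt) observed shares."""
    n_agents, _, n_par = X.shape
    theta0 = np.zeros((n_clusters, n_par))                  # fixed-point priors
    labels = np.random.randint(n_clusters, size=n_agents)   # random initial allocation
    theta = np.zeros((n_agents, n_par))
    for s in range(max_iter):
        # Step 2: one small QP per agent, against the prior of its current cluster
        for i in range(n_agents):
            theta[i] = solve_agent_qp(X[i], S[i], theta0[labels[i]], tol=tol)
        # Step 3: re-cluster the agent-level parameters
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(theta)
        # Step 4: cluster means y^k, Eq. (21)
        y = np.array([theta[labels == k].mean(axis=0) if np.any(labels == k)
                      else theta0[k] for k in range(n_clusters)])
        # Step 5: Method of Successive Averages update of the priors
        theta0_new = (s * theta0 + y) / (s + 1)
        # Step 6: stop when the priors change by less than ~0.5% (relative)
        if np.max(np.abs(theta0_new - theta0)) <= stop_tol * (np.max(np.abs(theta0)) + 1e-12):
            theta0 = theta0_new
            break
        theta0 = theta0_new
    return theta, theta0, labels
```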
Figure 1 illustrates the estimation logics of conditional MXL (Train, 2009), individual parameter logit (IPL) (Swait, 2022), and the GLAM logit proposed in our study. The significance of GLAM logit is as follows. First, taste heterogeneity is captured through a bottom-up logic, i.e., a heterogeneous set of parameters is estimated for each agent instead of assuming a parametric distribution or searching over taste candidates. This allows GLAM logit to capture taste heterogeneity with great flexibility and makes it capable of identifying taste clusters. Second, our approach splits a large-scale optimization problem into much smaller IO problems that can be solved in parallel. By doing this, we avoid generating a highly sparse diagonal matrix that might use up CPU memory when the sample size is large. Also, it is feasible to set logical boundaries for some taste parameters (e.g., preferences for travel time, cost, etc.) by adding
constraints to the IO problems. Third, GLAM logit estimates agent-specific parameters from market-level data, which is random-utility-consistent while computationally tractable and efficient given ubiquitous datasets. Since each agent's representative utility function is fully specified, GLAM logit can be directly integrated into system design optimization models as demand estimates instead of dealing with simulation-based approaches needed for MXL and some nonparametric logit models.
### Illustrative example
We built a simple example with 8 agents to illustrate how the GLAM logit model and its estimation algorithm work. In this example, each agent refers to a population segment selecting transport modes for their trips. Only two modes, taxi and transit, are considered for simplicity. Each row of the sample data contains the ID of the agent, travel time and cost of taxi, travel time and cost of transit, and mode share of the two modes. The sample data containing 8 agents is shown in Table 2. The derived utilities of the two modes are defined in Eq. (24)-(25).
\[V_{taxi,i}=\theta_{time,i}taxi\_time_{i}+\theta_{cost,i}taxi\_cost_{i}\,,\qquad \forall i\in I \tag{24}\]
\[V_{transit,i}=\theta_{time,i}transit\_time_{i}+\theta_{cost,i}transit\_cost_{i}+ \theta_{c,transit,i}\,,\forall i\in I \tag{25}\]
where \(V_{taxi,i}\) and \(V_{transit,i}\) are the utilities derived from choosing taxi and transit. \(\theta_{time,i}\) and \(\theta_{cost,i}\) are the taste parameters of travel time and cost for agent \(i\in I\). \(\theta_{c\_transit,i}\) is the transit mode constant for agent \(i\). Note that we added two "fake" agents (agents 7 and 8) to the dataset. The mode shares of these two agents are unreasonable since the mode with a longer travel time and a higher cost has a higher market share.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Agent ID** & **Taxi time** & **Taxi cost** & **Transit time** & **Transit cost** & **Taxi share** & **Transit share** \\ \hline
1 & 10 min & \$ 10 & 30 min & \$ 3 & 80\% & 20\% \\
2 & 20 min & \$ 15 & 40 min & \$ 3 & 70\% & 30\% \\
3 & 40 min & \$ 25 & 60 min & \$ 3 & 60\% & 40\% \\
4 & 10 min & \$ 10 & 30 min & \$ 3 & 20\% & 80\% \\
5 & 20 min & \$ 15 & 40 min & \$ 3 & 30\% & 70\% \\
6 & 40 min & \$ 25 & 60 min & \$ 3 & 40\% & 60\% \\
7 & 10 min & \$ 3 & 30 min & \$ 10 & 10\% & 90\% \\
8 & 60 min & \$ 25 & 10 min & \$ 3 & 90\% & 10\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Dataset (wide form) for the illustrative sample
Figure 1: Estimation logics of conditional MXL, individual parameter logit, and GLAM logit
We ran Algorithm 1 with the number of peaks \(|K|=3\). The estimated parameters are shown in Table 3. The estimated market shares E_Taxi (%) and E_Transit (%) are quite close to the input data. Moreover, the results reflect diverse tastes at the agent level through the three clusters: (1) agents 1-3 have negative \(\theta_{time,i}\) and \(\theta_{cost,i}\) close to zero, indicating a preference for shorter travel time; (2) agents 4-6 have negative \(\theta_{cost,i}\) and \(\theta_{time,i}\) close to zero, indicating a preference for lower travel cost; (3) agents 7 & 8 have positive \(\theta_{cost,i}\) and \(\theta_{time,i}\), indicating an "irregular" preference for longer travel time and higher travel cost. In ubiquitous datasets, such "irregular" preferences are often related to errors in data collection. In that case, GLAM logit can be used to check the data quality to some extent. Another way to tease out problematic agents is to add constraints forcing \(\theta_{time,i}\) and \(\theta_{cost,i}\) to be negative (using Eq. (19)-(20)), which makes the IO problems of agents 7 & 8 infeasible so that they form a separate cluster.
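As a quick sanity check of Table 3, plugging the estimated parameters of agent 1 back into Eq. (24)-(25) reproduces its predicted shares; the snippet below gives a taxi share of roughly 89.5%, matching the E_Taxi column up to the rounding of the reported parameters.

```python
import numpy as np

# Agent 1: taxi (10 min, $10), transit (30 min, $3); parameters from Table 3
theta_time, theta_cost, asc_transit = -0.107, -7.30e-8, -0.005
V_taxi    = theta_time * 10 + theta_cost * 10
V_transit = theta_time * 30 + theta_cost * 3 + asc_transit
P = np.exp([V_taxi, V_transit])
P /= P.sum()
print(P)   # ~[0.895, 0.105], i.e. about an 89.5% taxi share as in Table 3
```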
## 4 Case study: New York statewide mode choice modeling
This section presents a case study of New York statewide mode choice modeling. We use a synthetic population dataset provided by Replica Inc., which contains the mode choices of 19.53 million residents on two weekdays, one in the Fall 2019 season and another in the Fall 2021 season. Individual choice observations are grouped into 120,740 agents based on census block-group OD pairs and four population segments (namely, NotLowIncome, LowIncome, Senior, and Student). We use the dataset of Fall 2019 to calibrate the GLAM logit model and use the dataset of Fall 2021 as test data. Mixed logit (MXL), conditional mixed logit (CMXL) and individual parameter logit (IPL) are built as benchmarks. All of the experiments are conducted on a local machine with an Intel(R) Core(TM) i7-10875H CPU and 32GB of installed RAM. Codes are written in Python.
### Setup of experiments
#### 4.1.1 Data preparation
The synthetic population data is provided by Replica Inc. and contains demographic information and trip details of 19.53 million residents. The demographic information contains a set of attributes of each synthetic person, including gender, age, income, employment status, etc. The trip details are synthesized for two typical weekdays, one in the Fall 2019 season and another in the Fall 2021 season. Information for each trip record includes its origin, destination, travel mode, travel time, and distance. Trip cost, transit access time, transit egress time, transit in-vehicle time, and number of transfers can be further retrieved from the original dataset. The dataset of Fall 2019 is treated as training data, and the dataset of Fall 2021 is treated as test data. For each dataset we identify four population segments based on demographic attributes and consider six trip modes recorded by the synthetic trips. For the population segments, we first pick out the "Student" population who are still in schools, colleges, and universities. We then identify a "Senior" population whose age is over 65. To differentiate the "LowIncome" and "NotLowIncome" populations, we refer to the U.S. Federal Poverty Guidelines. The six trip modes include driving (trips made by drivers in private auto vehicles), public transit (trips made by buses, light rail, and subways), on-demand auto (trips made by taxi, or TNC services such as Uber or Lyft), biking (trips made by bikes or
\begin{table}
\begin{tabular}{c c c c c c c} \hline
**Agent ID** & \(\theta_{time,i}\) & \(\theta_{cost,i}\) & \(\theta_{c,transit,i}\) & **Class ID** & **E_Taxi (\%)** & **E_Transit (\%)** \\ \hline
1 & -0.107 & -7.30*10\({}^{-8}\) & -0.005 & 1 & 89.54\% & 10.46\% \\
2 & -0.067 & -1.71*10\({}^{-8}\) & -0.003 & 1 & 79.19\% & 20.81\% \\
3 & -0.040 & -1.68*10\({}^{-9}\) & -0.002 & 1 & 68.94\% & 31.06\% \\
4 & -7.69*10\({}^{-10}\) & -0.301 & 0.043 & 2 & 10.46\% & 89.54\% \\
5 & -3.90*10\({}^{-10}\) & -0.111 & 0.009 & 2 & 20.81\% & 79.19\% \\
6 & -9.46*10\({}^{-9}\) & -0.036 & 0.001 & 2 & 31.06\% & 68.94\% \\
7 & 0.095 & 0.034 & 0.005 & 3 & 10.46\% & 89.54\% \\
8 & 0.036 & 0.016 & -7.19*10\({}^{-4}\) & 3 & 89.54\% & 10.46\% \\ \hline \end{tabular}
\end{table}
Table 3: Estimated parameters for the illustrative example
e-bikes), walking (trips made on foot), and carpool (trips made with several passengers in a private auto vehicle).
Since the statewide dataset is quite large (19.53 million residents times 2.74 trips per resident), it is infeasible to estimate the model at the individual level with the whole sample. Hence, we aggregate individual-level mode choices into market-level mode shares based on census block-group OD pair and population segment, resulting in 120,740 agents with trip volumes larger than 50 trips per day in 2019. Another reason for using a market-level dataset is that average travel time and cost at the OD pair level are more robust for modeling, considering that synthetic population datasets might contain noise and errors in areas with poor data coverage. Table 4 shows the field definition of the market-level dataset, which applies to both the 2019 and 2021 datasets. Each row (agent) contains the population segment ID, block-group IDs of the origin and destination, number of trips per day along the OD pair, average travel time of each mode, average monetary cost of each mode, and the mode shares. Figure 2 visualizes trip OD pairs per population segment in New York State.
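The aggregation step can be expressed as a simple pandas group-by. The sketch below is only illustrative: the raw-trip column names (`person_segment`, `origin_bgrp`, `destination_bgrp`, `mode`, and the numeric time/cost fields) are placeholders and not the actual Replica schema.

```python
import pandas as pd

def aggregate_to_agents(trips: pd.DataFrame, min_trips: int = 50) -> pd.DataFrame:
    """Collapse individual trips into agents (OD block-group pair x segment):
    average attributes, mode shares, and a minimum trip-volume filter."""
    keys = ["person_segment", "origin_bgrp", "destination_bgrp"]
    # average level-of-service attributes per agent (all numeric columns)
    attrs = trips.groupby(keys).mean(numeric_only=True)
    # mode shares per agent
    shares = (trips.groupby(keys)["mode"].value_counts(normalize=True)
                    .unstack(fill_value=0.0).add_prefix("Pro_"))
    counts = trips.groupby(keys).size().rename("Trip_num")
    agents = attrs.join(shares).join(counts).reset_index()
    return agents[agents["Trip_num"] >= min_trips]   # keep agents with >= 50 trips/day
```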
\begin{table}
\begin{tabular}{l l l} \hline
**Field name** & **Sample value** & **Description** \\ \hline
**Agent information** & & \\ Segment\_ID & NotLowIncome & ID of the population segment. \\ origin\_brgp & 360810201001 & ID of the block group from which the trips originated. \\ destination\_brgp & 360810238330 & ID of the block group in which the trips ended. \\ Trip\_num & 502 & Number of trips per day along the OD pair. \\ \hline
**Average travel time** & & \\ Dur\_private\_auto & 0.16 & Average travel time (hour) of private auto. \\ Dur\_access & 0.09 & Average access time (hour) of public transit. \\ Dur\_egress & 0.09 & Average egress time (hour) of public transit. \\ Dur\_in\_vehicle & 0.17 & Average in-vehicle time (hour) of public transit. \\ Num\_transfer & 0.0 & Average number of transfers of public transit. \\ Dur\_on\_demand\_auto & 0.20 & Average travel time (hour) of on-demand auto. \\ Dur\_biking & 0.24 & Average travel time (hour) of biking. \\ Dur\_walking & 0.71 & Average travel time (hour) of walking. \\ Dur\_carpool & 0.23 & Average travel time (hour) of carpool. \\ \hline
**Average monetary cost** & & \\ Cost\_private\_auto & 4.94 & Average monetary cost (\$) of private auto. \\ Cost\_public\_transit & 2.75 & Average monetary cost (\$) of public transit. \\ Cost\_on\_demand\_auto & 5.89 & Average monetary cost (\$) of on-demand auto. \\ Cost\_biking & 0 & Average monetary cost (\$) of biking. \\ Cost\_walking & 0 & Average monetary cost (\$) of walking. \\ Cost\_carpool & 2.47 & Average monetary cost (\$) of carpool. \\ \hline
**Mode share** & & \\ Pro\_private\_auto & 52.07\% & Mode share of private auto. \\ Pro\_public\_transit & 10.05\% & Mode share of public transit. \\ Pro\_on\_demand\_auto & 8.43\% & Mode share of on-demand auto. \\ Pro\_biking & 7.97\% & Mode share of biking. \\ Pro\_walking & 14.45\% & Mode share of walking. \\ Pro\_carpool & 7.03\% & Mode share of carpool. \\ \hline \end{tabular}
\end{table}
Table 4: Field definition of the market-level choice dataset
#### 4.1.2 Model specification and performance metrics
Like RUM models, we assume that individuals choose trip modes to maximize the total utility they can expect to gain. The systematic utilities of choosing six modes are identified at the agent level, as shown in Eq. (26)-(31).
\[V_{driving,i}=\theta_{auto\_tt,i}time_{i}^{driving}+\theta_{cost,i}cost_{i}^{driving }+asc_{driving,i}\,,\qquad\forall i\in I \tag{26}\]
\[V_{transit,i}=\theta_{transit\_at,i}at_{i}+\theta_{transit\_et,i}et_{i}+\theta_{transit\_ivt,i}ivt_{i}+\theta_{transit\_nt,i}nt_{i}+\theta_{cost,i}cost_{i}^{transit}+asc_{transit,i}\,,\qquad\forall i\in I \tag{27}\]
\[V_{on\_demand,i}=\theta_{auto\_tt,i}time_{i}^{on\_demand}+\theta_{cost,i}cost_{i}^{on\_demand}+asc_{on\_demand,i}\,,\qquad\forall i\in I \tag{28}\]
\[V_{biking,i}=\theta_{non\_vehicle,tt,i}time_{i}^{biking}+\theta_{cost,i}cost_{i }^{biking}+asc_{biking,i}\,,\forall i\in I \tag{29}\]
\[V_{walking,i}=\theta_{non\_vehicle,tt,i}time_{i}^{walking}+\theta_{cost,i} cost_{i}^{walking}+asc_{walking,i}\,,\forall i\in I \tag{30}\]
\[V_{carpool,i}=\theta_{auto\_tt,i}time_{i}^{carpool}+\theta_{cost,i}cost_{i}^{ carpool}+asc_{carpool,i}\,,\qquad\forall i\in I \tag{31}\]
where \(i\) denotes the index of an agent (composed of origin_brgp, destination_brgp, and Segment_ID); \(I\) is the set of all agents; \(V_{driving,i}\), \(V_{transit,i}\), \(V_{on\_demand,i}\), \(V_{biking,i}\), \(V_{walking,i}\), \(V_{carpool,i}\) are the systematic utilities of selecting the six modes in agent \(i\); \(time_{i}\) and \(cost_{i}\) are the travel time and monetary cost of the different modes; \(at_{i}\), \(et_{i}\), \(ivt_{i}\), \(nt_{i}\) are the access time, egress time, in-vehicle time, and number of transfers for public transit in agent \(i\). All these observed attributes are from Replica's synthetic data. We separate the taste parameters for auto (driving, on-demand auto, and carpool) travel time, transit (only public transit) travel time, and non-vehicle (biking and walking) travel time. Five alternative-specific constants are introduced with carpool as the reference level. Therefore, each agent has a set of 12 taste parameters to be estimated, including \(\theta_{auto\_tt,i}\), \(\theta_{transit\_at,i}\), \(\theta_{transit\_et,i}\),
Figure 2: Agents per population segment in New York State
\(\theta_{transit\_ivt,i}\), \(\theta_{transit\_nt,i}\), \(\theta_{non\_vehicle\_tt,i}\), \(\theta_{cost,i}\), \(asc_{driving,i}\), \(asc_{transit,i}\), \(asc_{on\_demand,i}\), \(asc_{biking,i}\), \(asc_{walking,i}\). Besides the five constants, all of the taste parameters are expected to have negative signs, so in GLAM logit we set the upper boundary of these parameters to zero. Also, we identify two clusters in GLAM logit by setting \(|K|=2\) in Algorithm 1.
We build several metrics to measure the performance of our approach, including mean squared error (MSE), mean absolute error (MAE), overall accuracy, pseudo loglikelihood, and pseudo R-square. MSE measures the average squared error per agent. MAE measures the average absolute error per agent. Overall accuracy measures the percentage of market share that is correctly predicted by the model. Since our dataset is market-level data, the pseudo R-square is calculated based on a pseudo loglikelihood with market shares \(S_{ij}\) in [0,1] instead of observed choices \(\gamma_{ij}\in\) {0,1}. This still works since the pseudo loglikelihood reaches its maximum value when \(P_{ij}=S_{ij}\). Equations (32) - (35) show how we calculate these metrics. We use these metrics instead of the difference of inverted market shares in the BLP model (Berry et al., 1995) because we found their loss function is not suitable for datasets with unbalanced market shares. For instance, the loss function includes a log-form market share ratio, say \(\ln\left(\frac{P_{ij}}{P_{i0}}\right)\). This term results in a large negative value if \(P_{ij}\) is close to zero.
\[MSE=\frac{1}{|I|}\sum_{i\in I}\sum_{j\in J}\left(P_{ij}-S_{ij}\right)^{2} \tag{32}\] \[MAE=\frac{1}{|I|}\sum_{i\in I}\sum_{j\in J}|P_{ij}-S_{ij}| \tag{33}\] \[ACC=\frac{1}{|I|}\sum_{i\in I}\sum_{j\in J}\min\left(P_{ij},S_{ij}\right) \tag{34}\] \[pseudo\_R2=1-\frac{\sum_{i\in I}\sum_{j\in J}S_{ij}\ln\left(P_{ij}\right)}{\sum_{i\in I}\sum_{j\in J}S_{ij}\ln\left(P_{ij}^{0}\right)} \tag{35}\]
where \(i,j\) denote the indices of agents and modes; \(P_{ij}\) is the predicted market share of mode \(j\) in agent \(i\); \(S_{ij}\) is the observed market share of mode \(j\) in agent \(i\); \(P_{ij}^{0}\) is the predicted market share under the zero model (all modes have the same market share).
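For reference, the four metrics in Equations (32)-(35) can be computed directly from the predicted and observed share matrices; a minimal NumPy version is sketched below, where `P` and `S` have shape (number of agents, number of modes) and the zero model assigns equal shares to all modes.

```python
import numpy as np

def performance_metrics(P, S, eps=1e-12):
    """P, S: (n_agents, n_modes) predicted and observed market shares."""
    n_agents, n_modes = P.shape
    mse = np.sum((P - S) ** 2) / n_agents                 # Eq. (32)
    mae = np.sum(np.abs(P - S)) / n_agents                # Eq. (33)
    acc = np.sum(np.minimum(P, S)) / n_agents             # Eq. (34)
    ll = np.sum(S * np.log(P + eps))                      # pseudo loglikelihood
    ll0 = np.sum(S * np.log(np.full_like(P, 1.0 / n_modes)))  # zero model
    pseudo_r2 = 1.0 - ll / ll0                            # Eq. (35)
    return {"MSE": mse, "MAE": mae, "ACC": acc, "pseudo_R2": pseudo_r2}
```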
#### 4.1.3 Benchmarking
An essential part of our experiments is to compare the performance of GLAM logit with current approaches dealing with taste heterogeneity. We build mixed logit (MXL), conditional mixed logit (CMXL) (Revelt & Train, 2000) and individual parameter logit (IPL) (Swait, 2022) as benchmarks.
For the CMXL, it is infeasible to build it in its original form since the individual-level data is too large and our dataset does not contain previous choice observations of the same individual. As an alternative, we first estimate an MXL model with a 1% sample of the individual data and then reshape the taste distribution based on the market-level mode shares. The 1% dataset contains mode choice observations of 132,854 individuals, which is almost the largest sample size for which MXL converges on a local machine. The conditional taste distribution is then retrieved using a logic similar to that in Train (2009). Equations (36)-(37) show the original approach, and Equations (38)-(39) show the approach used in our work.
\[\bar{\beta}_{n}=\frac{\sum_{r}\beta^{r}\cdot P(\gamma_{n}|\beta^{r})}{\sum_{r}P(\gamma_{n}|\beta^{r})}\,,\qquad\forall n\in N \tag{36}\] \[P(\gamma_{n}|\beta^{r})=\prod_{o=1}^{O}P(\gamma_{n}^{o}|\beta^{r})\,,\qquad\forall n\in N,r\in R \tag{37}\]
\[\bar{\beta}_{i}=\frac{\sum_{r}\beta^{r}\cdot\left[L(S_{i}|\beta^{r})\right]^{-1}}{\sum_{r}[L(S_{i}|\beta^{r})]^{-1}}\,,\qquad\forall i\in I \tag{38}\] \[L(S_{i}|\beta^{r})=\sum_{j\in J}\left(P_{ij}^{r}-S_{ij}\right)^{2},\qquad\forall i\in I,r\in R \tag{39}\]
where \(\bar{\beta}_{n}\), \(\bar{\beta}_{i}\) denote the parameter vectors of individual \(n\) and agent \(i\); \(\beta^{r}\) denotes the \(r\)-th draw from the population taste distribution \(g(\beta|\theta)\). We replace the probability of individual \(n\)'s choice \(P(\gamma_{n}|\beta^{r})\) with the inverse of agent \(i\)'s loss function \(L(S_{i}|\beta^{r})\). The loss function measures the squared Euclidean distance between the predicted market share \(P_{ij}^{r}\) and the observed market share \(S_{ij}\) (it is also used to calculate the MSE). This works since the basic logic of conditioning individual tastes is to take \(R\) draws of \(\beta\) from the population distribution and determine a weighted average of these draws (Train, 2009).
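In code, the reshaping in Equations (38)-(39) is simply an inverse-loss-weighted average of the population draws. The sketch below assumes a `predict_shares(beta, X_i)` helper returning the logit shares of one agent, which is a placeholder for whatever share predictor is used.

```python
import numpy as np

def conditional_beta(draws, X_i, S_i, predict_shares, eps=1e-12):
    """draws: (R, n_par) draws beta^r from g(beta|theta); returns beta_bar_i, Eq. (38).
    predict_shares(beta, X_i) -> predicted shares P_i^r for agent i (placeholder)."""
    losses = np.array([np.sum((predict_shares(b, X_i) - S_i) ** 2) for b in draws])  # Eq. (39)
    w = 1.0 / (losses + eps)                 # inverse-loss weights
    return (w[:, None] * draws).sum(axis=0) / w.sum()
```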
For the individual parameter logit (IPL) model, we also made some modifications to fit it to the market-level dataset. Instead of maximizing the loglikelihood function as Swait (2022) did, we minimize the loss function in order to enable agent-specific estimation with the market-level dataset. This can be fulfilled by replacing Equations (6)-(7) with Equations (40)-(41).
\[l_{i}\big{(}\hat{\theta}_{ig}\big{)}=\sum_{j\in J}\left(P_{ij}^{g}-S_{ij}\right)^{2}\,,\qquad\forall i\in I,g\in\{1,...,G\} \tag{40}\] \[L_{g}=\sum_{i\in I}\ln\left[l_{i}\big{(}\hat{\theta}_{ig}\big{)}\right],\qquad\forall g\in\{1,...,G\} \tag{41}\]
where Equation (40) defines the loss function given the predicted and observed market shares in agent \(i\), and Equation (41) serves as the objective function of the evolutionary algorithm. We use almost the same evolutionary algorithm and gradient descent algorithm as in Swait (2022). We test three different candidate pools: (1) 200 initial candidates randomly sampled from the parameter space [-5,5]; (2) 200 initial candidates randomly sampled from the parameter space [-10,10]; (3) 200 initial candidates randomly sampled from the parameter distribution estimated by MXL. We use 200 candidates because a larger pool would use up the CPU memory of our local machine. Figure 3 shows the performance of the evolutionary algorithm given different candidate pools, from which we can see that the value of the loss function per iteration is quite sensitive to the initial candidate pool. This indicates that a bad candidate pool might trap the algorithm in local optima, resulting in poor model performance. To speed up the algorithm and ensure better performance, we use MXL draws as the initial candidate pool.
### Model results
We present the model results from three aspects: (1) basic statistics; (2) distribution of taste parameters; and (3) performance metrics.
Figure 3: Total loss in each iteration. In (a)-(c), x-axis is the number of iterations, y-axis is the value of loss function.
#### 4.2.1 Basic statistics
Table 5 summarizes the basic statistics of the model results, from which we can compare our GLAM logit model to the benchmark models under the same context. For MXL and CMXL we cannot set every parameter to be randomly distributed, otherwise the models cannot converge. To this end, we only set five random parameters in MXL and CMXL and do not allow correlations among these parameters. Several interesting points were found:
1. The results of MXL, CMXL, IPL and GLAM logit show great consistency in signs: time-related parameters (auto travel time, transit in-vehicle time, non-vehicle travel time, etc.) have negative signs, the parameter of trip cost has a negative sign, and the constant for on-demand auto and biking have negative signs. These signs align with our empirical knowledge.
2. The scale of the five random parameters estimated by MXL and CMXL is quite large, implying that these parameters might be "over-estimated" when the other parameters are treated as fixed values. This is inevitable since MXL cannot converge if we set all of the parameters to be normally distributed, which is one of the drawbacks of parametric estimation. Though IPL addresses this issue through nonparametric estimation, the effect is limited because its candidate pool is generated from the MXL results. Unfortunately, this is also inevitable since otherwise it is hard to find a proper candidate pool that speeds up the algorithm and avoids getting stuck in local optima.
3. In comparison, the parameter scales estimated by GLAM logit are balanced. The scales of the mode-specific constants in GLAM logit are similar to the other benchmarks, while the scales of the time-related parameters are much smaller and align with existing mode choice studies. The advantage of allowing randomness in all parameters is also reflected by the pseudo log-likelihood value, with -142,139 in CMXL, -126,241 in IPL, and -82,264 in GLAM logit.
4. Two clusters were identified by GLAM logit. The first one accounts for 74.40% of the agents, in which constants of transit, on demand auto, and walking are large negative values (referring to longer trips in rural areas). The second one accounts for 25.60% of the agents, in which the value of time (the ratio between the parameters of auto travel time and trip cost) is higher and the standard deviation of mode-specific constants is larger (referring to shorter trips in urban areas).
5. GLAM logit took 2.15 hours to converge given 120,740 agents, which is higher than MXL (12 min), CMXL (17 min), and IPL (1.31 hours). The longer computational time of GLAM logit is partly due to our coding structure, which is acceptable given a large dataset.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & **Mixed logit** & **Conditional** & **Individual** & **GLAM logit** \\ & **(MXL)** & **mixed logit** & **parameter logit** & **Cluster 1** & **Cluster 2** \\ & **(CMXL)** & **(IPL)** & **(74.40\%)** & **(25.60\%)** \\ \hline Auto travel time & -4.098*** & -5.506 & -7.754 & -0.252 & -0.498 \\ (\(\theta_{auto,tt,t\downarrow}\)) & (13.895) & (6.595) & (2.220) & (0.114) & (0.392) \\ Transit in-vehicle time & -4.665*** & -4.838 & -5.335 & -0.923 & -0.425 \\ (\(\theta_{transit,t\downarrow}\)) & (2.115) & (0.711) & (0.407) & (0.245) & (0.248) \\ Transit access time & -11.055*** & -11.065 & -10.915 & -0.434 & -0.234 \\ (\(\theta_{transit,at\downarrow}\)) & (–) & (–) & (0.091) & (0.097) & (0.105) \\ Transit gress time & -6.034*** & -6.034 & -6.229 & -0.486 & -0.251 \\ (\(\theta_{transit,t\downarrow}\)) & (–) & (–) & (0.119) & (0.107) & (0.119) \\ Number of transfers & -1.419*** & -1.419 & -1.495 & -1.612 & -1.391 \\ (\(\theta_{transit,nt,t\downarrow}\)) & (–) & (–) & (0.047) & (0.343) & (0.631) \\ Non-vehicle travel time & -17.591*** & -17.768 & -9.430 & -2.636 & -1.456 \\ (\(\theta_{non,vehicle,t\downarrow}\)) & (7.905) & (4.123) & (4.640) & (0.588) & (0.730) \\ Trip cost & -0.606*** & -0.576 & -0.164 & -0.146 & -0.082 \\ (\(\theta_{cost,t\downarrow}\)) & (0.590) & (0.314) & (0.249) & (0.175) & (0.155) \\ Driving constant & 0.136*** & 0.221 & 0.660 & 0.445 & 1.711 \\ (\(asc_{driving,t}\)) & (1.111) & (0.562) & (0.324) & (0.123) & (1.976) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Basic statistics of models build with 2019 dataset (each entry represents the average value of one estimated parameter, and the number in the parenthesis is the standard deviation)
#### 4.2.2 Distribution of taste parameters
The GLAM logit model with the 2019 dataset converged at the 13\({}^{\rm th}\) iteration of Algorithm 1 (percentage change of the fixed-point priors less than 0.5%), resulting in empirically derived parameter estimates for each agent. Figure 4 (a)-(b) shows the mean value of the twelve taste parameters in each iteration. These taste parameters turn out to be neither Gumbel nor Gaussian distributed.
Figure 4 (c)-(f) shows the empirical distribution of the taste parameters of the two clusters. In general, the twelve parameters can be divided into several categories: (1) highly concentrated parameters with a unimodal distribution, such as transit in-vehicle time (transit_ivt), transit access time (transit_at)/transit egress time (transit_et) in cluster 1, and trip cost (cost). These parameters are concentrated around their mean values with small variations, indicating homogeneous tastes among agents; (2) highly concentrated parameters with a multimodal distribution, such as auto travel time (auto_tt) and transit access time (transit_at)/transit egress time (transit_et) in cluster 2. These parameters also have small variations while showing the potential of exploring more clusters (in our experiments we only consider two clusters); and (3) evenly distributed parameters with a multimodal distribution, such as most of the mode-specific constants. These parameters have larger variations, indicating heterogeneous tastes among agents. The results make sense since travelers usually have similar tastes for travel time and cost while their preferences for trip modes are quite diverse.
We pick one parameter from each category to compare the distributions estimated by GLAM logit and the benchmark models. Figure 5 presents the probability density function (PDF) of three selected parameters: (1) a highly concentrated parameter with a unimodal distribution, trip cost; (2) a highly concentrated parameter with a multimodal distribution, auto travel time; and (3) an evenly distributed parameter with a multimodal distribution. In MXL, since all of the parameters are assumed to be unimodal, multimodal distributions cannot be identified (except for MXL with latent class settings, which is more complicated). Though CMXL reshapes the unimodal distribution conditional on agent-level choices, the effect is limited because the result is still a parametric distribution. Moreover, GLAM logit ensures that all agent-specific parameters of time and cost are negative. In contrast, some of these parameters in MXL, CMXL, and IPL are positive, which raises issues in demand prediction (e.g., the predicted market share increases with travel time). To this end, GLAM logit provides a flexible and direct approach for modelers to capture taste homogeneity and heterogeneity, which is infeasible in MXL/CMXL and hard to achieve in IPL.
#### 4.2.3 Performance metrics
We compare the performance of GLAM logit and the benchmark models with two datasets. The mode choice dataset of 2019 is used to calculate in-sample performance, and the mode choice dataset of 2021 is used to calculate out-of-sample performance. Table 6 shows a comparison of in-sample and out-of-sample performance metrics. The in-sample performance of GLAM logit is the highest in general, with an MSE of 0.0024, an MAE of 0.0605, an overall accuracy of 96.97%, and a pseudo R-square of 0.6197. In comparison, the best model among the benchmarks is IPL, with an MSE of 0.0623, an MAE of 0.3168, an overall accuracy of 84.16%, and a pseudo R-square of 0.4445. The higher in-sample performance of GLAM logit is mainly due to its bottom-up logic of nonparametric estimation, i.e., each set of agent taste parameters is estimated by solving a QP problem instead of assuming a parametric distribution (like MXL and CMXL) or first generating taste candidate pools and then conducting gradient search (like IPL). Such a nonparametric and deterministic estimation enables GLAM logit to capture taste heterogeneity with great flexibility.
Figure 4: Mean values and parameter distributions. In (a)-(b), x-axis is the number of iterations, y-axis is parameter mean value. In (c)-(f), x-axis is the value of estimated parameters, y-axis is the probability density.
Figure 5: Comparison of parameter distributions estimated by GLAM logit and benchmark models. In (a)-(c), x-axis is the value of estimated parameters, y-axis is the probability density.
When it comes to the out-of-sample performance, GLAM logit is still the best, though its performance is a bit lower than in-sample. GLAM logit has an MSE of 0.0285, an MAE of 0.2191, an overall accuracy of 89.04%, and a pseudo R-square of 0.5788. The best model among the benchmarks is also IPL, with an MSE of 0.0717, an MAE of 0.3350, an overall accuracy of 82.35%, and a pseudo R-square of 0.4165. The risk of overfitting in GLAM logit is acceptable compared with the AMXL model in Ren and Chow (2022), which has a similar estimation logic. The low risk of overfitting in GLAM logit can be attributed to two aspects. On the one hand, GLAM logit uses aggregate market shares instead of single choice observations, which are more stable given noise and measurement errors in the dataset. The agent-level structure makes the parameter estimation more tractable and robust by fitting multiple choice observations at once. On the other hand, the ubiquitous dataset provides sufficient information for our modeling. That is why our benchmark models also have decent out-of-sample performance. With the 2019 and 2021 datasets from Replica Inc., we already have trip OD pairs throughout New York State. In that case, we no longer need to transfer from a sample to the population. Instead, we can directly use the agent-level parameters to predict the market share in any OD pair given new scenarios in the future. The slight drop in accuracy also indicates that travelers' mode preferences in New York State did not change much from 2019 to 2021, despite the considerable impact of COVID-19. This is probably because (1) the test data is for September-December 2021, when travel patterns had largely returned to normal conditions; and (2) while COVID-19 impacted NYC greatly, it had smaller impacts in smaller cities and rural areas.
### NY statewide mode choice preference retrieved from GLAM logit
Since each set of parameters is linked to an agent id, we can further check the spatial distribution of taste parameters, as well as the taste heterogeneity among population segments. This section presents the statewide mode choice preferences retrieved from GLAM logit. We first calculate the value-of-time (VOT) by dividing the parameter of auto travel time by the parameter of trip cost. The unit of VOT is $/hour, which is widely used in transportation studies. We then check the spatial distribution of the mode-specific constants in New York State and New York City, which reflects travelers' mode preferences along different OD pairs. These results are essential for policymaking in statewide transportation projects.
#### 4.3.1 Value-of-time (VOT) in NYS and NYC
Table 7 lists the average VOT of the four population segments in New York State and New York City. The results are consistent with existing studies and our empirical knowledge in that: (1) the average VOT in New York City is generally higher than in New York State; (2) the NotLowIncome population has the highest VOT among all segments ($13.95/hour in NYS and $28.05/hour in NYC); (3) the LowIncome and Senior populations have relatively lower VOT (for the LowIncome population, $9.63/hour in NYS and $21.67/hour in NYC; for the Senior population, $6.26/hour in NYS and $10.93/hour in NYC). It is noted that the average VOT in NYS is a bit lower than in existing studies. This is probably because
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & **MXL** & **CMXL** & **IPL** & **GLAM logit** \\ \hline
**In-sample performance metrics** & & & & \\ Mean squared error (MSE) & 0.2193 & 0.1270 & 0.0623 & 0.0024 \\ Mean absolute error (MAE) & 0.5793 & 0.4543 & 0.3168 & 0.0605 \\ Overall accuracy & 61.03\% & 77.28\% & 84.16\% & 96.97\% \\ Pseudo R-square & 0.0715 & 0.3993 & 0.4445 & 0.6197 \\ \hline
**Out-of-sample performance metrics** & & & & \\ Mean squared error (MSE) & 0.1875 & 0.1345 & 0.0717 & 0.0285 \\ Mean absolute error (MAE) & 0.5205 & 0.4785 & 0.3350 & 0.2191 \\ Overall accuracy & 66.08\% & 76.08\% & 82.35\% & 89.04\% \\ Pseudo R-square & 0.1032 & 0.3430 & 0.4165 & 0.5788 \\ \hline \hline \end{tabular} Note: In-sample metrics are calculated using the 2019 dataset. Out-of-sample metrics are calculated using the 2021 dataset.
\end{table}
Table 6: In-sample and out-of-sample performance of GLAM logit and benchmark models
our model includes many agents with low trip volume and low VOT in rural areas, which would be ignored in transportation surveys. The synthetic population data give us a chance to bring them back and build an ethical model that considers underserved areas.
Figure 6 presents the spatial distribution of VOT in New York City and New York State. In New York State, the VOT in New York City, Albany, Buffalo, Syracuse, Rochester, and Ithaca is noticeably higher than in other areas. Among these cities, NYC has the highest VOT. In NYC, trips related to Manhattan and trips to JFK airport have relatively higher VOT, while trips in Staten Island have relatively lower VOT. These findings are good references for the operating strategies of large-scale transportation services (e.g., Uber/Lyft, paratransit, p2p carpooling, etc.).
#### 4.3.2 Mode preferences in NYS and NYC
Figures 7-9 present the spatial distribution of three mode-specific constants: the driving constant, the transit constant, and the walking constant. We do not present the other two mode constants due to space limitations. Several interesting points about mode preference were found.
1. The constant of the driving mode is generally positive in NYS and NYC, with a small proportion being negative in rural areas. The positive value is higher in major cities than in other areas. Since we set the carpool mode as the reference level, the distribution of the driving constant has a relatively smaller variation, implying that driving and carpool share some similarities in travelers' minds.
2. The constant of transit mode is obviously higher in New York City than in other regions. Within NYC, we can find three "corridors" with high preference for public transit: 1) from upper-Manhattan to lower-Manhattan (subway line A&C&B&D), 2) from downtown Brooklyn to Manhattan (subway line B&D&N&Q), and 3) from Queens to mid-Manhattan (subway line E&F&M&R).
3. The constant of the walking mode is positive for shorter trips and negative for longer trips. Short trips in New York City have the largest positive value. This makes sense since travelers prefer to walk within a walkable distance and in a safe environment.
\begin{table}
\begin{tabular}{l|c|c} \hline & **Average VOT in NY State** & **Average VOT in NYC** \\ \hline
**NotLowIncome Population** & $13.95/hour & $28.05/hour \\ \hline
**LowIncome Population** & $9.63/hour & $21.67/hour \\ \hline
**Senior Population** & $6.26/hour & $10.93/hour \\ \hline
**Student Population** & $7.40/hour & $10.96/hour \\ \hline \end{tabular}
\end{table}
Table 7: Value of time (VOT) of different population segments
Figure 6: Value of time (VOT) distribution in NYS and NYC
Figure 8: Distribution of transit constant in NYS and NYC
Figure 7: Distribution of driving constant in NYS and NYC
Figure 9: Distribution of walking constant in NYS and NYC
## 5 Conclusion
Demand for large-scale choice modeling is on the rise, and understanding the taste heterogeneity among individuals or agents requires innovative methods to deal with aggregate data retrieved from big datasets. We propose the group-level agent-based mixed logit model (GLAM logit) combined with an inverse optimization (IO) estimation method, an agent-level machine learning method that is theoretically consistent with a utility-maximizing mixed logit framework. This method is designed for a ubiquitous dataset representing a whole population, which is made possible by Big Data. It contributes to the field by overcoming limitations of conventional discrete choice models and current nonparametric logit models given ubiquitous datasets.
The methodological contributions of this study are as follows. First, we transferred the estimation framework from the individual level to the agent level by aggregating individual choice observations into agent-level market shares. The agent-level structure makes the parameter estimation more tractable and robust given ubiquitous datasets with inevitable noise and measurement errors. Our results show that the out-of-sample accuracy is slightly lower than the in-sample accuracy (89.04% versus 96.97%), which indicates an acceptable risk of overfitting. Second, we use a bottom-up logic of nonparametric estimation, i.e., each set of agent taste parameters is estimated by solving a QP problem instead of assuming a parametric distribution or updating taste candidate pools. The constrained nonparametric estimation enables GLAM logit to capture taste heterogeneity with a flexibility that is hard to achieve with MXL, CMXL, and IPL. A comparison of the parameter distributions and out-of-sample performance reflects this advantage. Third, since each agent's taste is fully specified and linked to the agent id, we can retrieve statewide mode choice preferences from the results of GLAM logit. Our experiments show that the spatial distribution of VOTs and mode-specific constants is explainable using our empirical knowledge, can be directly integrated into system design optimization models as demand predictions, and is essential for policymaking in statewide transportation projects.
Despite the advantages above, there are many new research opportunities and questions to be addressed. While this study looks only at mode choice, the model can be customized to examine more diverse choice scenarios. Also, our current results show that some parameter distributions are multimodal, indicating that more clusters could be identified. Since we only consider two clusters in this study, it is worth exploring how to find the best number of clusters in the future. Last but not least, GLAM logit is designed for a single choice scenario with millions of choice observations. Hence, our current model structure cannot deal with cases in which the choice set and alternative attributes vary across choice scenarios. Estimating taste parameters with multiple choice dimensions and a large sample size will be another direction of our future work.
## Acknowledgments
Funding support from C2SMART and NYU's Summer Undergraduate Research Program are appreciated. Data shared by Replica are gratefully acknowledged.
|
2309.04425 | Equivariant volume extremization and holography | In a previous paper two of us (D.M. and A.Z.) proposed that a vast class of
gravitational extremization problems in holography can be formulated in terms
of the equivariant volume of the internal geometry, or of the cone over it. We
substantiate this claim by analysing supergravity solutions corresponding to
branes partially or totally wrapped on a four-dimensional orbifold, both in
M-theory as well as in type II supergravities. We show that our approach
recovers the relevant gravitational central charges/free energies of several
known supergravity solutions and can be used to compute these also for
solutions that are not known explicitly. Moreover, we demonstrate the validity
of previously conjectured gravitational block formulas for M5 and D4 branes. In
the case of M5 branes we make contact with a recent approach based on
localization of equivariant forms, constructed with Killing spinor bilinears. | Edoardo Colombo, Federico Faedo, Dario Martelli, Alberto Zaffaroni | 2023-09-08T16:38:50Z | http://arxiv.org/abs/2309.04425v3 | # Equivariant volume extremization and holography
###### Abstract
In a previous paper two of us (D.M. and A.Z.) proposed that a vast class of gravitational extremization problems in holography can be formulated in terms of the equivariant volume of the internal geometry, or of the cone over it. We substantiate this claim by analysing supergravity solutions corresponding to branes partially or totally wrapped on a four-dimensional orbifold, both in M-theory as well as in type II supergravities. We show that our approach recovers the relevant gravitational central charges/free energies of several known supergravity solutions and can be used to compute these also for solutions that are not known explicitly. Moreover, we demonstrate the validity of previously conjectured gravitational block formulas for M5 and D4 branes. In the case of M5 branes we make contact with a recent approach based on localization of equivariant forms, constructed with Killing spinor bilinears.
_Dipartimento di Fisica, Universita di Milano-Bicocca_
_Piazza della Scienza 3, 20126 Milano, Italy_
_INFN, Sezione di Milano-Bicocca, Piazza della Scienza 3, 20126 Milano, Italy_
_Dipartimento di Matematica "Giuseppe Peano", Universita di Torino,_
_Via Carlo Alberto 10, 10123 Torino, Italy_
_INFN, Sezione di Torino, Via Pietro Giuria 1, 10125 Torino, Italy_
###### Contents
* 1 Introduction
* 2 Equivariant volume with higher times
* 3 AdS\({}_{3}\) and AdS\({}_{5}\) solutions in M theory
* 3.1 AdS\({}_{3}\times M_{8}\) solutions
* 3.2 AdS\({}_{5}\times M_{6}\) solutions
* 3.3 Comparison with other approaches
* 4 AdS\({}_{2}\), AdS\({}_{3}\) and AdS\({}_{4}\) solutions in type II supergravities
* 4.1 AdS\({}_{2}\times M_{8}\) solutions in massive type IIA
* 4.2 AdS\({}_{4}\times M_{6}\) solutions in massive type IIA
* 4.3 AdS\({}_{3}\times M_{7}\) solutions in type IIB
* 5 Conclusions
* A Fixing the Kahler moduli of AdS\({}_{5}\times M_{6}\) solutions with \(\mathbb{Z}_{2}\) symmetry
* B AdS\({}_{3}\times M_{7}\) solutions with the addition of higher times
* C AdS\({}_{4}\times M_{6}\) gravity solutions
* C.1 Kahler-Einstein base space
* C.2 \(S^{2}\times S^{2}\) base space
## 1 Introduction
In this paper we propose a general prescription to write extremal functions for supergravity solutions with a holographic dual. The extremal functions depend on equivariant parameters for the expected abelian isometries of the background and a set of parameters describing the geometry. The extremization with respect to the parameters gives the free energy of the supergravity solution, which is holographically equal to the central charge or free energy of the dual conformal field theory. On the quantum field theory side, this construction is the gravitational dual of extremizing the central charge (in even dimensions) or the sphere partition function (in odd dimensions) in order to find the exact R-symmetry. There is a huge literature about extremal functions for black holes, and more generally for holographic solutions. Extremal functions of known black holes and black strings can be expressed in terms of gravitational blocks [1] and strongly suggest that some equivariant localization is at work. Following [2] we will indeed express the extremal functions in terms of a universal geometrical quantity, the equivariant volume of the internal supergravity geometry.
Given a symplectic orbifold \(\mathbb{M}_{2m}\) of real dimension \(2m\) with a toric1 action of \(\mathbb{T}^{m}=U(1)^{m}\) generated by the Hamiltonian \(H\), we can define the equivariant volume
Footnote 1: The toric assumption is not essential, but is made for two reasons. Firstly, if a geometry has a symmetry group that contains \(\mathbb{T}^{m}=U(1)^{m}\), we need to extremize over the corresponding \(m-1\) equivariant parameters not fixed by supersymmetry, otherwise the critical point found would not be a _bona fide_ extremum of the gravitational action. Secondly, in this case the fixed point theorem simplifies to a sum of contributions at isolated fixed points. More generally, it would be straightforward to proceed assuming a \(\mathbb{T}^{k}=U(1)^{k}\) Hamiltonian action, with \(1<k<m\).
\[\mathds{V}=(-1)^{m}\int_{\mathbb{M}_{2m}}\mathrm{e}^{-\frac{\omega^{\mathbb{T }}}{2\pi}}\,, \tag{1.1}\]
where \(\omega^{\mathbb{T}}=\omega+2\pi H\) is the equivariant Kahler form. In addition to the vectors of the fan \(V^{A}\), it depends on the \(m\) equivariant parameters \(\epsilon_{I}\) for the torus \(\mathbb{T}^{m}\) action and on the Kahler parameters \(\lambda_{A}\) of the geometry. The latter enter the expansion of the Kahler class in a sum of Chern classes of toric line bundles
\[-\frac{[\omega]}{2\pi}=\sum_{A}\lambda_{A}c_{1}(L_{A})\,. \tag{1.2}\]
The equivariant volume of toric orbifolds is a basic topological object. It can be computed using a fixed point formula and it is only sensitive to the degenerations of the torus \(\mathbb{T}^{m}\) near the fixed points. In the applications to holography one encounters metrics that are not Kahler and not even symplectic, but with underlying spaces that are in fact symplectic toric orbifolds and one can nevertheless define \(\mathds{V}\) and use it to compute topological quantities that ultimately will not depend on the metric. In many examples when the underlying geometry is not strictly symplectic or toric we can also define a natural generalization of \(\mathds{V}\) by a sort of analytical continuation.2 Given these properties, the equivariant volume is the gravitational analogue of quantum field theory quantities like 't Hooft anomalies and supersymmetric indices that are invariant under small deformations of the theory once symmetries and matter content are fixed. In [2] it was argued therefore that all extremization problems in gravity can be reformulated in terms of the equivariant volume. It was shown that this is true for volume minimization [3, 4] (dual to \(a\)[5] and \(F\)-maximization [6]) and the formalism of GK geometry [7, 8] (dual to \(c\)[9] and \(\mathcal{I}\)-extremization [10]). It was proposed that this should be true more generally.
Footnote 2: This happens for geometries where the fan is not strictly convex or geometries involving \(S^{4}\).
As a partial check of this proposal, it has been shown in [2] that all known extremization problems for branes wrapped over a sphere or a spindle in type II and M theory can be reformulated in terms of an extremal function
\[F\,=\,\mathds{V}^{(\alpha)}(\lambda_{A},\epsilon_{I})\,, \tag{1.3}\]
subject to a set of flux constraints
\[\nu M_{A}=-\frac{\partial\mathds{V}^{(\beta)}}{\partial\lambda_{A}}\,, \tag{1.4}\]
where \(M_{A}\) are the integer fluxes of the relevant RR or M theory antisymmetric form, obeying
\[\sum_{A}V_{I}^{A}M_{A}\,=\,0\,. \tag{1.5}\]
\(\nu\) is a normalization constant3 that depends on the type of brane and the dimension of the internal geometry, and \(\mathds{V}^{(\gamma)}\) is the homogeneous piece of \(\mathds{V}\) of degree \(\gamma\) in \(\lambda_{A}\). We also note that from (1.4) and (1.5) it follows, using the properties of \(\mathds{V}\), that the constraint
Footnote 3: _A priori_ there is also an overall normalization constant in the definition of \(F\), again depending on the type of brane and the dimension of the internal geometry, however this can always be absorbed in a rescaling of the \(\lambda_{A}\), using the homogeneity of \(\mathds{V}^{(\gamma)}\). For simplicity, in the examples we will indicate only the type of brane as a subscript in \(\nu\), omitting the dependence on the dimension of the internal geometry.
\[\mathds{V}^{(\beta-1)}\,=\,0 \tag{1.6}\]
must be satisfied. Although it is not an independent relation, one can regard this as a topological constraint necessary in order to impose the flux quantization, analogously to the GK formalism [7]. The integers \(\alpha\) and \(\beta\) depend on the type of brane. By a simple scaling argument, it was found that
\[\begin{array}{ll}\text{D3 branes in type IIB:}&\alpha=2\,,\ \ \beta=2\\ \text{M2 branes in M theory:}&\alpha=3\,,\ \ \beta=3\\ \text{M5 branes in M theory:}&\alpha=3\,,\ \ \beta=2\\ \text{D4 branes in massive type IIA:}&\alpha=5\,,\ \ \beta=3\\ \text{D2 branes in massive type IIA:}&\alpha=5\,,\ \ \beta=4\,.\end{array} \tag{1.7}\]
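Since the prescription repeatedly refers to the homogeneous pieces \(\mathds{V}^{(\gamma)}\), we note that they can be extracted mechanically by rescaling \(\lambda_{A}\to t\,\lambda_{A}\) and reading off the coefficient of \(t^{\gamma}\). The small sympy utility below (our illustration, applied to a placeholder expression rather than an actual supergravity equivariant volume) implements this extraction.

```python
import sympy as sp

def homogeneous_piece(V, lambdas, gamma):
    """Degree-gamma piece of V in the Kahler parameters lambdas:
    substitute lambda_A -> t*lambda_A and take the t^gamma Taylor coefficient."""
    t = sp.symbols("t")
    Vt = V.subs({la: t * la for la in lambdas})
    return sp.series(Vt, t, 0, gamma + 1).removeO().coeff(t, gamma)

# toy check on a placeholder expression (not a supergravity equivariant volume)
l1, l2, e = sp.symbols("lambda1 lambda2 epsilon")
V = e * l1 + l1 * l2 + l1**2 * l2 / e
print(homogeneous_piece(V, [l1, l2], 2))   # -> lambda1*lambda2
```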
The extremal function \(F\) can be normalized such that its extremum reproduces the central charge of the dual field theory in even dimensions and the logarithm of the sphere partition function in odd dimensions, and we will use this convention in the following.
In this paper we show that this construction also holds for known extremization problems for branes (partially or totally) wrapped over four-dimensional toric orbifolds. We need to generalize our construction by introducing _higher times_ in the equivariant volume
\[\mathds{V}=(-1)^{m}\int_{\mathbb{M}_{2m}}\mathrm{e}^{-\frac{\omega^{\mathbb{T}}}{2\pi}+\sum_{k\geq 2}\lambda_{A_{1}\ldots A_{k}}c_{1}^{\mathbb{T}}(L_{A_{1}})\ldots c_{1}^{\mathbb{T}}(L_{A_{k}})}\,, \tag{1.8}\]
where \(\lambda_{A_{1}\ldots A_{k}}\) are symmetric tensors and a sum over repeated indices \(A_{i}\) is understood. Higher times have appeared only recently in the literature [11] and are still poorly studied. The previous expression has a large gauge invariance and many parameters are redundant. As we will see, the equivariant volume with higher times
contains all the information needed to fully capture the topological properties and the quantization of fluxes for a very large class of supergravity solutions.
The above construction relies on even-dimensional toric orbifolds. For supergravity backgrounds AdS\({}_{d}\times M_{k}\) with odd-dimensional internal space \(M_{k}\) the geometry to consider is the cone over \(M_{k}\), as familiar from holography. This cone is often a non-compact toric Calabi-Yau, or, in the case of supersymmetry preserved with anti-twist, a non-convex generalization.4 When \(M_{k}\) is even-dimensional, we consider the equivariant volume of the compact \(M_{k}\) itself. Some M5 brane solutions have a \(\mathbb{Z}_{2}\) symmetry that allows to cut into half the number of fixed point and consider an equivalent problem for a non-compact Calabi-Yau (half of the manifold). This was done in [2] for M5 branes wrapped on a spindle.
Footnote 4: See for example [12].
Our approach naturally incorporates the GMS construction based on GK geometry [7, 8] as well as the recent localization technique based on Killing spinor bilinears in M theory [13]. Indeed, we will show that, for M5 solutions with even-dimensional \(M_{6}\) or \(M_{8}\), our approach is effectively equivalent to the one in [13]. In particular, all the geometrical constraints that must be imposed on a case-by-case analysis in order to find the free energy in [13] appear naturally in our construction as an extremization with respect to all the parameters that are not fixed by the flux quantization conditions. On the one hand, this is a nice confirmation of our prescription. On the other hand, our approach for the toric case is more general, it covers in a simple and universal way the even and odd-dimensional cases, it naturally extends to massive type IIA solutions, which are not yet covered by the previous techniques, and expresses everything in terms of the extremization of a universal quantity, the equivariant volume of the associated geometry, without referring to supergravity quantities. We are confident that when the explicit case-by-case supergravity analysis will be performed for the missing backgrounds it will confirm our general prescription.
The paper is organized as follows. In section 2 we define the equivariant volume of a general toric orbifold and we review some of its basic properties following [2]. We also introduce the concept of higher times, which are necessary to parameterize all the fluxes supported by a given geometry. In section 3 we analyse M theory solutions with M5 brane flux. In section 3.1 we consider solutions associated with M5 branes wrapped over a four-dimensional orbifold \(\mathbb{M}_{4}\). We show that the free energy can be obtained by extremizing the appropriate term in the equivariant volume and that the result agrees with the field theory computation in [2], obtained by integrating the anomaly of the M5 brane theory over \(\mathbb{M}_{4}\). In section 3.2 we consider solutions that are potentially related to M5 branes wrapped on a two-cycle in \(\mathbb{M}_{4}\). By extremizing the appropriate term in the equivariant volume, we reproduce known results in the literature and extend them to predictions for solutions still to be found. In section 3.3 we compare our prescription with the recent approach based on Killing spinor
bilinears in M theory [13]. In section 4 we consider solutions in type II string theory with geometries that are fibrations over a four-dimensional orbifold \(\mathbb{M}_{4}\). In section 4.1 we consider massive type IIA solutions associated with D4 branes wrapped around a four-dimensional toric orbifold \(\mathbb{M}_{4}\) and derive the free energy proposed in [14]. In section 4.2 we consider massive type IIA solutions associated with a system of D4/D8 branes, with the former wrapped on a two-cycle in \(\mathbb{M}_{4}\). Extremizing the appropriate term in the equivariant volume we are able to reproduce the gravitational free energy computed from the explicit solution. In section 4.3 we consider type IIB solutions with D3 flux associated with \(S^{3}/\mathbb{Z}_{p}\) fibrations over \(\mathbb{M}_{4}\), which could potentially arise as the near-horizon limit of a system of D3 branes wrapped on a two-cycle of the four-dimensional orbifold \(\mathbb{M}_{4}\). This example can be covered by the formalism of GK geometry, that we here extend to the case of fibrations over orbifolds, using the equivariance with respect to the full four-torus \(\mathbb{T}^{4}\). In this and other previous examples with M5 branes, we observe that, in order to obtain the correct critical point, one should allow all the equivariant parameters not fixed by symmetries to vary, thus rectifying some previous results in the literature. We conclude with a discussion of open problems and future perspectives. Three appendices contain technical aspects of some computations.
## 2 Equivariant volume with higher times
In this section we review and generalize some basic facts about the equivariant volume of general toric orbifolds that will be used in the following. We adopt the conventions of [2], to which we refer for more details and a review of equivariant localization.
We consider a toric orbifold \(\mathbb{M}_{2m}\) with a \(\mathbb{T}^{m}\) action generated by the \(m\) vector fields \(\partial_{\phi_{I}}\). We introduce \(m\) equivariant parameters \(\epsilon_{I}\), with \(I=1,\ldots,m\), and the vector field \(\xi=\epsilon_{I}\partial_{\phi_{I}}\), and consider equivariantly closed forms \(\alpha^{\mathbb{T}}\) satisfying
\[(\mathrm{d}+2\pi i_{\xi})\alpha^{\mathbb{T}}=0\,. \tag{2.1}\]
We will be dealing with varieties and orbifolds of different dimension and, when needed, we will also write \(\alpha^{\mathbb{T}^{m}}\) to specify the dimension. Each toric orbifold comes equipped with a fan, a collection of integer vectors \(V^{A}\) that exhibits \(\mathbb{M}_{2m}\) as a \(\mathbb{T}^{m}\) fibration over a convex polytope
\[\mathcal{P}=\{y_{I}\in\mathbb{R}^{m}\,|\,y_{I}V^{A}_{I}\geq\lambda_{A}\}\,. \tag{2.2}\]
On each of the facets \(y_{I}V^{A}_{I}=\lambda_{A}\) a particular circle in \(\mathbb{T}^{m}\) degenerates, as familiar from toric geometry.5 Each facet defines a toric divisor \(D_{A}\), with associated line
bundle \(L_{A}\), and an equivariant Chern class
\[c_{1}^{\mathbb{T}}(L_{A})=c_{1}(L_{A})+2\pi\epsilon_{I}\mu_{I}^{A}=\mathrm{d}( \mu_{I}^{A}\mathrm{d}\phi_{I})+2\pi\epsilon_{I}\mu_{I}^{A}. \tag{2.3}\]
The functions \(\mu^{A}\) play the role of moment maps. We will not need their explicit expression, which is discussed in [2], but we will frequently use the relation
\[\sum_{A}V_{I}^{A}c_{1}^{\mathbb{T}}(L_{A})=-\epsilon_{I}\,, \tag{2.4}\]
which is the equivariant version of the cohomological statement \(\sum_{A}V_{I}^{A}c_{1}(L_{A})=0\) following from the toric equivalence relations among divisors \(\sum_{A}V_{I}^{A}D_{A}=0\).
The equivariant volume with higher times (1.8) can be computed with a fixed point formula that can also be taken as an operative definition
\[\mathds{V}(\lambda_{A_{1}\ldots A_{k}},\epsilon_{I})=(-1)^{m}\sum_{\alpha=1}^{n}\frac{\mathrm{e}^{\tau^{\mathbb{T}}|_{y_{\alpha}}}}{d_{\alpha}\,e^{\mathbb{T}}|_{y_{\alpha}}}\,, \tag{2.5}\]
where \(y_{\alpha}\) are the fixed points of the \(\mathbb{T}^{m}\) action, \(d_{\alpha}\) the order of the orbifold singularity at \(y_{\alpha}\), \(e^{\mathbb{T}}\) the equivariant Euler class of the tangent bundle at \(y_{\alpha}\) and
\[\tau^{\mathbb{T}}=\sum_{k=1}\lambda_{A_{1}\ldots A_{k}}c_{1}^{\mathbb{T}}(L_{A _{1}})\ldots c_{1}^{\mathbb{T}}(L_{A_{k}})\,. \tag{2.6}\]
To use the localization formula we assume that \(\mathbb{M}_{2m}\) has only isolated orbifold singularities. This is the case if the fan is the union of \(m\)-dimensional cones \(\{V^{A_{1}},\ldots,V^{A_{m}}\}\).6 Each cone \(\alpha=(A_{1},\ldots,A_{m})\) corresponds to an isolated fixed point \(y_{\alpha}\) with a local orbifold singularity of order
Footnote 6: We can always resolve the fan by adding vectors if this condition is not met.
\[d_{\alpha}=\left|\det(V^{A_{1}},\ldots,V^{A_{m}})\right|. \tag{2.7}\]
The restriction of the Euler class is given by
\[e^{\mathbb{T}}\big{|}_{y_{\alpha}}=\prod_{A_{i}\in\alpha}c_{1}^{\mathbb{T}}(L _{A_{i}})\big{|}_{y_{\alpha}}\,. \tag{2.8}\]
The restrictions of the Chern classes to the fixed points \(y_{\alpha}\) are computed as follows [2]
\[c_{1}^{\mathbb{T}}(L_{A})\big{|}_{y_{\alpha}}=\begin{cases}-\frac{\epsilon_{I }U_{I}^{A}}{d_{\alpha}}&\text{if }A\in\alpha\\ 0&\text{if }A\notin\alpha\end{cases}\,, \tag{2.9}\]
where \(U_{I}^{A}\) are the inward normal vectors to the facets of the cone \(\alpha\) defined and normalized by the relation
\[U_{\alpha}^{A_{i}}\cdot V^{A_{j}}=d_{\alpha}\delta_{ij}\,. \tag{2.10}\]
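To fix ideas, here is a small example of our own: take \(m=2\) and the cone generated by \(V^{1}=(1,0)\) and \(V^{2}=(1,2)\), describing a \(\mathbb{C}^{2}/\mathbb{Z}_{2}\) singularity. Then \(d_{\alpha}=2\), the inward normals normalized as in (2.10) are \(U^{1}=(2,-1)\) and \(U^{2}=(0,1)\), and (2.9) gives
\[c_{1}^{\mathbb{T}}(L_{1})\big{|}_{y_{\alpha}}=-\frac{2\epsilon_{1}-\epsilon_{2}}{2}\,,\qquad c_{1}^{\mathbb{T}}(L_{2})\big{|}_{y_{\alpha}}=-\frac{\epsilon_{2}}{2}\,.\]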
Now the fixed point formula gives
\[\mathds{V}(\lambda_{A_{1}\ldots A_{k}},\epsilon_{I})=\sum_{\alpha=(A_{1},\ldots, A_{m})}\frac{\mathrm{e}^{\tau_{\alpha}}}{d_{\alpha}\prod_{i=1}^{m}\frac{\epsilon \cdot U_{\alpha}^{A_{i}}}{d_{\alpha}}}\,, \tag{2.11}\]
where \(\tau_{\alpha}\) is the restriction of the equivariant form (2.6) to the fixed point \(y_{\alpha}\) and explicitly reads
\[\tau_{\alpha}=-\sum_{i=1}^{m}\lambda_{A_{i}}\Big{(}\frac{\epsilon\cdot U_{ \alpha}^{A_{i}}}{d_{\alpha}}\Big{)}+\sum_{i,j=1}^{m}\lambda_{A_{i}A_{j}}\Big{(} \frac{\epsilon\cdot U_{\alpha}^{A_{i}}}{d_{\alpha}}\Big{)}\Big{(}\frac{ \epsilon\cdot U_{\alpha}^{A_{j}}}{d_{\alpha}}\Big{)}+\ldots \tag{2.12}\]
The equivariant volume can be expanded in power series of the higher times,
\[\mathds{V}(\lambda_{A_{1}\ldots A_{k}},\epsilon_{I})=\sum_{n=0}^{\infty} \mathds{V}^{(n)}(\lambda_{A_{1}\ldots A_{k}},\epsilon_{I})\,, \tag{2.13}\]
where we denote with \(\mathds{V}^{(n)}\) the homogeneous component of degree \(n\) in the set of higher times \(\lambda_{A_{1}\ldots A_{k}}\) for all \(k\). \(\mathds{V}^{(n)}\) is a polynomial in \(\epsilon_{I}\) in the compact case, while it can be a rational function of \(\epsilon_{I}\) when \(\mathbb{M}_{2m}\) is non-compact.
In the examples in [2] only single times (\(\lambda_{A}\)) were used. In this paper we will use single and double times (\(\lambda_{A}\) and \(\lambda_{AB}\)). As a general rule, to fully capture the parameters of the supergravity solution, we need a number of independent parameters at least equal to the number of fixed points. Indeed, \(\mathds{V}\) depends functionally only on \(\epsilon_{I}\) and on the restrictions \(\tau_{\alpha}\). Notice that there is a large redundancy in the description with higher times. Due to the relation (2.4), \(\tau^{\mathbb{T}}\) is invariant under the _gauge transformations_
\[\lambda_{A_{1}\ldots A_{k+1}}\to\lambda_{A_{1}\ldots A_{k+1}}+\beta_{I}^{(A_{ 1}\ldots A_{k}}V_{I}^{A_{k+1})}\,,\qquad\lambda_{A_{1}\ldots A_{k}}\to\lambda_ {A_{1}\ldots A_{k}}+\epsilon_{I}\beta_{I}^{A_{1}\ldots A_{k}}\,, \tag{2.14}\]
where \(\beta_{I}^{A_{1}\ldots A_{k}}\) is symmetric in the indices \(A_{1}\ldots A_{k}\). Notice that the subgroup with \(\epsilon_{I}\beta_{I}^{A_{1}\ldots A_{k}}=0\) acts only on \(\lambda_{A_{1}\ldots A_{k+1}}\) without mixing times of different degree and it is the only transformation allowed for single times. In the Calabi-Yau case, where the vectors in the fan lie on a plane identified by the direction \(I=CY\), say \(V_{CY}^{A}=1\),7 this subgroup can also be written as
Footnote 7: In this paper \(\epsilon_{CY}\) will be identified with \(\epsilon_{3}\) in all examples.
\[\lambda_{A_{1}\ldots A_{k}}\to\lambda_{A_{1}\ldots A_{k}}+(\epsilon_{CY}V_{I} ^{(A_{1}}-\epsilon_{I})\gamma_{I}^{A_{2}\ldots A_{k})}\,, \tag{2.15}\]
generalizing the results in [2]. Many times can be therefore gauge-fixed to zero.
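For completeness, the invariance of \(\tau^{\mathbb{T}}\) under (2.14) can be checked in one line: using the symmetry of the product of Chern classes, the two shifts combine into
\[\delta\tau^{\mathbb{T}}=\beta_{I}^{A_{1}\ldots A_{k}}\,c_{1}^{\mathbb{T}}(L_{A_{1}})\ldots c_{1}^{\mathbb{T}}(L_{A_{k}})\Big{(}\sum_{A}V_{I}^{A}c_{1}^{\mathbb{T}}(L_{A})+\epsilon_{I}\Big{)}\,,\]
which vanishes by (2.4).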
As an example we give some more explicit expressions for the four-dimensional case. These will be heavily used in the following since in this paper we will mostly consider geometries that are fibrations over four-dimensional compact orbifolds \(\mathbb{M}_{4}\). For clarity of notation, we will use capital letters \((V_{I}^{A},A,I)\) for the higher-dimensional geometry and lower-case letters \((v_{i}^{a},a,i)\) for \(\mathbb{M}_{4}\). The fan of a four-dimensional compact orbifold is just a collection of two-dimensional integer vectors \(v^{a}\), \(a=1,\ldots,d\)
that define a convex polygon in the plane. The fixed points are associated with the cones \((v^{a},v^{a+1})\), where we take a counter-clockwise order for the vectors and identify them cyclically, \(v^{a+d}=v^{a}\). Notice that in the compact four-dimensional case the number of fixed points is equal to the number of vectors in the fan and we can use the index \(a\) to label both. With the notations of [2] we define the quantities
\[\epsilon_{1}^{a}=\frac{\epsilon\cdot u_{1}^{a}}{d_{a,a+1}}\,,\qquad\epsilon_{2 }^{a}=\frac{\epsilon\cdot u_{2}^{a}}{d_{a,a+1}}\,, \tag{2.16}\]
where \(u_{1}^{a}\) and \(u_{2}^{a}\) are the inward normals to the cones \((v^{a},v^{a+1})\). Explicitly
\[\epsilon_{1}^{a}=-\frac{\det(v^{a+1},\epsilon)}{\det(v^{a},v^{a+1})}\,,\qquad \epsilon_{2}^{a}=\frac{\det(v^{a},\epsilon)}{\det(v^{a},v^{a+1})}\,, \tag{2.17}\]
where \(\epsilon\equiv(\epsilon_{1},\epsilon_{2})\). In particular, the equivariant Euler class of the tangent bundle at a fixed point \(y_{a}\) reads
\[e^{\mathbb{T}}\big{|}_{y_{a}}=\epsilon_{1}^{a}\,\epsilon_{2}^{a}\,, \tag{2.18}\]
and the order of the local orbifold singularity is
\[d_{a,a+1}=\det(v^{a},v^{a+1})\,. \tag{2.19}\]
The restriction to the fixed points of the equivariant Chern classes \(c_{1}^{\mathbb{T}}(L_{a})\) can be written as
\[c_{1}^{\mathbb{T}}(L_{a})\big{|}_{y_{b}}=-(\delta_{a,b}\epsilon_{1}^{b}+\delta _{a,b+1}\epsilon_{2}^{b})\,. \tag{2.20}\]
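For later use we also record the decomposition of \(\epsilon\) in the basis \((v^{a},v^{a+1})\), which follows from (2.17) by Cramer's rule:
\[\epsilon=\epsilon_{1}^{a}\,v^{a}+\epsilon_{2}^{a}\,v^{a+1}\,.\]
This identity will be used again in section 3.2.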
The fixed point formula (2.11) for the equivariant volume specializes to the expression8
Footnote 8: Notice that there is no summation on \(a\) in the exponent.
\[\mathbb{V}(\lambda_{a_{1}\ldots a_{k}},\epsilon_{i})=\sum_{a=1}^{d}\frac{ \mathrm{e}^{-\lambda_{a}\epsilon_{1}^{a}-\lambda_{a+1}\epsilon_{2}^{a}+\lambda _{a,a}(\epsilon_{1}^{a})^{2}+2\lambda_{a,a+1}\epsilon_{1}^{a}\epsilon_{2}^{a} +\lambda_{a+1,a+1}(\epsilon_{2}^{a})^{2}+\ldots}}{d_{a,a+1}\,\epsilon_{1}^{a} \epsilon_{2}^{a}}\,. \tag{2.21}\]
In the following, we will also need the intersection matrix of divisors, which is independent of the equivariant parameters \(\epsilon_{1},\epsilon_{2}\) [2]:
\[D_{a}\cdot D_{b}=D_{ab}=\int_{\mathbb{M}_{4}}c_{1}^{\mathbb{T}}(L_{a})c_{1}^{ \mathbb{T}}(L_{b})=\begin{cases}\frac{1}{d_{a-1,a}}&\text{if }b=a-1\,,\\ \frac{1}{d_{a,a+1}}&\text{if }b=a+1\,,\\ -\frac{d_{a-1,a+1}}{d_{a-1,a}d_{a,a+1}}&\text{if }b=a\,,\\ 0&\text{otherwise}\,.\end{cases} \tag{2.22}\]
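As a simple illustration of these formulas (an example we add for orientation), consider the toric orbifold with fan \(v^{1}=(1,0)\), \(v^{2}=(0,1)\), \(v^{3}=(-1,-2)\), a weighted projective plane with a single \(\mathbb{Z}_{2}\) fixed point. Here \(d_{1,2}=d_{2,3}=1\) and \(d_{3,1}=2\), and (2.22) gives
\[D_{ab}=\begin{pmatrix}\frac{1}{2}&1&\frac{1}{2}\\ 1&2&1\\ \frac{1}{2}&1&\frac{1}{2}\end{pmatrix}\,,\]
which indeed satisfies \(\sum_{a}v_{i}^{a}D_{ab}=0\).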
## 3 AdS\({}_{3}\) and AdS\({}_{5}\) solutions in M theory
We start by analysing M theory solutions with M5 brane flux and show that the free energy can be obtained by extremizing the appropriate term in the equivariant volume. The case of M5 branes wrapped on a spindle has already been studied in [2]. Here we focus on geometries that are fibrations over a four-dimensional toric orbifold \(\mathbb{M}_{4}\).
### AdS\({}_{3}\times M_{8}\) solutions
In this section we consider AdS\({}_{3}\times M_{8}\) solutions in M theory, where9\(M_{8}\) is an \(S^{4}\) fibration over the four-dimensional orbifold \(\mathbb{M}_{4}\). Examples of this form have been found in [15] and further discussed in [14, 16, 17]. They are obtained by uplifting AdS\({}_{3}\times\mathbb{M}_{4}\) solutions of \(D=7\) maximal gauged supergravity to eleven dimensions. These AdS\({}_{3}\times M_{8}\) solutions can be interpreted as the near-horizon geometry of a system of M5 branes wrapped around \(\mathbb{M}_{4}\).
Footnote 9: In general, \(M_{8}\) is itself an orbifold.
We first need to identify the topological structure of the underlying geometry. We will focus on the case of toric \(\mathbb{M}_{4}\). The eight-dimensional geometry \(M_{8}\) is not strictly toric, but it admits an action of \(\mathbb{T}^{4}=U(1)^{4}\). If \(d\) is the number of vectors in the fan of \(\mathbb{M}_{4}\), there are \(2d\) fixed points of the torus action, obtained by selecting a fixed point on \(\mathbb{M}_{4}\) and combining it with the North and South poles of \(S^{4}\). We will assume that there is a \(\mathbb{Z}_{2}\) symmetry of the fibration that identifies the North and South pole contributions to the fixed point formula. In this situation we can consider half of the geometry, a \(\mathbb{C}^{2}\) fibration over \(\mathbb{M}_{4}\) with the geometry of a non-compact toric CY\({}_{4}\). One can understand the appearance of the fibre \(\mathbb{C}^{2}\) from the transverse geometry of the brane system, which is \(\mathbb{C}^{2}\times\mathbb{R}\), with \(S^{4}\) embedded inside.
\[V^{a}=(v^{a},1,\mathfrak{t}_{a})\,,\qquad V^{d+1}=(0,0,1,0)\,,\qquad V^{d+2}=( 0,0,1,1)\,, \tag{3.1}\]
where \(v^{a}\), \(a=1,\ldots,d\), are the vectors of the fan of \(\mathbb{M}_{4}\) and \(\mathfrak{t}_{a}\) are integers specifying the twisting of \(\mathbb{C}^{2}\) over \(\mathbb{M}_{4}\). When supersymmetry is preserved with anti-twist [18], the toric diagram is not convex and it does not strictly define a toric geometry. We will nevertheless proceed also in this case, considering it as an extrapolation from the twist case. The non-convex case is obtained from the formulas in this paper by sending \(v^{a}\to\sigma^{a}v^{a}\), where \(\sigma^{a}=\pm 1\).
In addition to the metric, the supergravity solution is specified by the integer fluxes of the M theory four-form along all the non-trivial four-cycles. The toric four-cycles of the geometry are \(\mathbb{M}_{4}\) itself, the sphere \(S^{4}\) and \(\mathbb{P}^{1}\) fibrations over the toric two-cycles \(\Sigma_{a}\subset\mathbb{M}_{4}\). In our half-geometry, the sphere \(S^{4}\) and \(\mathbb{P}^{1}\subset S^{4}\) are replaced with copies of \(\mathbb{C}^{2}\) and \(\mathbb{C}\). All together, the toric four-cycles correspond to all the possible intersections of the toric divisors \(D_{A}\cap D_{B}\) and we can therefore introduce a matrix of fluxes \(M_{AB}\). As usual, not all toric divisors are inequivalent in cohomology. The relations \(\sum_{A}V^{A}_{I}D_{A}=0\) imply that the matrix of fluxes satisfies
\[\sum_{A}V^{A}_{I}M_{AB}=0\,. \tag{3.2}\]
We are now ready to formulate our prescription for the extremal function. For M5 branes in M theory, as discussed in [2] and in the introduction, we define the free
energy to extremize as
\[F=\mathds{V}^{(3)}(\lambda_{A},\lambda_{AB},\epsilon_{I})\,, \tag{3.3}\]
and impose the flux constraints10
Footnote 10: We put a bar on top of \(\nu_{M5}\) to stress that we are using a half-geometry. To have the correct normalization of the free energy when using half of the geometry, the parameter \(\nu_{M5}\) must be rescaled as in formula (3.47), as we will discuss more extensively in section 3.2.
\[\overline{\nu}_{M5}\left(2-\delta_{AB}\right)M_{AB}=-\frac{\partial}{\partial \lambda_{AB}}\mathds{V}^{(2)}(\lambda_{A},\lambda_{AB},\epsilon_{I})\,. \tag{3.4}\]
Here the index \(A=1,\ldots,d+2\) runs over all the vectors of the fan of the CY\({}_{4}\), whereas we reserve the lower-case index \(a=1,\ldots,d\) for the vectors of the fan of the base \(\mathbb{M}_{4}\). On the other hand, the index \(I=1,2,3,4\) runs over the equivariant parameters of the CY\({}_{4}\) and we will use \(i=1,2\) for the directions inside \(\mathbb{M}_{4}\). We have added a \((2-\delta_{AB})\) factor in the equation for the fluxes for convenience. It is easy to see using (2.11) that this equation can be equivalently rewritten as
\[\overline{\nu}_{M5}\,M_{AB}=-\frac{\partial^{2}}{\partial\lambda_{A}\partial \lambda_{B}}\mathds{V}^{(3)}(\lambda_{A},\lambda_{AB},\epsilon_{I})\,, \tag{3.5}\]
and one may wonder if we really need higher times. The answer is yes. As we will discuss later, with only single times the previous equation cannot be solved.11
Footnote 11: One would need to restrict the \(\mathfrak{t}_{a}\) in order to find solutions.
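For completeness, we spell out the equivalence between (3.4) and (3.5). Since \(\tau_{\alpha}\) is linear in the full set of times, the fixed point formula (2.5) gives
\[\frac{\partial\mathds{V}^{(2)}}{\partial\lambda_{AB}}=(2-\delta_{AB})\,(-1)^{m}\sum_{\alpha}\frac{\big{(}c_{1}^{\mathbb{T}}(L_{A})\,c_{1}^{\mathbb{T}}(L_{B})\big{)}\big{|}_{y_{\alpha}}\,\tau_{\alpha}}{d_{\alpha}\,e^{\mathbb{T}}|_{y_{\alpha}}}=(2-\delta_{AB})\,\frac{\partial^{2}\mathds{V}^{(3)}}{\partial\lambda_{A}\partial\lambda_{B}}\,,\]
the factor \((2-\delta_{AB})\) keeping track of the two identical terms \(\lambda_{AB}=\lambda_{BA}\) in the unrestricted double sum in (2.6).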
In the rest of this section we will show that \(F\) reproduces the expected extremal function and its factorization in gravitational blocks discussed in [2, 14].
#### 3.1.1 The equivariant volume with double times
The \(\mathbb{T}^{4}\) torus action on the CY\({}_{4}\) has \(d\) fixed points, each one corresponding to a cone in the fan with generators \((V^{a},V^{a+1},V^{d+1},V^{d+2})\), \(a=1,\ldots,d\). In particular, there is a one-to-one correspondence between these fixed points and the ones of the base orbifold \(\mathbb{M}_{4}\); for the latter the fixed points correspond to two-dimensional cones of the form \((v^{a},v^{a+1})\) and they can be labelled by the index \(a\). The order of the orbifold singularities associated with the fixed points of CY\({}_{4}\) and \(\mathbb{M}_{4}\) also match:
\[d_{a,a+1,d+1,d+2}=\left|\det(V^{a},V^{a+1},V^{d+1},V^{d+2})\right|=\left|\det( v^{a},v^{a+1})\right|=d_{a,a+1}\,. \tag{3.6}\]
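The second equality holds because \(V^{d+1}\) and \(V^{d+2}\) have non-vanishing entries only in the last two slots, so that the \(4\times 4\) determinant is block-triangular and factorizes,
\[\det(V^{a},V^{a+1},V^{d+1},V^{d+2})=\det(v^{a},v^{a+1})\,\det\begin{pmatrix}1&0\\ 1&1\end{pmatrix}=\det(v^{a},v^{a+1})\,.\]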
Therefore, the fixed point formula for the equivariant volume with higher times of CY\({}_{4}\) takes the following form:
\[\mathds{V}(\lambda_{A},\lambda_{AB},\epsilon_{I})=\sum_{a}\frac{\mathrm{e}^{ \tau_{a}}}{d_{a,a+1}\,e^{\mathbb{T}^{4}}|_{a}}\,. \tag{3.7}\]
Here, \(\tau_{a}\) is the restriction to the fixed point \(a\) of the form (2.6)
\[\tau_{a}=\left(\sum_{A}\lambda_{A}\,c_{1}^{\mathbb{T}^{4}}(L_{A})+\sum_{A,B} \lambda_{AB}\,c_{1}^{\mathbb{T}^{4}}(L_{A})\,c_{1}^{\mathbb{T}^{4}}(L_{B}) \right)\Bigg{|}_{a}\,, \tag{3.8}\]
while at the denominator we have the restriction of the Euler class \(e^{\mathbb{T}^{4}}\)
\[e^{\mathbb{T}^{4}}\big{|}_{a}=\Big{(}c_{1}^{\mathbb{T}^{4}}(L_{a})\,c_{1}^{ \mathbb{T}^{4}}(L_{a+1})\,c_{1}^{\mathbb{T}^{4}}(L_{d+1})\,c_{1}^{\mathbb{T}^{4} }(L_{d+2})\Big{)}\Big{|}_{a}\,. \tag{3.9}\]
The restrictions of the Chern classes can be computed using (2.9). The inward normals to the faces of the cone generated by \((V_{a},V_{a+1},V_{d+1},V_{d+2})\) are
\[\begin{split}& U^{a}=(u_{1}^{a},0,0)\,,\\ & U^{a+1}=(u_{2}^{a},0,0)\,,\\ & U^{d+1}=\left((\mathfrak{t}_{a}-1)u_{1}^{a}+(\mathfrak{t}_{a+1} -1)u_{2}^{a}\,,d_{a,a+1},-d_{a,a+1}\right),\\ & U^{d+2}=(-\mathfrak{t}_{a}u_{1}^{a}-\mathfrak{t}_{a+1}u_{2}^{a }\,,0,d_{a,a+1})\,,\end{split} \tag{3.10}\]
where \(u_{1}^{a}\) and \(u_{2}^{a}\) are the two-dimensional normals to the cone \((v^{a},v^{a+1})\). Using the notations introduced in (2.16) we find
\[\begin{split}& c_{1}^{\mathbb{T}^{4}}(L_{a})\big{|}_{a}=-\frac{ \epsilon_{i}\,(u_{1}^{a})_{i}}{d_{a,a+1}}=-\epsilon_{1}^{a}\,,\\ & c_{1}^{\mathbb{T}^{4}}(L_{a+1})\big{|}_{a}=-\frac{\epsilon_{i} \,(u_{2}^{a})_{i}}{d_{a,a+1}}=-\epsilon_{2}^{a}\,,\\ & c_{1}^{\mathbb{T}^{4}}(L_{b})\big{|}_{a}=0\,,\qquad b\neq a,a+ 1\,,\\ & c_{1}^{\mathbb{T}^{4}}(L_{d+1})\big{|}_{a}=-(\mathfrak{t}_{a}- 1)\epsilon_{1}^{a}-(\mathfrak{t}_{a+1}-1)\epsilon_{2}^{a}-\epsilon_{3}+ \epsilon_{4}\,,\\ & c_{1}^{\mathbb{T}^{4}}(L_{d+2})\big{|}_{a}=\mathfrak{t}_{a} \epsilon_{1}^{a}+\mathfrak{t}_{a+1}\epsilon_{2}^{a}-\epsilon_{4}\,,\end{split} \tag{3.11}\]
where for simplicity we have used Einstein notation for the sums over the index \(i=1,2\).
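As a quick consistency check, which we spell out here, the normals (3.10) satisfy (2.10): using \(u_{1}^{a}\cdot v^{a}=u_{2}^{a}\cdot v^{a+1}=d_{a,a+1}\) and \(u_{1}^{a}\cdot v^{a+1}=u_{2}^{a}\cdot v^{a}=0\), one finds for instance
\[U^{d+1}\cdot V^{a}=(\mathfrak{t}_{a}-1)\,d_{a,a+1}+d_{a,a+1}-d_{a,a+1}\,\mathfrak{t}_{a}=0\,,\qquad U^{d+1}\cdot V^{d+1}=U^{d+2}\cdot V^{d+2}=d_{a,a+1}\,,\]
and similarly for the remaining products.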
We can write the equivariant volume of the \(\text{CY}_{4}\) as an integral over the base orbifold \(\mathbb{M}_{4}\) of four-dimensional equivariant forms with \(\epsilon_{3}\) and \(\epsilon_{4}\) as parameters. Let us denote with \(\mathbb{T}\) the two-dimensional torus associated with \(\epsilon_{1}\) and \(\epsilon_{2}\), and let \(c_{1}^{\mathbb{T}}(L_{a})\) be the equivariant Chern classes associated to the restrictions of the line bundles \(L_{a}\) to the base \(\mathbb{M}_{4}\). We can then take advantage of the one-to-one correspondence between fixed point of the \(\text{CY}_{4}\) and fixed points of \(\mathbb{M}_{4}\) and, using (2.20), we can rewrite (3.7) as
\[\mathds{V}(\lambda_{A},\lambda_{AB},\epsilon_{I})=\int_{\mathbb{M}_{4}}\frac{ \text{e}^{\tau^{\mathsf{T}}}}{\mathcal{C}_{d+1}\,\mathcal{C}_{d+2}}\,, \tag{3.12}\]
where
\[\begin{split}\tau^{\mathbb{T}}&=\sum_{A}\lambda_{A}\, \mathcal{C}_{A}+\sum_{A,B}\lambda_{AB}\,\mathcal{C}_{A}\,\mathcal{C}_{B}\,,\\ \mathcal{C}_{a}&=\,c_{1}^{\mathbb{T}}(L_{a})\,, \qquad a=1,\ldots,d\,,\\ \mathcal{C}_{d+1}&=\,-\epsilon_{3}+\epsilon_{4}+ \sum_{a}(\mathfrak{t}_{a}-1)c_{1}^{\mathbb{T}}(L_{a})\,,\\ \mathcal{C}_{d+2}&=\,-\epsilon_{4}-\sum_{a}\mathfrak{ t}_{a}c_{1}^{\mathbb{T}}(L_{a})\,.\end{split} \tag{3.13}\]
Notice the relations \(\sum_{a}v_{i}^{a}\,c_{1}^{\mathbb{T}}(L_{a})=-\epsilon_{i}\) and \(\sum_{A}V_{I}^{A}\,\mathcal{C}_{A}=-\epsilon_{I}\), following from (2.4).12
Footnote 12: The second relation, which can be checked by direct computation, is obviously the restriction of \(\sum_{A}V_{I}^{A}\,c_{1}^{\mathbb{T}}(L_{A})=-\epsilon_{I}\) to \(\mathbb{M}_{4}\).
The homogeneous component of degree \(\alpha\) of the equivariant volume with higher times can be expressed as
\[\mathds{V}^{(\alpha)}(\lambda_{A},\lambda_{AB},\epsilon_{I})=\int_{\mathbb{M} _{4}}\frac{(\tau^{\mathbb{T}})^{\alpha}}{\alpha!\,\mathcal{C}_{d+1}\,\mathcal{ C}_{d+2}}=\sum_{a}\frac{B_{a}^{(\alpha)}}{d_{a,a+1}\;\epsilon_{1}^{a}\; \epsilon_{2}^{a}}\;, \tag{3.14}\]
where we have defined \(B_{a}^{(\alpha)}\) to be the restriction over the \(a\)-th fixed point of \(\mathbb{M}_{4}\) of the following equivariant form:
\[B^{(\alpha)}=\frac{(\tau^{\mathbb{T}})^{\alpha}}{\alpha!\,\mathcal{C}_{d+1} \,\mathcal{C}_{d+2}}\;. \tag{3.15}\]
For later reference we derive the relation between \(B_{a}^{(\alpha)}\) and \(B_{a}^{(\beta)}\)
\[B_{a}^{(\beta)}= \,\frac{(\tau_{a})^{\beta}}{\beta!\,\big{(}\mathcal{C}_{d+1}\, \mathcal{C}_{d+2}\big{)}|_{a}}=\frac{(\alpha!)^{\frac{\beta}{\alpha}}}{\beta!} \left[\frac{(\tau_{a})^{\alpha}}{\alpha!\,\big{(}\mathcal{C}_{d+1}\,\mathcal{ C}_{d+2}\big{)}|_{a}}\right]^{\frac{\beta}{\alpha}}\Big{[}\big{(} \mathcal{C}_{d+1}\,\mathcal{C}_{d+2}\big{)}|_{a}\Big{]}^{\frac{\beta}{\alpha}-1} \tag{3.16}\] \[= \,\frac{(\alpha!)^{\frac{\beta}{\alpha}}}{\beta!}\big{(}B_{a}^{( \alpha)}\big{)}^{\frac{\beta}{\alpha}}\big{(}(1-\mathfrak{t}_{a})\epsilon_{1}^ {a}+(1-\mathfrak{t}_{a+1})\epsilon_{2}^{a}-\epsilon_{3}+\epsilon_{4}\big{)}^{ \frac{\beta}{\alpha}-1}\big{(}\mathfrak{t}_{a}\epsilon_{1}^{a}+\mathfrak{t}_{a +1}\epsilon_{2}^{a}-\epsilon_{4}\big{)}^{\frac{\beta}{\alpha}-1}.\]
When \(\alpha\) is even this formula holds in terms of absolute values and the signs must be fixed separately. This will not be the case for the computation of this section, so we postpone the discussion about the signs to section 4.1.
#### 3.1.2 Solving the flux constraints
The flux constraints (3.4) read
\[\overline{\nu}_{M5}\,(2-\delta_{AB})\,M_{AB}=-\frac{\partial\mathds{V}^{(2)}} {\partial\lambda_{AB}}=-(2-\delta_{AB})\int_{\mathbb{M}_{4}}\frac{\mathcal{C}_ {A}\,\mathcal{C}_{B}\,\tau^{\mathbb{T}}}{\mathcal{C}_{d+1}\,\mathcal{C}_{d+2} }\,, \tag{3.17}\]
or, equivalently
\[\overline{\nu}_{M5}\,M_{AB}=-\int_{\mathbb{M}_{4}}\frac{\mathcal{C}_{A}\, \mathcal{C}_{B}\,\tau^{\mathbb{T}}}{\mathcal{C}_{d+1}\,\mathcal{C}_{d+2}}=- \sum_{a}\frac{B_{a}^{(1)}\cdot\big{(}\mathcal{C}_{A}\,\mathcal{C}_{B}\big{)}|_{ a}}{d_{a,a+1}\;\epsilon_{1}^{a}\;\epsilon_{2}^{a}}\,. \tag{3.18}\]
Let us focus on the \(A,B\in\{1,\ldots,d\}\) sector. Using (2.20) we find
\[\overline{\nu}_{M5}\,M_{a,a+1}=-\frac{B_{a}^{(1)}}{d_{a,a+1}}\,,\] \[\overline{\nu}_{M5}\,M_{a,a}=-\frac{B_{a}^{(1)}\,\epsilon_{1}^{a} }{d_{a,a+1}\,\epsilon_{2}^{a}}-\frac{B_{a-1}^{(1)}\,\epsilon_{2}^{a-1}}{d_{a- 1,a}\,\epsilon_{1}^{a-1}}\,, \tag{3.19}\] \[\overline{\nu}_{M5}\,M_{ab}=0\qquad\text{when $b\neq a$, $a+1$, $a-1$}\,.\]
These equations give constraints on the fluxes but they have a very simple solution
\[B_{a}^{(1)}=-\overline{\nu}_{M5}\,N\,, \tag{3.20}\] \[M_{ab}=N\,D_{ab}\,,\]
where \(D_{ab}\) is the intersection matrix of divisors (2.22) and \(N\) is any integer that is a multiple of all the products \(d_{a-1,a}\,d_{a,a+1}\).
This can be seen as follows. By combining the first two equations we obtain
\[M_{a,a}=M_{a,a+1}\,\frac{\epsilon_{1}^{a}}{\epsilon_{2}^{a}}+M_{a,a-1}\,\frac {\epsilon_{2}^{a-1}}{\epsilon_{1}^{a-1}}\,, \tag{3.21}\]
and using the relation [2]
\[\frac{\epsilon_{1}^{a}}{d_{a,a+1}\,\epsilon_{2}^{a}}+\frac{\epsilon_{2}^{a-1}} {d_{a-1,a}\,\epsilon_{1}^{a-1}}\,=\,-\,\frac{d_{a-1,a+1}}{d_{a-1,a}\,\,d_{a,a+ 1}}\,, \tag{3.22}\]
we can rewrite this as
\[M_{a,a}\,d_{a,a+1}+M_{a,a-1}\,d_{a-1,a+1}=\frac{\epsilon_{1}^{a}}{\epsilon_{2} ^{a}}\left(M_{a,a+1}\,d_{a,a+1}-M_{a,a-1}\,d_{a-1,a}\right)\,. \tag{3.23}\]
Given that the fluxes \(M_{AB}\) and the orders of the orbifold singularity \(d_{a,a+1}\) are just integers, the only way that this equation can be true for general values of \(\epsilon\) is for both sides to vanish. This implies that \(M_{ab}\) is proportional to the intersections \(D_{ab}\) given in (2.22). We can then conclude that the only solution to equations (3.19) is (3.20). Notice that there is just one independent flux associated with the \(M_{ab}\) components of the flux matrix. This was to be expected since this corresponds to the M theory four-form flux on \(S^{4}\).
The values of the remaining entries of the matrix of fluxes \(M_{AB}\) are related to the fibration parameters. By substituting \(B_{a}^{(1)}=-\overline{\nu}_{M5}\,N\) in (3.18) we find
\[M_{AB}=N\sum_{a}\frac{\big{(}\mathcal{C}_{A}\,\mathcal{C}_{B}\big{)}|_{a}}{d_ {a,a+1}\,\epsilon_{1}^{a}\,\epsilon_{2}^{a}}=N\int_{\mathbb{M}_{4}}\mathcal{C} _{A}\,\mathcal{C}_{B}=N\sum_{c,d}\mathfrak{t}_{A}^{c}\,\mathfrak{t}_{B}^{d}\, D_{cd}\,. \tag{3.24}\]
In the last step we have used (2.22) and for convenience we have defined \(\mathfrak{t}_{A}^{c}\) as
\[\mathfrak{t}_{A}^{c}=\begin{cases}\delta_{A}^{c}&A\in\{1,\ldots,d\}\\ \mathfrak{t}_{c}-1&A=d+1\\ -\mathfrak{t}_{c}&A=d+2\end{cases}\,. \tag{3.25}\]
Given that the \(\mathfrak{t}_{a}\) are integers, the fluxes \(M_{AB}\) in (3.24) are all integers.
We note that the expression (3.24) for \(M_{AB}\) satisfies the relation required to be considered a matrix of fluxes,
\[\sum_{A}V_{I}^{A}M_{AB}=0\:. \tag{3.26}\]
This can easily be verified by noting that
\[\sum_{A}V_{I}^{A}\,\mathfrak{t}_{A}^{c}=\begin{cases}v_{i}^{c}&I\equiv i=1,2\\ 0&I=3,4\end{cases}\:,\qquad\sum_{c}v_{i}^{c}D_{cd}=0\:. \tag{3.27}\]
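Explicitly, using the fan (3.1) and the definition (3.25),
\[\sum_{A}V_{3}^{A}\,\mathfrak{t}_{A}^{c}=1+(\mathfrak{t}_{c}-1)-\mathfrak{t}_{c}=0\,,\qquad\sum_{A}V_{4}^{A}\,\mathfrak{t}_{A}^{c}=\mathfrak{t}_{c}-\mathfrak{t}_{c}=0\,,\]
while for \(I=i=1,2\) only the Kronecker delta in (3.25) contributes, giving \(v_{i}^{c}\).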
The simplest solution to the equations
\[B_{a}^{(1)}\equiv\frac{\tau_{a}(\lambda_{A}(\epsilon_{I}),\lambda_{AB}( \epsilon_{I}),\epsilon_{I})}{\big{(}\mathcal{C}_{d+1}\,\mathcal{C}_{d+2}\big{)} |_{a}}=-\overline{\nu}_{M5}\,N \tag{3.28}\]
is to set \(\lambda_{d+1,d+2}=-\frac{1}{2}\,\overline{\nu}_{M5}N\) while setting all the other \(\lambda_{A}\) and \(\lambda_{AB}\) to zero. We note that in general there exist no solutions to these equations with \(\lambda_{AB}=0\) for all \(A,B\), meaning that the inclusion of the higher times in the equivariant volume is necessary. This stems from the fact that when \(\lambda_{AB}=0\) only \(d-1\) of the \(\tau_{a}\) are independent: using the gauge invariance (2.15)
\[\lambda_{A}\to\lambda_{A}+\sum_{I=1}^{4}\gamma_{I}(\epsilon_{3}V_{I}^{A}- \epsilon_{I})\,, \tag{3.29}\]
three out of the \(d+2\) Kahler moduli \(\lambda_{A}\) can be set to zero.
#### 3.1.3 The extremal function and \(c\)-extremization
We are now ready to compute the extremal function
\[F(\epsilon_{I})=\mathds{V}^{(3)}(\lambda_{A}(\epsilon_{I}),\lambda_{AB}( \epsilon_{I}),\epsilon_{I})\,. \tag{3.30}\]
The dual field theory is supposed to be the two-dimensional SCFT obtained by compactifying on \(\mathbb{M}_{4}\) the \((2,0)\) theory living on a stack of \(N\) M5 branes. The gravitational extremization problem should correspond to \(c\)-extremization in the dual two-dimensional SCFT.
A general comment that applies to all the examples in this paper is the following. The free energy must be extremized with respect to _all but one_ of the parameters \(\epsilon_{I}\) in order to find the critical point. The value of the remaining parameter must be instead fixed by requiring the correct scaling of the supercharge under the R-symmetry vector field \(\xi\). This is familiar from the constructions in [3, 4, 7, 8]. In our case, we extremize with respect to \(\epsilon_{4}\), \(\epsilon_{1}\) and \(\epsilon_{2}\) with \(\epsilon_{3}\) fixed to a canonical value.13
Using relations (3.14) and (3.16) we find
\[F=\sum_{a}\frac{B_{a}^{(3)}}{d_{a,a+1}\,\epsilon_{1}^{a}\,\epsilon _{2}^{a}}\,, \tag{3.31}\] \[B_{a}^{(3)}=\frac{1}{6}(-\overline{\nu}_{M5}\,N)^{3}\,\big{(}(1- \mathbf{t}_{a})\epsilon_{1}^{a}+(1-\mathbf{t}_{a+1})\epsilon_{2}^{a}-\epsilon _{3}+\epsilon_{4}\big{)}^{2}\big{(}\mathbf{t}_{a}\epsilon_{1}^{a}+\mathbf{t}_ {a+1}\epsilon_{2}^{a}-\epsilon_{4}\big{)}^{2}\,,\]
which matches the form of the conjectured formula of [14] in terms of gravitational blocks [1].14
Footnote 14: The convention for the sign of the free energy in [14] is the opposite of ours.
To make contact with the dual field theory, we can also write our result in terms of an integral of equivariant forms over the base \(\mathbb{M}_{4}\) as follows:
\[F =-\frac{1}{6}\,\overline{\nu}_{M5}^{3}\,N^{3}\int_{\mathbb{M}_{4 }}\mathcal{C}_{d+1}^{2}\,\mathcal{C}_{d+2}^{2} \tag{3.32}\] \[=-\frac{1}{6}\,\overline{\nu}_{M5}^{3}\,N^{3}\int_{\mathbb{M}_{4 }}\Big{(}\epsilon_{3}-\epsilon_{4}+\sum_{a}(1-\mathbf{t}_{a})c_{1}^{\mathbb{T }}(L_{a})\Big{)}^{2}\Big{(}\epsilon_{4}+\sum_{a}\mathbf{t}_{a}c_{1}^{\mathbb{T }}(L_{a})\Big{)}^{2}\,.\]
This expression correctly reproduces the M5 brane anomaly polynomial integrated over the four-dimensional orbifold \(\mathbb{M}_{4}\) as computed in [2].15
Footnote 15: Attention must be paid when performing the comparison since the symbol \(F\) refers to the central charge here, while it refers to the integral of the anomaly polynomial in [2] (see also (3.35)).
Let us briefly review the comparison with field theory, referring to [2] for details. The anomaly polynomial of the 2d SCFT is obtained by integrating the eight-form anomaly polynomial of the six-dimensional theory over \(\mathbb{M}_{4}\), which, at large \(N\), gives
\[\mathcal{A}_{\rm 2d}=\int_{\mathbb{M}_{4}}\mathcal{A}_{\rm 6d}=\frac{N^{3}}{24} \int_{\mathbb{M}_{4}}c_{1}(F_{1})^{2}c_{1}(F_{2})^{2}\,, \tag{3.33}\]
where \(F_{I}\) are the generators of the \(U(1)\times U(1)\subset SO(5)_{R}\) Cartan subgroup of the \((2,0)\) theory R-symmetry. The \(c_{1}(F_{I})\) can be decomposed as
\[c_{1}(F_{I})=\Delta_{I}c_{1}(F_{R}^{\rm 2d})-\mathfrak{p}_{I}^{a}\Big{(}c_{1} (L_{a})+2\pi\mu_{a}^{i}c_{1}(\mathcal{J}_{i})\Big{)}\,, \tag{3.34}\]
where \(F_{R}^{\rm 2d}\), \(\mathcal{J}_{1},\mathcal{J}_{2}\) are line bundles associated with the 2d R-symmetry and the two global symmetries coming from the isometries of \(\mathbb{M}_{4}\). They correspond to background fields for the two-dimensional theory with no legs along \(\mathbb{M}_{4}\). Substituting (3.34) in (3.33) and setting \(c_{1}(\mathcal{J}_{i})=\epsilon_{i}c_{1}(F_{R}^{\rm 2d})\), leads to the equivariant integral
\[\mathcal{A}_{\rm 2d}=\frac{c_{r}}{6}c_{1}(F_{R}^{\rm 2d})^{2}=\frac{N^{3}}{24}c _{1}(F_{R}^{\rm 2d})^{2}\int_{\mathbb{M}_{4}}(\Delta_{1}-\mathfrak{p}_{1}^{a}c_{1} ^{\mathbb{T}}(L_{a}))^{2}(\Delta_{2}-\mathfrak{p}_{2}^{a}c_{1}^{\mathbb{T}}(L_ {a}))^{2}\,. \tag{3.35}\]
Preserving supersymmetry with a twist requires \(c_{1}(F_{1})+c_{1}(F_{2})=2c_{1}(F_{R}^{\rm 2d})-\sum_{a}c_{1}(L_{a})\) which gives [2]
\[\Delta_{1}+\Delta_{2}=2+\det(W,\epsilon)\,,\qquad\mathfrak{p}_{1}^{a}+ \mathfrak{p}_{2}^{a}=1+\det(W,v^{a})\,, \tag{3.36}\]
where \(\epsilon=(\epsilon_{1},\epsilon_{2})\) and \(W\in\mathbb{R}^{2}\) is a two-dimensional constant vector.16 The two-dimensional central charge \(c_{r}\) is extracted from (3.35) and should be extremized with respect to \(\epsilon_{i}\) and \(\Delta_{I}\) subject to the previous constraint. We then see that the extremization of the gravitational free energy is equivalent to \(c\)-extremization under the identifications
Footnote 16: \(W\) can be gauged away, see [2].
\[\Delta_{1}=\epsilon_{4}\,,\qquad\Delta_{2}=\epsilon_{3}-\epsilon_{4}\,,\qquad \mathfrak{p}_{1}^{a}=\mathfrak{t}_{a}\,,\qquad\mathfrak{p}_{2}^{a}=1- \mathfrak{t}_{a}\,,\qquad W=0\,, \tag{3.37}\]
where we set \(\epsilon_{3}=2\) for convenience. The free energy \(F\) is actually homogeneous of degree two in \(\epsilon_{I}\). To match the free energy with the central charge we have to set \(\epsilon_{3}^{2}\overline{\nu}_{M5}^{3}=-6\). The case of anti-twist is similar and can be discussed by taking a non-convex fan for \(\mathbb{M}_{4}\). The most general supersymmetry condition is now \(c_{1}(F_{1})+c_{1}(F_{2})=2c_{1}(F_{R}^{\rm 2d})-\sum_{a}\sigma^{a}c_{1}(L_{a})\) where \(\sigma^{a}=\pm 1\) as discussed in [14] and requires
\[\Delta_{1}+\Delta_{2}=2+\det(W,\epsilon)\,,\qquad\mathfrak{p}_{1}^{a}+ \mathfrak{p}_{2}^{a}=\sigma_{a}+\det(W,v^{a})\,. \tag{3.38}\]
This case can be just obtained by _formally_ sending \(v^{a}\to\sigma^{a}v^{a}\) everywhere, implying \(\epsilon_{1}^{a}\to\sigma^{a}\epsilon_{1}^{a}\) and \(\epsilon_{2}^{a}\to\sigma^{a+1}\epsilon_{2}^{a}\).
### AdS\({}_{5}\times M_{6}\) solutions
In this section we consider a generalization of the family of M theory solutions found in [19] and further studied in [20]. Their geometry is AdS\({}_{5}\times M_{6}\) where \(M_{6}\) is a manifold obtained as a \(\mathbb{P}^{1}\) bundle over a four-dimensional compact manifold \(B_{4}\), that can be either a Kahler-Einstein manifold (\(B_{4}=\text{KE}_{4}\)) or the product of two KE\({}_{2}\) (\(B_{4}=\Sigma_{1}\times\Sigma_{2}\)). The bundle is the projectivization of the canonical bundle over \(B_{4}\), \(\mathbb{P}(K\oplus\mathcal{O})\). Here we consider the case where \(B_{4}\) is replaced by a generic four-dimensional toric orbifold \(\mathbb{M}_{4}\). Notice that generically \(M_{6}\) can be an orbifold,17 like in the solutions discussed in [21]. In addition to recovering the gravitational central charges of the existing solutions, we give a prediction for these more general backgrounds that are still to be found. These solutions are potentially interpreted as M5 branes wrapped over a two-cycle in \(\mathbb{M}_{4}\) (see for example [22, 23]).
Footnote 17: Using our formalism, we could easily study the case in which \(M_{6}\) is a generic toric six-dimensional orbifold. It would be interesting to understand what kinds of orbifolds admit a holographic interpretation.
The topological structure of the underlying geometry can be encoded in the fan
\[V^{a}=(v^{a},1)\,,\qquad V^{d+1}=(0,0,1)\,,\qquad V^{d+2}=(0,0,-1)\,,\qquad a= 1,\ldots,d\,, \tag{3.39}\]
where \(v^{a}\) are the two-dimensional vectors in the fan of \(\mathbb{M}_{4}\). We will use a capital index \(A\) to run over \(a=1,\ldots,d\), \(d+1\) and \(d+2\). That this is the right geometry
can be seen looking at the symplectic reduction presentation \(\mathbb{C}^{d+2}//G\) of \(M_{6}\). Here \(G\) is the subgroup of the torus \(\mathbb{T}^{d+2}=U(1)^{d+2}\) generated by the GLSM charges
\[\sum_{A}Q_{A}^{k}V_{I}^{A}=0\,,\qquad\qquad k=1,\dots,d-1\,. \tag{3.40}\]
We can choose the following basis of GLSM charges
\[(q_{a}^{p},-\sum_{a}q_{a}^{p},0)\,,\qquad(0,\dots,0,1,1)\,, \tag{3.41}\]
where \(q_{a}^{p}\) are the \(d-2\) charges for \(\mathbb{M}_{4}\), \(\sum_{a}q_{a}^{p}v_{i}^{a}=0\). The first \(d-2\) vectors define the canonical bundle \(K\) of \(\mathbb{M}_{4}\) with an extra copy of \(\mathbb{C}\). The final charge vector projectivizes it and gives indeed the geometry we are interested in:
\[\mathbb{P}(K\oplus\mathcal{O})\,. \tag{3.42}\]
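For instance, for \(\mathbb{M}_{4}=\mathbb{P}^{2}\) with fan \(v^{1}=(1,1)\), \(v^{2}=(-1,0)\), \(v^{3}=(0,-1)\), the single GLSM charge of the base is \(q_{a}=(1,1,1)\) and the basis (3.41) reads
\[Q^{1}=(1,1,1,-3,0)\,,\qquad Q^{2}=(0,0,0,1,1)\,,\]
exhibiting \(M_{6}\) as the projectivization \(\mathbb{P}(\mathcal{O}(-3)\oplus\mathcal{O})\) of the canonical bundle over \(\mathbb{P}^{2}\).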
We need also to specify the integer fluxes of the M theory four-form along all the non-trivial four-cycles. There are \(d+2\) toric four-cycles in the geometry, associated with the divisors \(D_{A}\). The divisors \(D_{a}\) are \(\mathbb{P}^{1}\) fibrations over the toric two-cycles \(\Sigma_{a}\subset\mathbb{M}_{4}\), while \(D_{d+1}\) and \(D_{d+2}\) are copies of \(\mathbb{M}_{4}\) sitting at the North and South pole of \(\mathbb{P}^{1}\), respectively. All together, they define a vector of fluxes \(M_{A}\). The relations \(\sum_{A}V_{I}^{A}D_{A}=0\) imply that not all toric divisors are inequivalent and that the vector of fluxes satisfies
\[\sum_{A}V_{I}^{A}M_{A}=0\,. \tag{3.43}\]
Since we are dealing with M5 branes in M theory, we define the free energy as in section 3.1
\[F=\mathds{V}^{(3)}(\lambda_{A},\lambda_{AB},\epsilon_{I})\,, \tag{3.44}\]
and, since now we have a vector of fluxes, we impose the flux constraints
\[\nu_{M5}\,M_{A}=-\frac{\partial}{\partial\lambda_{A}}\mathds{V}^{(2)}(\lambda _{A},\lambda_{AB},\epsilon_{I})\,. \tag{3.45}\]
Differently from the case discussed in the previous section, for these geometries there is no general field theory result for the central charge of the dual four-dimensional SCFTs. Our results here can therefore be seen as a prediction for the general form of the off-shell central charge, which presumably can be obtained by integrating the M5 brane anomaly polynomial on a suitable two-cycle inside \(M_{6}\), or using the method of [22]. In order to compare with the existing literature, we will therefore consider in some detail a number of explicit examples of \(\mathbb{M}_{4}\), including \(\mathrm{KE}_{4}\) and \(\Sigma_{1}\times\Sigma_{2}\), but also other examples for which there is no known supergravity solution, nor field-theoretic understanding. The equations to be solved in the extremization problem typically lead to finding the zeroes of simultaneous polynomials
of high degree and are therefore not manageable. For this reason, we will proceed by making different technical assumptions to simplify the algebra. One such general assumption is the existence of a \(\mathbb{Z}_{2}\) symmetry acting on the \(\mathbb{P}^{1}\) fibre, as we discuss below. Furthermore, we will occasionally restrict to non-generic fluxes in order to simplify the otherwise unwieldy expressions.
If we restrict to a class of geometries with a \(\mathbb{Z}_{2}\) symmetry that exchanges the North and South poles of \(\mathbb{P}^{1}\), we can consider a simplified geometry obtained by cutting \(\mathbb{P}^{1}\) into half. We thus obtain a non-compact Calabi-Yau geometry given by the canonical bundle over \(\mathbb{M}_{4}\). The corresponding fan is obtained by dropping \(V^{d+2}\):
\[V^{a}=(v^{a},1)\,,\qquad V^{d+1}=(0,0,1)\,\,\,,\qquad a=1,\ldots,d\,. \tag{3.46}\]
Notice that this is a (partial) resolution of a CY\({}_{3}\) cone where \(V^{d+1}\) is associated with a compact divisor. Supergravity solutions with \(\mathbb{Z}_{2}\) symmetry have been considered in [19, 20], where they correspond to setting the parameter called \(c\) to zero. Effectively, the \(\mathbb{Z}_{2}\) symmetry reduces by one the number of independent fluxes we can turn on, thus simplifying the calculations. Notice that the on-shell equivariant volume \(\mathds{V}\) for the half-geometry is half of the one for the total geometry. The relation between the parameters to use in the two cases, in order to have the same normalization for the free energy, is the following
\[\overline{\nu}_{M5}=2^{-2/3}\nu_{M5}\,, \tag{3.47}\]
where \(\overline{\nu}_{M5}\) is the correct one for half-geometries.
Notice that we introduced single and double times in (3.45). We can immediately understand the need for higher times. In a compact geometry, \(\mathds{V}^{(2)}(\lambda_{A})\) with only single times would vanish identically.18 As we will discuss later, the double times are generically necessary also when imposing the \(\mathbb{Z}_{2}\) symmetry in order to have enough parameters to solve the equations.19
Footnote 18: For a compact geometry \(\mathds{V}^{(2)}(\lambda_{A})=-\frac{1}{2}\sum_{AB}\lambda_{A}\lambda_{B}\int_{M_{6}}c_{1}^{\mathbb{T}}(L_{A})c_{1}^{\mathbb{T}}(L_{B})=0\) since it is the integral of a four-form at most on a six-dimensional manifold. In the non-compact case, this condition is evaded and \(\mathds{V}^{(2)}(\lambda_{A})\) is a rational function of \(\epsilon_{I}\). See [2] for details.
Footnote 19: In the case of compactification on a spindle they are not necessary [2].
#### 3.2.1 Geometries with \(\mathbb{Z}_{2}\) symmetry
We consider first geometries with \(\mathbb{Z}_{2}\) symmetry. Cutting \(M_{6}\) into half we consider the non-compact CY\({}_{3}\) specified by the fan (3.46). The \(I=3\) condition in (3.43) gives
\[M_{d+1}=-\sum_{a}M_{a}\,, \tag{3.48}\]
thus fixing the flux along \(\mathbb{M}_{4}\) in terms of the other fluxes. The \(I=1,2\) conditions in (3.43) give two linear relations among the \(M_{a}\), leaving a total number \(d-2\) of independent fluxes. Notice that geometries without \(\mathbb{Z}_{2}\) symmetry have one additional independent flux, as we discuss later.
The fan is the union of \(d\) cones \((V^{a},V^{a+1},V^{d+1})\) and we see that the number of fixed points is the same as that of the base \(\mathbb{M}_{4}\). It is then easy to write the equivariant volume with higher times as a sum over the fixed points of \(\mathbb{M}_{4}\)
\[\mathds{V}=\sum_{a}\frac{\mathrm{e}^{\tau_{a}}}{d_{a,a+1}\epsilon_{1}^{a} \epsilon_{2}^{a}(\epsilon_{3}-\epsilon_{1}^{a}-\epsilon_{2}^{a})}\,, \tag{3.49}\]
where
\[\tau_{a}=\left(\sum_{A}\lambda_{A}\,c_{1}^{\mathbb{T}^{3}}(L_{A})+\sum_{A,B} \lambda_{AB}\,c_{1}^{\mathbb{T}^{3}}(L_{A})\,c_{1}^{\mathbb{T}^{3}}(L_{B}) \right)\bigg{|}_{a} \tag{3.50}\]
or, more explicitly,
\[\begin{split}\tau_{a}=&-\lambda_{a}\epsilon_{1}^{a} -\lambda_{a+1}\epsilon_{2}^{a}-\lambda_{d+1}(\epsilon_{3}-\epsilon_{1}^{a}- \epsilon_{2}^{a})+\lambda_{aa}(\epsilon_{1}^{a})^{2}+2\lambda_{a,a+1}\epsilon _{1}^{a}\epsilon_{2}^{a}+\lambda_{a+1,a+1}(\epsilon_{2}^{a})^{2}\\ &+2(\lambda_{a,d+1}\epsilon_{1}^{a}+\lambda_{a+1,d+1}\epsilon_{2 }^{a})(\epsilon_{3}-\epsilon_{1}^{a}-\epsilon_{2}^{a})+\lambda_{d+1,d+1}( \epsilon_{3}-\epsilon_{1}^{a}-\epsilon_{2}^{a})^{2}\,.\end{split} \tag{3.51}\]
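Note that the last factor in the denominator of (3.49) is simply (minus) the restriction of \(c_{1}^{\mathbb{T}^{3}}(L_{d+1})\): the \(I=3\) component of (2.4) for the fan (3.46) gives
\[c_{1}^{\mathbb{T}^{3}}(L_{d+1})\big{|}_{a}=-\epsilon_{3}-\sum_{b}c_{1}^{\mathbb{T}^{3}}(L_{b})\big{|}_{a}=-(\epsilon_{3}-\epsilon_{1}^{a}-\epsilon_{2}^{a})\,.\]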
Notice that the equations (3.45) are not solvable with only single times. \(M_{d+1}=-\sum_{a}M_{a}\neq 0\) while, for \(\lambda_{AB}=0\),
\[\begin{split}-\frac{\partial\mathds{V}^{(2)}}{\partial\lambda_{ d+1}}&=\sum_{a}\frac{-\lambda_{a}\epsilon_{1}^{a}-\lambda_{a+1} \epsilon_{2}^{a}-\lambda_{d+1}(\epsilon_{3}-\epsilon_{1}^{a}-\epsilon_{2}^{a}) }{d_{a,a+1}\epsilon_{1}^{a}\epsilon_{2}^{a}}\\ &=\int_{\mathbb{M}_{4}}\left(\sum_{a}\lambda_{a}c_{1}^{\mathbb{T} }(L_{a})-\lambda_{d+1}\Big{(}\epsilon_{3}+\sum_{a}c_{1}^{\mathbb{T}}(L_{a}) \Big{)}\right)=0\,,\end{split} \tag{3.52}\]
being the integral of a two-form at most.
The equations (3.45) explicitly read
\[(I) \overline{\nu}_{M5}M_{a}=\frac{\epsilon_{1}^{a}\tau_{a}}{d_{a,a+ 1}\epsilon_{1}^{a}\epsilon_{2}^{a}(\epsilon_{3}-\epsilon_{1}^{a}-\epsilon_{2} ^{a})}+\frac{\epsilon_{2}^{a-1}\tau_{a-1}}{d_{a-1,a}\epsilon_{1}^{a-1} \epsilon_{2}^{a-1}(\epsilon_{3}-\epsilon_{1}^{a-1}-\epsilon_{2}^{a-1})}\,,\] \[(II) -\overline{\nu}_{M5}\sum_{a}M_{a}=\sum_{a}\frac{\tau_{a}}{d_{a,a+ 1}\epsilon_{1}^{a}\epsilon_{2}^{a}}\,. \tag{3.53}\]
These equations are not independent. In particular, \((II)\) follows from \((I)\).20
Footnote 20: Using \(\sum_{a}v^{a}M_{a}=0\) and the vector identity \(v_{i}^{a}\epsilon_{1}^{a}+v_{i}^{a+1}\epsilon_{2}^{a}=\epsilon_{i}\), one derives \(\epsilon_{i}\sum_{a}\frac{\tau_{a}}{d_{a,a+1}\epsilon_{1}^{a}\epsilon_{2}^{a }(\epsilon_{3}-\epsilon_{1}^{a}-\epsilon_{2}^{a})}=0\) from \((I)\). Then, summing over \(a\) in \((I)\), and using the previous identity:
\[\overline{\nu}_{M5}\sum_{a}M_{a}=\sum_{a}\frac{(\epsilon_{1}^{a}+\epsilon_{2} ^{a})\tau_{a}}{d_{a,a+1}\epsilon_{1}^{a}\epsilon_{2}^{a}(\epsilon_{3}- \epsilon_{1}^{a}-\epsilon_{2}^{a})}=-\sum_{a}\frac{\tau_{a}}{d_{a,a+1}\epsilon _{1}^{a}\epsilon_{2}^{a}}\,, \tag{3.54}\]
valid for \(\epsilon_{i}\neq 0\). For \(\epsilon_{i}=0\) one should pay more attention and we will see in section 4.2 one instance where a similar subtlety is important. In the present case we will check explicitly that both \((I)\) and \((II)\) are valid. The equations \((I)\) can be written as
\[B_{a-1}^{(1)}-B_{a}^{(1)}=d_{a,a+1}\epsilon_{2}^{a}\overline{\nu}_{M5}M_{a}\,, \qquad\qquad B_{a}^{(1)}=-\frac{\tau_{a}}{\epsilon_{3}-\epsilon_{1}^{a}- \epsilon_{2}^{a}}\,. \tag{3.55}\]
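Note that this rewriting uses the relation
\[d_{a-1,a}\,\epsilon_{1}^{a-1}=-d_{a,a+1}\,\epsilon_{2}^{a}=-\det(v^{a},\epsilon)\,,\]
which follows directly from (2.17).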
It is then clear that these equations can be solved for \(\tau_{a}\), but one "time", say \(\tau_{1}\), remains undetermined. Our prescription is to _extremize_ the free energy with respect to _all_ parameters that are left undetermined after imposing the flux constraints. In this case then we extremize
\[\mathds{V}^{(3)}(\epsilon_{i},\tau_{1}) \tag{3.56}\]
with respect to \(\epsilon_{1}\), \(\epsilon_{2}\) and \(\tau_{1}\), with \(\epsilon_{3}\) set to some canonical value, fixed by the scaling of the supercharge under the R-symmetry vector field. In the next subsection we will parameterize the free energy in a more convenient way.
#### 3.2.2 The extremal function for geometries with \(\mathbb{Z}_{2}\) symmetry
We can write the general form of the extremal function for geometries with \(\mathbb{Z}_{2}\) symmetry. Let us define
\[\tau_{CY_{3}}^{\mathbb{T}^{3}}=\sum_{A}\lambda_{A}\,c_{1}^{\mathbb{T}^{3}}(L_ {A})+\sum_{A,B}\lambda_{AB}\,c_{1}^{\mathbb{T}^{3}}(L_{A})\,c_{1}^{\mathbb{T}^ {3}}(L_{B})\,, \tag{3.57}\]
the equivariant form with restriction \(\tau_{a}\) at the fixed points. By restricting the form to \(\mathbb{M}_{4}\) and considering \(\epsilon_{3}\) as a parameter, we obtain
\[\begin{split}\tau^{\mathbb{T}}&=\sum_{A}\lambda_{ A}\,\mathcal{C}_{A}+\sum_{A,B}\lambda_{AB}\,\mathcal{C}_{A}\,\mathcal{C}_{B}\;, \\ \mathcal{C}_{a}&=c_{1}^{\mathbb{T}}(L_{a})\;,\qquad a =1,\ldots,d\;,\\ \mathcal{C}_{d+1}&=\,-\Big{(}\epsilon_{3}+\sum_{a}c_ {1}^{\mathbb{T}}(L_{a})\Big{)}\;,\end{split} \tag{3.58}\]
where \(c_{1}^{\mathbb{T}}(L_{a})\) are the restrictions of the line bundles \(L_{a}\) to the base \(\mathbb{M}_{4}\) and \(\mathbb{T}\) is the two-dimensional torus spanned by \(\epsilon_{1}\) and \(\epsilon_{2}\). From now on, unless explicitly said, all classes will refer to the base \(\mathbb{M}_{4}\). In terms of \(\tau^{\mathbb{T}}\) the quadratic piece of the equivariant volume can be written as
\[\mathds{V}^{(2)}(\lambda_{A},\lambda_{AB},\epsilon_{I})=\frac{1}{2}\int_{ \mathbb{M}_{4}}\frac{(\tau^{\mathbb{T}})^{2}}{\epsilon_{3}+\sum_{a}c_{1}^{ \mathbb{T}}(L_{a})}\,. \tag{3.59}\]
The flux constraints (3.45) give
\[\begin{split}(I)&\qquad-\overline{\nu}_{M5}M_{a}= \int_{\mathbb{M}_{4}}\frac{c_{1}^{\mathbb{T}}(L_{a})\,\tau^{\mathbb{T}}}{ \epsilon_{3}+\sum_{a}c_{1}^{\mathbb{T}}(L_{a})}\,,\\ (II)&\qquad-\overline{\nu}_{M5}\sum_{a}M_{a}=\int_{ \mathbb{M}_{4}}\tau^{\mathbb{T}}\,.\end{split} \tag{3.60}\]
For a generic fan, using the gauge transformations (2.14) and (2.15) we can set all \(\lambda_{a}=\lambda_{a,a}=\lambda_{a,a+1}=0\).21 We will show more formally in appendix A that \(\mathds{V}^{(3)}\) has
a critical point at \(\lambda_{a}=\lambda_{a,a}=\lambda_{a,a+1}=0\). Then condition \((I)\) becomes
\[-\overline{\nu}_{M5}M_{a} =\int_{\mathbb{M}_{4}}c_{1}^{\mathbb{T}}(L_{a})\Big{(}-\lambda_{d+ 1}-2\sum_{b}\lambda_{b,d+1}c_{1}^{\mathbb{T}}(L_{b})+\lambda_{d+1,d+1}\Big{(} \epsilon_{3}+\sum_{b}c_{1}^{\mathbb{T}}(L_{b})\Big{)}\Big{)}\] \[=\sum_{b}D_{ab}(-2\lambda_{b,d+1}+\lambda_{d+1,d+1})\,. \tag{3.61}\]
We can similarly compute \((II)\) as an integral
\[-\overline{\nu}_{M5}\sum_{a}M_{a}=\int_{\mathbb{M}_{4}}\tau^{\mathbb{T}}=\sum _{a,b}D_{ab}(-2\lambda_{b,d+1}+\lambda_{d+1,d+1})\,, \tag{3.62}\]
and see that it is automatically satisfied if \((I)\) is. Since \(\sum_{a}v_{i}^{a}M_{a}=0\), the flux constraints fix the \(\lambda_{b,d+1}\) only up to the ambiguities
\[\lambda_{a,d+1}\to\lambda_{a,d+1}+\sum_{i=1}^{2}\delta_{i}v_{i}^ {a}+\gamma\,, \tag{3.63}\] \[\lambda_{d+1,d+1}\to\lambda_{d+1,d+1}+2\gamma\,,\]
where \(\delta_{i}\) and \(\gamma\) are free parameters. However, these free parameters can be all reabsorbed in a redefinition of
\[\lambda_{d+1}\to\lambda_{d+1}+2\gamma\epsilon_{3}+2\sum_{i=1}^{2}\delta_{i} \epsilon_{i}\,, \tag{3.64}\]
since \(\sum_{a}v_{i}^{a}c_{1}^{\mathbb{T}}(L_{a})=-\epsilon_{i}\) and they are not really independent.
The free energy is then given by
\[\mathds{V}^{(3)}=\frac{1}{6}\int_{\mathbb{M}_{4}}\frac{(\tau^{\mathbb{T}})^{3 }}{\epsilon_{3}+\sum_{a}c_{1}^{\mathbb{T}}(L_{a})}\,, \tag{3.65}\]
which explicitly gives
\[\mathds{V}^{(3)}=\frac{1}{6}\int_{\mathbb{M}_{4}}\Bigl{(}\epsilon_{3}+\sum_{a }c_{1}^{\mathbb{T}}(L_{a})\Bigr{)}^{2}\Bigl{(}\bar{\lambda}_{d+1}+\sum_{b} \bar{\lambda}_{b,d+1}c_{1}^{\mathbb{T}}(L_{b})\Bigr{)}^{3}\,, \tag{3.66}\]
where we defined
\[\bar{\lambda}_{d+1}=-\lambda_{d+1}+\lambda_{d+1,d+1}\epsilon_{3}\,,\qquad\bar {\lambda}_{a,d+1}=-2\lambda_{a,d+1}+\lambda_{d+1,d+1}\,, \tag{3.67}\]
which are subject to the constraints
\[-\overline{\nu}_{M5}M_{a}=\sum_{b}D_{ab}\bar{\lambda}_{b,d+1}\,. \tag{3.68}\]
Substituting the solution of the flux constraints, \(\mathds{V}^{(3)}\) becomes a function of \(\epsilon_{i}\) and the extra parameter \(\bar{\lambda}_{d+1}\). Indeed, as we have seen, the ambiguities (3.63) can be reabsorbed in a redefinition of \(\bar{\lambda}_{d+1}\). A direct evaluation gives
\[6\mathds{V}^{(3)}=\bar{\lambda}_{d+1}^{3}\sum_{ab}D_{ab}+\bar{ \lambda}_{d+1}^{2}\Big{(}6\epsilon_{3}\sum_{ab}D_{ab}\bar{\lambda}_{a,d+1}+3 \sum_{abc}D_{abc}\bar{\lambda}_{a,d+1}\Big{)}\] \[\quad+3\bar{\lambda}_{d+1}\Big{(}\epsilon_{3}^{2}\sum_{ab}D_{ab} \bar{\lambda}_{a,d+1}\bar{\lambda}_{b,d+1}+2\epsilon_{3}\sum_{abc}D_{abc}\bar{ \lambda}_{a,d+1}\bar{\lambda}_{b,d+1}+\sum_{abcd}D_{abcd}\bar{\lambda}_{a,d+1} \bar{\lambda}_{b,d+1}\Big{)}\] \[\quad+\Big{(}\epsilon_{3}^{2}\sum_{abc}D_{abc}\bar{\lambda}_{a,d +1}\bar{\lambda}_{b,d+1}\bar{\lambda}_{c,d+1}+2\epsilon_{3}\sum_{abcd}D_{abcd} \bar{\lambda}_{a,d+1}\bar{\lambda}_{b,d+1}\bar{\lambda}_{c,d+1}\] \[\quad+\sum_{abcde}D_{abcde}\bar{\lambda}_{a,d+1}\bar{\lambda}_{b,d+1}\bar{\lambda}_{c,d+1}\Big{)}\,, \tag{3.69}\]
where the generalized intersection numbers are defined by
\[D_{a_{1}\dots a_{p}}=\int_{\mathbb{M}_{4}}c_{1}^{\mathbb{T}}(L_{a_{1}})\dots c _{1}^{\mathbb{T}}(L_{a_{p}})\,. \tag{3.70}\]
Notice that \(D_{ab}\) is \(\epsilon\)-independent, while \(D_{a_{1}\dots a_{p}}\) is a homogeneous function of degree \(p-2\) in \(\epsilon_{1}\) and \(\epsilon_{2}\). \(\mathds{V}^{(3)}\) needs to be extremized with respect to \(\epsilon_{1}\), \(\epsilon_{2}\) and \(\bar{\lambda}_{d+1}\), with \(\epsilon_{3}\) set to the canonical value.
The critical point is generically at a non-zero value of \(\epsilon_{1}\) and \(\epsilon_{2}\). We can expect a critical point22 at \(\epsilon_{1}=\epsilon_{2}=0\) only if the background and the fluxes have some extra symmetry, as for example in the cases where all \(U(1)\) isometries are enhanced to a non-abelian group. In these particular cases, we can further simplify the expression
Footnote 22: In the opposite direction, of course one would have \(\epsilon_{1}=\epsilon_{2}=0\) as critical point if the base \(B_{4}\) has _no_ continuous symmetries. This is the case for example for del Pezzo surfaces \(\text{dP}_{k}\) with \(k>3\), which we do not treat here. This would lead one to suspect that all \(\text{KE}_{4}\) have \(\epsilon_{1}=\epsilon_{2}=0\) as critical point, but this is actually incorrect, as the example of the toric \(\text{dP}_{3}\) will show.
\[6\mathds{V}^{(3)} =\bar{\lambda}_{d+1}^{3}\sum_{ab}D_{ab}+6\epsilon_{3}\bar{\lambda }_{d+1}^{2}\sum_{ab}D_{ab}\bar{\lambda}_{a,d+1}+3\epsilon_{3}^{2}\bar{\lambda }_{d+1}\sum_{ab}D_{ab}\bar{\lambda}_{a,d+1}\bar{\lambda}_{b,d+1}+O(\epsilon_{ i}^{2})\] \[=\bar{\lambda}_{d+1}^{3}\sum_{ab}D_{ab}-6\overline{\nu}_{M5}\sum _{a}M_{a}\epsilon_{3}\bar{\lambda}_{d+1}^{2}+3\epsilon_{3}^{2}\bar{\lambda}_{ d+1}\sum_{ab}D_{ab}\bar{\lambda}_{a,d+1}\bar{\lambda}_{b,d+1}+O(\epsilon_{i}^{2})\,, \tag{3.71}\]
and extremize it with respect to \(\bar{\lambda}_{d+1}\).
As a check of our expression, we can reproduce the central charge of the existing solutions with Kahler-Einstein metrics and fluxes all equal [20]. The only toric four-manifolds that are also KE are \(\mathbb{P}^{2}\), \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and \(\text{dP}_{3}\), with fans
\[\mathbb{P}^{2}:\;\;v^{1}=(1,1)\,,\,v^{2}=(-1,0)\,,\,v^{3}=(0,-1)\,,\] \[\mathbb{P}^{1}\times\mathbb{P}^{1}:\;\;v^{1}=(1,0)\,,\,v^{2}=(0,1 )\,,\,v^{3}=(-1,0)\,,\,v^{4}=(0,-1)\,,\] \[\text{dP}_{3}:\,v^{1}=(1,0)\,,\,v^{2}=(1,1)\,,\,v^{3}=(0,1)\,,\,v^ {4}=(-1,0)\,,\,v^{5}=(-1,-1)\,,\,v^{6}=(0,-1)\,, \tag{3.72}\]
and intersection matrices
\[\mathbb{P}^{2}:\qquad\quad D_{ab}=1\,,\] \[\mathbb{P}^{1}\times\mathbb{P}^{1}:\quad D_{ab}=1\text{ if }|a-b|=1 \,(\text{mod}\,2)\quad\text{and}\quad\text{zero otherwise}\,, \tag{3.73}\] \[\text{dP}_{3}:\qquad\quad D_{aa}=-1\,,\quad D_{a,a\pm 1}=1\quad \text{and}\quad\text{zero otherwise}\,,\]
where the indices are cyclically identified. To compare with the \(\text{KE}_{4}\) solutions, we set all \(M_{a}\equiv N\). We can then choose all \(\bar{\lambda}_{a,d+1}\) equal and we find
\[\sum_{abc}D_{abc}\bar{\lambda}_{a,d+1}=\sum_{abc}D_{abc}\bar{\lambda}_{a,d+1} \bar{\lambda}_{b,d+1}=\sum_{abc}D_{abc}\bar{\lambda}_{a,d+1}\bar{\lambda}_{b, d+1}\bar{\lambda}_{c,d+1}=0\,, \tag{3.74}\]
thus ensuring that the linear terms in \(\epsilon_{1}\) and \(\epsilon_{2}\) in \(\mathds{V}^{(3)}\) vanish, and that there is indeed a critical point at \(\epsilon_{1}=\epsilon_{2}=0\). Extremizing (3.71) we get
\[\mathds{V}^{(3)}=\epsilon_{3}^{3}\overline{\nu}_{M5}^{3}(-5+3\sqrt{3})N^{3} \{\frac{1}{9},\frac{1}{3},2\}\,, \tag{3.75}\]
for \(\mathbb{P}^{2}\), \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and \(\text{dP}_{3}\), respectively, which agrees with (2.16) in [20] for \(\epsilon_{3}\overline{\nu}_{M5}=3\).23
Footnote 23: \(N_{C_{N}}\) in [20] can be identified with \(M_{d+1}=-\sum M_{a}\), so that \(N_{there}=-hN_{C_{N}}/M=h\sum M_{a}/M\) where \((h,M)\) are defined in [20] and they have value \((3,9),(4,8)\) and \((2,6)\) for \(\mathbb{P}^{2}\), \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and \(\text{dP}_{3}\), respectively.
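For the reader's convenience we spell out the intermediate steps for \(\mathbb{P}^{2}\). With all \(M_{a}=N\), the constraint (3.68) gives \(\bar{\lambda}_{a,d+1}=-\overline{\nu}_{M5}N/3\), and since \(\sum_{ab}D_{ab}=9\) the expression (3.71) reduces at \(\epsilon_{1}=\epsilon_{2}=0\) to
\[6\mathds{V}^{(3)}=9\,\bar{\lambda}_{d+1}^{3}-18\,\overline{\nu}_{M5}N\epsilon_{3}\,\bar{\lambda}_{d+1}^{2}+3\,\overline{\nu}_{M5}^{2}N^{2}\epsilon_{3}^{2}\,\bar{\lambda}_{d+1}\,,\]
which is extremized at \(\bar{\lambda}_{d+1}=\frac{1}{3}(2-\sqrt{3})\,\overline{\nu}_{M5}N\epsilon_{3}\) (choosing the root that reproduces (3.75)), with value \(\mathds{V}^{(3)}=\frac{1}{9}(-5+3\sqrt{3})\,\epsilon_{3}^{3}\overline{\nu}_{M5}^{3}N^{3}\).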
For \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) we can introduce a second flux. The general solution to \(\sum_{A}V_{I}^{A}M_{A}=0\) is indeed
\[M_{A}=(N_{1},N_{2},N_{1},N_{2},-2N_{1}-2N_{2})\,. \tag{3.76}\]
The background has an expected \(SU(2)\times SU(2)\) symmetry, which is realized in the supergravity solution [20]; the latter is now not in the KE class. Using the gauge transformation (2.14) we can set \(\bar{\lambda}_{a+2,d+1}=\bar{\lambda}_{a,d+1}\). A simple computation then shows that the free energy has a critical point at \(\epsilon_{1}=\epsilon_{2}=0\), consistently with the non-abelian isometry of the solution, with critical value
\[\mathds{V}^{(3)}=\frac{\epsilon_{3}^{3}\overline{\nu}_{M5}^{3}}{6}\left(2(N_{ 1}^{2}+N_{1}N_{2}+N_{2}^{2})^{3/2}-(2N_{1}^{3}+3N_{1}^{2}N_{2}+3N_{1}N_{2}^{2 }+2N_{2}^{3})\right)\,, \tag{3.77}\]
which should be compared with (2.29) in [20] with \(N_{1}=pN\) and \(N_{2}=qN\). This looks superficially different, but it can be rewritten in the form above (_cf._ for example (F.14) in [22]).
#### 3.2.3 Examples of geometries with non-zero critical \(\boldsymbol{\epsilon}\)
So far, in all the explicit examples we have discussed, we found that \(\epsilon_{1}=\epsilon_{2}=0\) is a critical point. However, we have already pointed out that for generic toric \(\mathbb{M}_{4}\) and/or with generic fluxes this will not be the case. In this subsection we will investigate situations in which at least one of \(\epsilon_{1},\epsilon_{2}\) is different from zero at the critical point, by considering geometries with \(SU(2)\times U(1)\) symmetry, as well as the case of \(\text{dP}_{3}\) with generic fluxes. Interestingly, it turns out that for dP\({}_{3}\) there exist two special configurations of fluxes (different from the case where they are all equal) for which the critical point is again \(\epsilon_{1}=\epsilon_{2}=0\), but the corresponding supergravity solutions are not known. For four independent generic fluxes, instead, \(\epsilon_{1}=\epsilon_{2}=0\) is not a critical point.
#### dP\({}_{3}\) with unequal fluxes
The symmetry of dP\({}_{3}\) is just \(U(1)\times U(1)\) and the existence of the critical point \(\epsilon_{1}=\epsilon_{2}=0\) of the extremization problem is not obviously implied by the fact that there exists a KE metric on dP\({}_{3}\). In the basis of the fan as in (3.72), the general assignment of fluxes compatible with \(\sum_{A}V_{I}^{A}M_{A}=0\) can be parameterized as
\[M_{A}=(N_{1},N_{2},N_{3},N_{4},N_{5},N_{6},-2N_{1}-3N_{2}-2N_{3}+N_{5})\,, \tag{3.78}\]
where we choose \(N_{1},N_{2},N_{3},N_{5}\) as independent, with \(N_{4}=N_{1}+N_{2}-N_{5}\) and \(N_{6}=N_{2}+N_{3}-N_{5}\). Upon setting \(\lambda_{a}=\lambda_{a,a}=\lambda_{a,a+1}=0\) using the gauge freedom, as discussed before, the constraint (3.68) on \(\bar{\lambda}_{a,d+1}\) can be solved by taking, for example
\[\begin{split}\bar{\lambda}_{1,d+1}&=-\bar{\nu}_{M5 }\frac{N_{2}+N_{3}}{2}\,,\qquad\qquad\qquad\qquad\qquad\bar{\lambda}_{2,d+1}=- \bar{\nu}_{M5}\frac{N_{3}+N_{1}}{2}\,,\\ \bar{\lambda}_{3,d+1}&=-\bar{\nu}_{M5}\frac{N_{1}+N _{2}}{2}\,,\qquad\qquad\qquad\qquad\bar{\lambda}_{4,d+1}=-\bar{\nu}_{M5}\frac{ N_{2}+N_{3}}{2}\,,\\ \bar{\lambda}_{5,d+1}&=-\bar{\nu}_{M5}\frac{N_{3}+N _{1}+2(N_{2}-N_{5})}{2}\,,\qquad\bar{\lambda}_{6,d+1}=-\bar{\nu}_{M5}\frac{N_{ 1}+N_{2}}{2}\,.\end{split} \tag{3.79}\]
Writing out the free energy (3.69), up to linear order in \(\epsilon_{1},\epsilon_{2}\), we have
\[\mathds{V}^{(3)}=\left.\mathds{V}^{(3)}\right|_{\epsilon_{i}=0}+\left.\partial _{\epsilon_{1}}\mathds{V}^{(3)}\right|_{\epsilon_{i}=0}\epsilon_{1}+\left. \partial_{\epsilon_{2}}\mathds{V}^{(3)}\right|_{\epsilon_{i}=0}\epsilon_{2}+O (\epsilon_{i}^{2}) \tag{3.80}\]
where the constant term is not particularly interesting and
\[\begin{split}\left.\partial_{\epsilon_{1}}\mathds{V}^{(3)} \right|_{\epsilon_{i}=0}&=\bar{\nu}_{M5}\frac{N_{5}-N_{2}}{2} \Big{[}6\bar{\lambda}_{d+1}^{2}-12(N_{2}+N_{3})\bar{\nu}_{M5}\epsilon_{3}\bar {\lambda}_{d+1}\\ &+\big{(}N_{2}^{2}-2N_{5}^{2}+3N_{3}(N_{2}+N_{5})+N_{5}N_{2}\big{)} \bar{\nu}_{M5}^{2}\epsilon_{3}^{2}\Big{]}\,,\\ \left.\partial_{\epsilon_{2}}\mathds{V}^{(3)}\right|_{\epsilon_{i }=0}&=\bar{\nu}_{M5}\frac{N_{5}-N_{2}}{2}\Big{[}6\bar{\lambda}_{d +1}^{2}-12(N_{2}+N_{1})\bar{\nu}_{M5}\epsilon_{3}\bar{\lambda}_{d+1}\\ &+\big{(}N_{2}^{2}-2N_{5}^{2}+3N_{1}(N_{2}+N_{5})+N_{5}N_{2}\big{)} \bar{\nu}_{M5}^{2}\epsilon_{3}^{2}\Big{]}\,.\end{split} \tag{3.81}\]
We see that for generic values of the fluxes the expressions above cannot be zero simultaneously, implying that \(\epsilon_{i}=0\) is not a critical point of the extremization. The complete extremization equations are unwieldy, so in the following we will instead concentrate on two special configurations of fluxes, with enhanced symmetry, for which \(\epsilon_{1}=\epsilon_{2}=0\) turns out to be a critical point.
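To make the statement concrete, one can check with a computer algebra system that the two bracketed quadratics in \(\bar{\lambda}_{d+1}\) appearing in (3.81) generically have no common root. A minimal sympy sketch (with an arbitrary illustrative choice of fluxes, and with the combination \(\bar{\nu}_{M5}\epsilon_{3}\) denoted by `eps`) is:

```python
import sympy as sp

lam, eps = sp.symbols('lam eps')      # lam ~ bar-lambda_{d+1}, eps ~ nu_M5 * eps_3
N1, N2, N3, N5 = 1, 2, 3, 5           # arbitrary generic fluxes (in particular N5 != N2)

# bracketed quadratics of (3.81)
Q1 = 6*lam**2 - 12*(N2 + N3)*eps*lam + (N2**2 - 2*N5**2 + 3*N3*(N2 + N5) + N5*N2)*eps**2
Q2 = 6*lam**2 - 12*(N2 + N1)*eps*lam + (N2**2 - 2*N5**2 + 3*N1*(N2 + N5) + N5*N2)*eps**2

# the resultant in lam vanishes iff Q1 and Q2 have a common root; by homogeneity it is
# proportional to eps**4, and its coefficient turns out to be nonzero, so for eps != 0
# the two quadratics (hence the two gradients in (3.81), whose common prefactor
# (N5 - N2)/2 is nonzero here) cannot vanish simultaneously
print(sp.factor(sp.resultant(Q1, Q2, lam)))
```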
The first special value of fluxes is clearly obtained for \(N_{5}=N_{2}\), which leaves three fluxes \(N_{1},N_{2},N_{3}\) free. In this case the parameters (3.79) acquire the cyclic symmetry \(\bar{\lambda}_{a,d+1}=\bar{\lambda}_{a+3,d+1}\), analogously to the \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) case discussed in the previous section, and indeed the linear terms in \(\mathds{V}^{(3)}\) manifestly vanish, so that \(\epsilon_{i}=0\) is a critical point. The fluxes display an enhanced symmetry:
\[M_{A}=(N_{1},N_{2},N_{3},N_{1},N_{2},N_{3},-2N_{1}-2N_{2}-2N_{3})\,. \tag{3.82}\]
Extremizing \(\mathbb{V}^{(3)}\) with respect to \(\bar{\lambda}_{d+1}\) yields
\[\begin{split}\bar{\lambda}_{d+1}^{*}&=\frac{2\bar{ \nu}_{M5}\epsilon_{3}}{3}(N_{1}+N_{2}+N_{3})\\ &-\frac{\bar{\nu}_{M5}\epsilon_{3}}{3}\sqrt{4(N_{1}^{2}+N_{2}^{2 }+N_{3}^{2})+5(N_{1}N_{2}+N_{2}N_{3}+N_{3}N_{1})}\,,\end{split} \tag{3.83}\]
and the corresponding value of the on-shell central charge is
\[\begin{split}\mathds{V}^{(3)}&=\frac{2\bar{\nu}_{M 5}^{3}\epsilon_{3}^{3}}{27}\Big{[}\big{(}4(N_{1}^{2}+N_{2}^{2}+N_{3}^{2})+5(N_{ 1}N_{2}+N_{2}N_{3}+N_{3}N_{1})\big{)}^{3/2}\\ &-(N_{1}+N_{2}+N_{3})\big{(}8(N_{1}^{2}+N_{2}^{2}+N_{3}^{2})+7(N_ {1}N_{2}+N_{2}N_{3}+N_{3}N_{1})\big{)}\Big{]}\,.\end{split} \tag{3.84}\]
It can be checked that this expression agrees precisely with the central charge given in eq. (3.79) of [24] and it correctly reduces to (3.75) upon setting \(N_{1}=N_{2}=N_{3}=N\).
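The reduction to (3.75), for instance, is a one-line check with a computer algebra system; a minimal sympy sketch (setting \(\bar{\nu}_{M5}=\epsilon_{3}=1\), so that the dP\({}_{3}\) entry of (3.75) reads \(2(3\sqrt{3}-5)N^{3}\)) is:

```python
import sympy as sp

N = sp.symbols('N', positive=True)
S = 3*N**2                     # N1^2 + N2^2 + N3^2 at N1 = N2 = N3 = N
P = 3*N**2                     # N1*N2 + N2*N3 + N3*N1
# on-shell central charge (3.84) with nu_M5 = eps_3 = 1
V = sp.Rational(2, 27)*((4*S + 5*P)**sp.Rational(3, 2) - 3*N*(8*S + 7*P))
# dP3 entry of (3.75): 2*(3*sqrt(3) - 5)*N**3
assert sp.simplify(V - 2*(3*sp.sqrt(3) - 5)*N**3) == 0
```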
Notice that while the expression of \(\bar{\lambda}_{d+1}^{*}\) depends on the specific gauge chosen for the parameters \(\bar{\lambda}_{a,d+1}\), the critical values \(\epsilon_{i}^{*}=0\) and the central charge (3.84) do not rely on this.
The second special value of fluxes that we found is \(N_{1}=N_{3}=N_{5}\), which implies \(N_{a}=N_{a+2}\), so that the fluxes have again an enhanced symmetry:
\[M_{A}=(N_{1},N_{2},N_{1},N_{2},N_{1},N_{2},-3N_{1}-3N_{2})\,. \tag{3.85}\]
In this case, notice that the two expressions in (3.81) coincide, so that it is possible that both linear terms vanish, for a particular value of \(\bar{\lambda}_{d+1}^{*}\), despite \(N_{2}\neq N_{5}\). However, the parameters in (3.79) do not enjoy this new symmetry, so it is better to look for a different gauge, where the parameters respect the additional symmetry, namely \(\bar{\lambda}_{a,d+1}=\bar{\lambda}_{a+2,d+1}\). This can be achieved choosing
\[\bar{\lambda}_{1,d+1}=-\bar{\nu}_{M5}\frac{N_{1}+2N_{2}}{3}\,,\qquad\bar{ \lambda}_{2,d+1}=-\bar{\nu}_{M5}\frac{2N_{1}+N_{2}}{3}\,, \tag{3.86}\]
and cyclic permutations. In this gauge, we can now check that \(\mathds{V}^{(3)}\) has no linear terms in \(\epsilon_{1}\) and \(\epsilon_{2}\). Therefore, extremizing \(\mathds{V}^{(3)}\) with respect to \(\bar{\lambda}_{d+1}\), \(\epsilon_{1}\) and \(\epsilon_{2}\), we obtain the critical values \(\epsilon_{1,2}^{*}=0\) and
\[\bar{\lambda}_{d+1}^{*}=\frac{6(N_{1}+N_{2})-\sqrt{6(5N_{1}^{2}+8N_{1}N_{2}+5N _{2}^{2})}}{6}\,\bar{\nu}_{M5}\epsilon_{3}\,, \tag{3.87}\]
and the corresponding value of the on-shell central charge is
\[\mathbb{V}^{(3)}=\frac{\bar{\nu}_{M5}^{3}\epsilon_{3}^{3}}{4}\Big{[}\frac{\big{(}6 (5N_{1}^{2}+8N_{1}N_{2}+5N_{2}^{2})\big{)}^{3/2}}{27}-2(N_{1}+N_{2})\big{(}3N_{1 }^{2}+4N_{1}N_{2}+3N_{2}^{2}\big{)}\Big{]}\,. \tag{3.88}\]
It can be checked that this expression agrees precisely with the central charge given in eq. (3.79) of [24] and it correctly reduces to (3.75) upon setting \(N_{1}=N_{2}=N\).
It would be interesting to construct explicit supergravity solutions corresponding to the two special configurations of fluxes we found. If they exist, they should lie outside the KE class considered in [19].
#### \(\mathbb{M}_{4}=S^{2}\ltimes\mathbb{\Sigma}\)
We now consider the toric orbifold \(\mathbb{M}_{4}=S^{2}\ltimes\mathbb{\Sigma}\), namely a spindle \(\mathbb{\Sigma}=\mathbb{W}\mathbb{P}^{1}_{[n_{+},n_{-}]}\) fibred over a two-sphere, which is a case with only an \(SU(2)\times U(1)\) symmetry. We take the following fan
\[v^{1}=(n_{-},0)\,,\qquad v^{2}=(-k,1)\,,\qquad v^{3}=(-n_{+},0)\,,\qquad v^{4 }=(0,-1)\,, \tag{3.89}\]
and refer to [14] for more details about this orbifold. The total fan is as in (3.46) and the constraint (3.43) is solved by
\[M_{a}=\left(\frac{N_{1}}{n_{-}},N_{2},\frac{N_{1}-k\,N_{2}}{n_{+}},N_{2} \right)\,,\qquad M_{d+1}=-\sum_{a}M_{a}\,, \tag{3.90}\]
where \(N_{1},N_{2}\) parameterize the two independent fluxes; notice that the equality \(M_{2}=M_{4}=N_{2}\) is implied by the \(SU(2)\) symmetry acting on the base \(S^{2}\). The constraint (3.68) on \(\bar{\lambda}_{a,d+1}\) is solved by taking
\[\bar{\lambda}_{3,d+1}=-n_{+}\bigg{(}\bar{\nu}_{M5}N_{2}+\frac{\bar{\lambda}_{ 1,d+1}}{n_{-}}\bigg{)}\,,\quad\bar{\lambda}_{4,d+1}=-\bigg{(}\bar{\nu}_{M5}N_{ 1}+\frac{k\,\bar{\lambda}_{1,d+1}}{n_{-}}+\bar{\lambda}_{2,d+1}\bigg{)}\,, \tag{3.91}\]
and we can choose a gauge in which
\[\bar{\lambda}_{2,d+1}=-\frac{1}{2}\bigg{(}\bar{\nu}_{M5}N_{1}+\frac{k\,\bar{ \lambda}_{1,d+1}}{n_{-}}\bigg{)}\,. \tag{3.92}\]
After using the remaining gauge freedom to fix \(\bar{\lambda}_{1,d+1}\), we are then left to extremize \(\mathbb{V}^{(3)}\) with respect to \(\epsilon_{1},\epsilon_{2},\bar{\lambda}_{d+1}\). One can show that the combination
\[k\frac{\partial\mathbb{V}^{(3)}}{\partial\epsilon_{1}}-2\frac{\partial\mathbb{ V}^{(3)}}{\partial\epsilon_{2}}=0 \tag{3.93}\]
implies \(\epsilon_{2}^{*}=0\), as expected from the \(SU(2)\) symmetry, while generically \(\epsilon_{1}^{*}\neq 0\). In particular, \(\epsilon_{1}^{*}\) is determined by solving a quartic equation, which takes about half a page to write, so we refrain from reporting it. The on-shell central charge can then be written in terms of the parameters \(N_{1},N_{2},k,n_{+},n_{-}\) and \(\epsilon_{1}^{*}\). For simplicity we shall present the results in three special cases, where the equations are qualitatively unchanged, but simpler to write.
Firstly, let us set \(k=0\). This leads to the direct product \(\mathbb{M}_{4}=S^{2}\times\mathbb{\Sigma}\) and in this case, defining
\[\chi\equiv\frac{n_{+}+n_{-}}{n_{+}n_{-}}\,,\qquad\mu\equiv\frac{n_{+}-n_{-}}{n _{+}+n_{-}}\,, \tag{3.94}\]
it is convenient to use the remaining gauge freedom to set
\[\bar{\lambda}_{1,d+1}=-\bar{\nu}_{M5}\frac{2(1-\mu)N_{2}-\mu\chi N_{1}}{2(1+ \mu)\chi}\,. \tag{3.95}\]
Upon extremizing we find that indeed \(\epsilon_{2}^{*}=0\) and
\[\bar{\lambda}_{d+1}^{*}=\frac{\bar{\nu}_{M5}\epsilon_{3}}{4\chi}\big{[}2( \chi N_{1}+2N_{2})\pm\mathfrak{s}_{1}^{1/2}\big{]}\,, \tag{3.96}\]
where we defined the quantity
\[\mathfrak{s}_{1}=N_{1}^{2}\chi^{2}(\mu\chi\tilde{\epsilon}_{1}^{*}-2)^{2}-2N_ {2}(\chi N_{1}+2N_{2})\big{(}(1-\mu)\chi\tilde{\epsilon}_{1}^{*}+2\big{)} \big{(}(1+\mu)\chi\tilde{\epsilon}_{1}^{*}-2\big{)}\,. \tag{3.97}\]
Here, \(\tilde{\epsilon}_{1}^{*}\) is the solution to the quartic equation
\[\begin{split} 3\big{[}& N_{1}^{2}\mu\chi^{2}(2-\mu\chi\tilde{\epsilon}_{1})+2N_{2}(\chi N_{1}+2N_{2})(2\mu+(1-\mu^{2})\chi\tilde{\epsilon}_{1})\big{]}\mathfrak{s}_{1}^{1/2}-3N_{1}^{3}\mu\chi^{3}(2-\mu\chi\tilde{\epsilon}_{1})^{2}\\ &-N_{2}(3\chi^{2}N_{1}^{2}+6\chi N_{1}N_{2}+8N_{2}^{2})\big{(}12\mu+4(1-3\mu^{2})\chi\tilde{\epsilon}_{1}-3\mu(1-\mu^{2})\chi^{2}\tilde{\epsilon}_{1}^{2}\big{)}=0\,,\end{split} \tag{3.98}\]
the critical value of \(\epsilon_{1}\) is given by \(\epsilon_{1}^{*}=\tilde{\epsilon}_{1}^{*}\epsilon_{3}\) and the on-shell central charge reads
\[\begin{split}\mathbb{V}^{(3)}&=\frac{\bar{\nu}_{M5 }^{3}\epsilon_{3}^{3}}{48\chi^{2}}\Big{\{}\mathfrak{s}_{1}^{3/2}-(2-\mu\chi \tilde{\epsilon}_{1}^{*})\big{[}N_{1}^{3}\chi^{3}(2-\mu\chi\tilde{\epsilon}_{1 }^{*})^{2}\\ &+N_{2}(3\chi^{2}N_{1}^{2}+6\chi N_{1}N_{2}+8N_{2}^{2})\big{(}2+(1 -\mu)\chi\tilde{\epsilon}_{1}^{*}\big{)}\big{(}2-(1+\mu)\chi\tilde{\epsilon}_ {1}^{*}\big{)}\big{]}\Big{\}}\,.\end{split} \tag{3.99}\]
Notice that setting \(n_{+}=n_{-}=1\) in the above expressions we get \(\epsilon_{1}^{*}=0\) and reproduce the expression (3.77) for the central charge of the \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) case.
Following the reasoning in [22], the total space \(M_{6}\) may also be viewed as an \(\mathbb{F}_{2}\) fibred over the spindle \(\mathbb{\Sigma}\), and we therefore interpret the corresponding putative AdS\({}_{5}\times M_{6}\) solution as arising from a stack of M5 branes at a \(\mathbb{C}^{2}/\mathbb{Z}_{2}\) singularity, further wrapped on the spindle \(\mathbb{\Sigma}\). It would be very interesting to reproduce the above central charge from an anomaly computation, or to construct the explicit AdS\({}_{5}\times M_{6}\) supergravity solution.
A second sub-case is obtained by setting \(n_{+}=n_{-}=1\), with \(k>0\), and corresponds to the Hirzebruch surfaces \(\mathbb{F}_{k}\). Using the remaining gauge freedom we can now set
\[\bar{\lambda}_{1,d+1}=-\bar{\nu}_{M5}\frac{(2-k)N_{2}}{4} \tag{3.100}\]
and we find that the remaining two extremization equations are solved by
\[\bar{\lambda}^{*}_{d+1}=\frac{\bar{\nu}_{M5}\epsilon_{3}}{4}\big{[}(2N_{1}+(2-k)N _{2})\pm\mathbf{s}_{2}^{1/2}\big{]}\,, \tag{3.101}\]
where
\[\mathbf{s}_{2}=4N_{1}^{2}-4N_{1}N_{2}(\hat{\epsilon}_{1}^{*}+1)(\hat{\epsilon} _{1}^{*}-1+k)-N_{2}^{2}(\hat{\epsilon}_{1}^{*}+1)\big{(}(4-2k-k^{2})\hat{ \epsilon}_{1}^{*}-4+2k-k^{2}\big{)} \tag{3.102}\]
and \(\hat{\epsilon}_{1}^{*}\) is the solution to the quartic equation
\[12N_{1}^{2} (2\hat{\epsilon}_{1}+k)-6N_{1}N_{2}\big{(}3k\hat{\epsilon}_{1}^{2} -2(2-2k-k^{2})\hat{\epsilon}_{1}-k(1-2k)\big{)}\] \[-N_{2}^{2}\big{(}3k(2-3k-k^{2})\hat{\epsilon}_{1}^{2}-2(8-6k+3k^{ 2}+3k^{3})\hat{\epsilon}_{1}-k(2-3k+3k^{2})\big{)}\] \[-3\big{[}2N_{1}(2\hat{\epsilon}_{1}+k)+N_{2}\big{(}(4-2k-k^{2}) \hat{\epsilon}_{1}-k^{2}\big{)}\big{]}\mathbf{s}_{2}^{1/2}=0\,. \tag{3.103}\]
The critical value of \(\epsilon_{1}\) is again given by \(\epsilon_{1}^{*}=\hat{\epsilon}_{1}^{*}\epsilon_{3}\) and the on-shell central charge reads
\[\mathbb{V}^{(3)} =\frac{\bar{\nu}_{M5}^{3}\epsilon_{3}^{3}}{24}\Big{\{}\mathbf{s}_ {2}^{3/2}-8N_{1}^{3}+12N_{1}^{2}N_{2}(\hat{\epsilon}_{1}^{*}+1)(\hat{\epsilon} _{1}^{*}-1+k) \tag{3.104}\] \[-6N_{1}N_{2}^{2}(\hat{\epsilon}_{1}^{*}+1)\big{(}k\hat{\epsilon} _{1}^{*}-(2-k-k^{2})\hat{\epsilon}_{1}^{*}+2-2k-k^{2}\big{)}\] \[-N_{2}^{3}(\hat{\epsilon}_{1}^{*}+1)\big{(}k(2-3k-k^{2})\hat{ \epsilon}_{1}^{*2}-2(4-2k+k^{3})\hat{\epsilon}_{1}^{*}+(8-6k+3k^{2}-k^{3}) \big{)}\Big{\}}\,.\]
Again, setting \(k=0\) in the above expressions we get \(\epsilon_{1}^{*}=0\) and reproduce the expression (3.77) for the central charge of the \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) case.
This result is manifestly not in agreement with the central charge given in eq. (3.79) of [24], where by construction \(\epsilon_{1}^{*}=\epsilon_{2}^{*}=0\). In fact, we can reproduce this result if we impose by hand that \(\epsilon_{1}^{*}=\epsilon_{2}^{*}=0\) so that
\[\mathbb{V}^{(3)}(\bar{\lambda}_{d+1})=\frac{\bar{\lambda}_{d+1}}{6}\big{[}8 \bar{\lambda}_{d+1}^{2}-6(2N_{1}+(2-k)N_{2})\bar{\nu}_{M5}\epsilon_{3}\bar{ \lambda}_{d+1}+3N_{2}(2N_{1}-k\,N_{2})\bar{\nu}_{M5}^{2}\epsilon_{3}^{2}\big{]} \tag{3.105}\]
and then extremizing this with respect to the remaining parameter \(\bar{\lambda}_{d+1}\) yields
\[\bar{\lambda}_{d+1}^{*}=\frac{2N_{1}+(2-k)N_{2}-\sqrt{4N_{1}^{2}+4(1-k)N_{1}N _{2}+(4-2k+k^{2})N_{2}^{2}}}{4}\,\bar{\nu}_{M5}\epsilon_{3}\,, \tag{3.106}\]
giving the on-shell central charge
\[\mathbb{V}^{(3)}(\bar{\lambda}_{d+1}^{*}) =\frac{\bar{\nu}_{M5}^{3}\epsilon_{3}^{3}}{24}\Big{[}\big{(}4N_{1 }^{2}+4(1-k)N_{1}N_{2}+(4-2k+k^{2})N_{2}^{2}\big{)}^{3/2} \tag{3.107}\] \[-(2N_{1}+(2-k)N_{2})\big{(}4N_{1}^{2}+2(1-2k)N_{1}N_{2}+(4-k+k^{2} )N_{2}^{2}\big{)}\Big{]}\,,\]
coinciding with the expression given in eq. (3.79) of [24]. This, however, does not correspond to a true extremum of \(\mathbb{V}^{(3)}\) and therefore it is unlikely that there exist corresponding supergravity solutions or dual SCFTs.
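Purely as an algebraic check of (3.106) and (3.107) (with \(\bar{\nu}_{M5}=\epsilon_{3}=1\) for simplicity), a minimal sympy sketch is:

```python
import sympy as sp

lam, N1, N2, k = sp.symbols('lam N1 N2 k', positive=True)
# free energy (3.105) with nu_M5 = eps_3 = 1
V = lam/6*(8*lam**2 - 6*(2*N1 + (2 - k)*N2)*lam + 3*N2*(2*N1 - k*N2))
R = sp.sqrt(4*N1**2 + 4*(1 - k)*N1*N2 + (4 - 2*k + k**2)*N2**2)
lam_star = (2*N1 + (2 - k)*N2 - R)/4                         # eq. (3.106)
V_star = sp.Rational(1, 24)*(
    R**3 - (2*N1 + (2 - k)*N2)*(4*N1**2 + 2*(1 - 2*k)*N1*N2 + (4 - k + k**2)*N2**2)
)                                                            # eq. (3.107)

assert sp.expand(sp.diff(V, lam).subs(lam, lam_star)) == 0   # (3.106) is a critical point of (3.105)
assert sp.expand(V.subs(lam, lam_star) - V_star) == 0        # and the critical value is (3.107)
```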
Finally, let us also present the results for the particular configuration of fluxes \(N_{1}=\frac{k}{2}N\), \(N_{2}=N\) implying that24
Footnote 24: We take \(k\) to be even in this case.
\[M_{a}=\bigg{(}\frac{k\,N}{2n_{-}},N,-\frac{k\,N}{2n_{+}},N\bigg{)}\, \tag{3.108}\]
without any assumption on \(k\) and \(n_{+},n_{-}\). In order to simplify the expressions, we make use of the remaining gauge freedom to set
\[\bar{\lambda}_{1,d+1}=-\bar{\nu}_{M5}\frac{((1-\mu)(4+k\mu\chi)-k\chi)N}{(1+\mu) \chi(4+k\mu\chi)}\,. \tag{3.109}\]
The extremization problem is then solved by
\[\bar{\lambda}_{d+1}^{*}=\frac{\bar{\nu}_{M5}\epsilon_{3}N}{2\chi\xi}\big{(}2 \xi\pm\mathfrak{s}_{3}^{1/2}\big{)}\,, \tag{3.110}\]
where we defined \(\xi=4+k\mu\chi\) and
\[\mathfrak{s}_{3}=4\xi^{2}-4\mu\chi\xi^{2}\hat{\epsilon}_{1}^{*}-\chi^{2}(8(2+ k\mu\chi)(1-\mu^{2})-(1-\mu^{2}+\mu^{4})k^{2}\chi^{2})\hat{\epsilon}_{1}^{*2}\,. \tag{3.111}\]
Here, \(\hat{\epsilon}_{1}^{*}\) is the solution to the quartic equation
\[\begin{split} 2\xi^{2}&(k\chi+6\mu\xi)+4\chi\xi^{2}(4-3 \mu^{2}\xi)\hat{\epsilon}_{1}-3\chi^{2}\Big{(}\frac{\xi^{2}(1-\mu^{2})(k\chi+2 \mu\xi)}{2}-k^{3}\chi^{3}\Big{)}\hat{\epsilon}_{1}^{2}\\ &-3\big{(}2\mu\xi^{2}+\chi(8(2+k\mu\chi)(1-\mu^{2})-(1-\mu^{2}+ \mu^{4})k^{2}\chi^{2})\hat{\epsilon}_{1}\big{)}\mathfrak{s}_{3}^{1/2}=0\,, \end{split} \tag{3.112}\]
and the critical value of \(\epsilon_{1}\) is \(\epsilon_{1}^{*}=\hat{\epsilon}_{1}^{*}\epsilon_{3}\). The central charge in terms of \(\hat{\epsilon}_{1}^{*}\) is given by
\[\begin{split}\mathbb{V}^{(3)}&=\frac{\bar{\nu}_{ M5}^{3}\epsilon_{3}^{3}N^{3}}{24\chi^{2}\xi^{2}}\Big{\{}\mathfrak{s}_{3}^{3/2} -\chi^{3}\Big{(}\frac{\xi^{2}(1-\mu^{2})(k\chi+2\mu\xi)}{2}-k^{3}\chi^{3} \Big{)}\hat{\epsilon}_{1}^{*3}\\ &+2\chi^{2}\xi^{2}(4-3\mu^{2}\xi)\hat{\epsilon}_{1}^{*2}+2\chi \xi^{2}(k\chi+6\mu\xi)\hat{\epsilon}_{1}^{*}-8\xi^{3}\Big{\}}\,.\end{split} \tag{3.113}\]
#### 3.2.4 General geometries
We now discuss the general AdS\({}_{5}\times M_{6}\) solution with no \(\mathbb{Z}_{2}\) symmetry. The fan (3.39) corresponds now to a compact geometry. The fan is the union of \(2d\) cones \((V^{a},V^{a+1},V^{d+1})\) and \((V^{a},V^{a+1},V^{d+2})\) corresponding to the fixed points of the torus action, that are specified by selecting a fixed point on \(\mathbb{M}_{4}\) and simultaneously the North or South pole of the fibre \(\mathbb{P}^{1}\).
The equivariant volume is now given by
\[\mathbb{V}=\sum_{a}\frac{\mathrm{e}^{-\lambda_{a}\epsilon_{1}^{a}-\lambda_{a+ 1}\epsilon_{2}^{a}-\lambda_{d+1}(\epsilon_{3}-\epsilon_{1}^{a}-\epsilon_{2}^{ a})+\dots}}{d_{a,a+1}\epsilon_{1}^{a}\epsilon_{2}^{a}(\epsilon_{3}- \epsilon_{1}^{a}-\epsilon_{2}^{a})}-\sum_{a}\frac{\mathrm{e}^{-\lambda_{a} \epsilon_{1}^{a}-\lambda_{a+1}\epsilon_{2}^{a}+\lambda_{d+2}(\epsilon_{3}- \epsilon_{1}^{a}-\epsilon_{2}^{a})+\dots}}{d_{a,a+1}\epsilon_{1}^{a}\epsilon_{2 }^{a}(\epsilon_{3}-\epsilon_{1}^{a}-\epsilon_{2}^{a})}\,, \tag{3.114}\]
where the dots in the exponents stand for the higher times.
This expression can also be written as an integral over \(\mathbb{M}_{4}\)
\[\mathds{V}(\lambda_{A},\lambda_{AB},\epsilon_{I})=\int_{\mathbb{M}_{4}}\frac{ \mathrm{e}^{\tau_{N}^{\mathbb{T}}}-\mathrm{e}^{\tau_{S}^{\mathbb{T}}}}{\epsilon _{3}+\sum_{a}c_{1}^{\mathbb{T}}(L_{a})}\:, \tag{3.115}\]
where we have defined the North pole equivariant form \(\tau_{N}^{\mathbb{T}}\) and South pole equivariant form \(\tau_{S}^{\mathbb{T}}\) as
\[\begin{split}&\tau_{N}^{\mathbb{T}}=\sum_{A}\lambda_{A}\,\mathcal{ C}_{A}^{N}+\sum_{A,B}\lambda_{AB}\,\mathcal{C}_{A}^{N}\:\mathcal{C}_{B}^{N}\:,\\ &\tau_{S}^{\mathbb{T}}=\sum_{A}\lambda_{A}\,\mathcal{C}_{A}^{S}+ \sum_{A,B}\lambda_{AB}\,\mathcal{C}_{A}^{S}\,\mathcal{C}_{B}^{S}\:,\\ &\mathcal{C}_{a}^{N}\:=\,\mathcal{C}_{a}^{S}\:=\,c_{1}^{\mathbb{ T}}(L_{a})\:,\qquad a=1,\ldots,d\:,\\ &\mathcal{C}_{d+2}^{S}\:=\,-\,\mathcal{C}_{d+1}^{N}\:=\,\epsilon _{3}+\sum_{a}c_{1}^{\mathbb{T}}(L_{a})\:,\\ &\mathcal{C}_{d+2}^{N}\:=\,\mathcal{C}_{d+1}^{S}\:=0\:.\end{split} \tag{3.116}\]
The flux equations are the following:
\[-\nu_{M5}M_{A}=\partial_{\lambda_{A}}\mathds{V}^{(2)}=\int_{\mathbb{M}_{4}} \frac{\mathcal{C}_{A}^{N}\,\tau_{N}^{\mathbb{T}}-\mathcal{C}_{A}^{S}\,\tau_{S }^{\mathbb{T}}}{\epsilon_{3}+\sum_{a}c_{1}^{\mathbb{T}}(L_{a})}\:. \tag{3.117}\]
For a generic fan, using the gauge transformations (2.14) and (2.15) we can set all \(\lambda_{a}=\lambda_{a,a}=\lambda_{a,a+1}=0\). However, as already mentioned, for special fans, including \(\mathbb{P}^{2}\) and \(\mathbb{P}^{1}\times\mathbb{P}^{1}\), one of the single times \(\lambda_{a}\) remains unfixed. In contrast to the \(\mathbb{Z}_{2}\) symmetric case, an arbitrary \(\lambda_{a}\) trivially solves the flux equations. Therefore we set \(\lambda_{a,a}=\lambda_{a,a+1}=0\) and keep \(\lambda_{a}\), with the understanding that the latter can be partially or totally gauge-fixed to zero. The forms \(\tau_{N}^{\mathbb{T}}\) and \(\tau_{S}^{\mathbb{T}}\) with all variables can then be written as
\[\begin{split}\tau_{N}^{\mathbb{T}}=&\sum_{a}\lambda _{a}\,c_{1}^{\mathbb{T}}(L_{a})+\,\Big{(}\epsilon_{3}+\sum_{a}c_{1}^{\mathbb{T }}(L_{a})\Big{)}\Big{(}\overline{\lambda}_{d+1}+\sum_{a}\overline{\lambda}_{a, d+1}\,c_{1}^{\mathbb{T}}(L_{a})\Big{)}\:,\\ \tau_{S}^{\mathbb{T}}=&\sum_{a}\lambda_{a}\,c_{1}^{ \mathbb{T}}(L_{a})\,-\Big{(}\epsilon_{3}+\sum_{a}c_{1}^{\mathbb{T}}(L_{a}) \Big{)}\Big{(}\overline{\lambda}_{d+2}+\sum_{a}\overline{\lambda}_{a,d+2}\,c _{1}^{\mathbb{T}}(L_{a})\Big{)}\:,\end{split} \tag{3.118}\]
where we have defined the \(\overline{\lambda}\) variables as
\[\begin{split}&\overline{\lambda}_{d+1}=\epsilon_{3}\lambda_{d+1,d+1} -\lambda_{d+1}\:,\qquad\quad\overline{\lambda}_{d+2}=-\epsilon_{3}\lambda_{d+ 2,d+2}-\lambda_{d+2}\:,\\ &\overline{\lambda}_{b,d+1}=\lambda_{d+1,d+1}-2\lambda_{b,d+1}\:, \qquad\quad\overline{\lambda}_{b,d+2}=-\lambda_{d+2,d+2}-2\lambda_{b,d+2}\:. \end{split} \tag{3.119}\]
Then equations (3.117) become
\[\begin{split}&-\nu_{M5}M_{a}=\sum_{b}D_{ab}(\bar{\lambda}_{b,d+1}+ \bar{\lambda}_{b,d+2})\:,\\ &-\nu_{M5}M_{d+1}=-\sum_{ab}D_{ab}\,\bar{\lambda}_{b,d+1}\:,\\ &-\nu_{M5}M_{d+2}=\sum_{ab}D_{ab}\,\bar{\lambda}_{b,d+2}\:.\end{split} \tag{3.120}\]
The expression for \(\mathds{V}^{(3)}\) is
\[\begin{split}\mathds{V}^{(3)}&=\frac{1}{6}\int_{ \mathbb{M}_{4}}\Big{(}\epsilon_{3}+\sum_{a}c_{1}^{\mathbb{T}}(L_{a})\Big{)}^{2} \Big{[}\big{(}\Lambda^{N}\big{)}^{3}+\big{(}\Lambda^{S}\big{)}^{3}\Big{]}\\ &+\frac{1}{2}\int_{\mathbb{M}_{4}}\Big{(}\sum_{a}\lambda_{a}\,c_{ 1}^{\mathbb{T}}(L_{a})\Big{)}\Big{(}\epsilon_{3}+\sum_{a}c_{1}^{\mathbb{T}}(L _{a})\Big{)}\Big{[}\big{(}\Lambda^{N}\big{)}^{2}-\big{(}\Lambda^{S}\big{)}^{2} \Big{]}\\ &+\frac{1}{2}\int_{\mathbb{M}_{4}}\Big{(}\sum_{a}\lambda_{a}\,c_{ 1}^{\mathbb{T}}(L_{a})\Big{)}^{2}\Big{[}\Lambda^{N}+\Lambda^{S}\Big{]}\:,\end{split} \tag{3.121}\]
where we defined
\[\Lambda^{N}=\overline{\lambda}_{d+1}+\sum_{a}\overline{\lambda}_{a,d+1}\,c_{ 1}^{\mathbb{T}}(L_{a})\:,\qquad\Lambda^{S}=\overline{\lambda}_{d+2}+\sum_{a} \overline{\lambda}_{a,d+2}\,c_{1}^{\mathbb{T}}(L_{a})\:. \tag{3.122}\]
The flux constraints are not enough to fix all the \(\overline{\lambda}\), so the idea is again to extremize \(\mathds{V}^{(3)}\) with respect to the remaining variables. It is convenient to define \(\overline{\lambda}_{b,+}\) and \(\overline{\lambda}_{b,-}\) as
\[\overline{\lambda}_{b,\pm}=\overline{\lambda}_{b,d+1}\pm\overline{\lambda}_{b, d+2}\:, \tag{3.123}\]
so that all the \(\overline{\lambda}_{b,+}\) are fixed (up to gauge transformations) by
\[-\nu_{M5}M_{a}=\sum_{b}D_{ab}\,\overline{\lambda}_{b,+}\:, \tag{3.124}\]
whereas the \(\overline{\lambda}_{b,-}\) are only subject to the following constraint:
\[\nu_{M5}(M_{d+1}+M_{d+2})=\,\sum_{ab}D_{ab}\,\overline{\lambda}_{b,-}\:. \tag{3.125}\]
The extremization conditions then are
\[0=\frac{\partial\mathds{V}^{(3)}}{\partial\lambda_{a}}=\frac{\partial\mathds{ V}^{(3)}}{\partial\overline{\lambda}_{d+1}}=\frac{\partial\mathds{V}^{(3)}}{ \partial\overline{\lambda}_{d+2}}=\sum_{a}\rho^{a}\,\frac{\partial\mathds{V}^ {(3)}}{\partial\overline{\lambda}_{a,-}}\:,\quad\forall\:\rho^{a}\text{ such that }\sum_{ab}D_{ab}\,\rho^{b}=0\:. \tag{3.126}\]
In general these equations do not look easy, but in the special case \(M_{d+1}+M_{d+2}=0\) there is a simple solution: we can set \(\lambda_{a}=0\), \(\overline{\lambda}_{d+1}=\overline{\lambda}_{d+2}\) and \(\overline{\lambda}_{b,d+1}=\overline{\lambda}_{b,d+2}\) so that the equations with \(\rho^{a}\) are trivially solved. The rest of the computation reduces to that of section 3.2.2 for \(\mathbb{Z}_{2}\) symmetric geometries. When \(M_{d+1}+M_{d+2}\neq 0\) this simple solution is not possible because the constraint (3.125) would not be satisfied.
#### 3.2.5 Examples of general geometries
In this section we consider again the examples based on \(\mathbb{P}^{2}\), \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and dP\({}_{3}\) and compare with the supergravity solutions in [20], where the additional parameter \(c\) is turned on. Since the solutions in [20] all correspond to a critical point at \(\epsilon_{1}=\epsilon_{2}=0\), for simplicity in this section we restrict again to configurations with this feature,
which as we discussed requires a special choice of fluxes for the case of dP\({}_{3}\), while it is automatic for generic fluxes for \(\mathbb{P}^{2}\) and \(\mathbb{P}^{1}\times\mathbb{P}^{1}\). The explicit value of the central charge has been written in [22, 23, 24].
The free energy (3.121) can be expanded in a sum of integrals of equivariant Chern classes. Since the multiple intersections (3.70) are homogeneous functions of degree \(p-2\) in \(\epsilon_{1}\) and \(\epsilon_{2}\), all the terms involving \(D_{a_{1},\dots a_{p}}\) with \(p>2\) in (3.121) vanish for \(\epsilon_{1}=\epsilon_{2}=0\), and the free energy simplifies to
\[\begin{split} 6\mathds{V}^{(3)}&=\bar{\lambda}_{d+1}^{3} \sum_{ab}D_{ab}+3\bar{\lambda}_{d+1}^{2}\sum_{ab}D_{ab}(2\epsilon_{3}\bar{ \lambda}_{a,d+1}+\lambda_{a})\\ &+3\bar{\lambda}_{d+1}\sum_{ab}D_{ab}(\epsilon_{3}\bar{\lambda}_{ a,d+1}+\lambda_{a})(\epsilon_{3}\bar{\lambda}_{b,d+1}+\lambda_{b})\\ &+\bar{\lambda}_{d+2}^{3}\sum_{ab}D_{ab}+3\bar{\lambda}_{d+2}^{2 }\sum_{ab}D_{ab}(2\epsilon_{3}\bar{\lambda}_{a,d+2}-\lambda_{a})\\ &+3\bar{\lambda}_{d+2}\sum_{ab}D_{ab}(\epsilon_{3}\bar{\lambda}_ {a,d+2}-\lambda_{a})(\epsilon_{3}\bar{\lambda}_{b,d+2}-\lambda_{b})\,.\end{split} \tag{3.127}\]
Using the flux constraints we can also write
\[\mathds{V}^{(3)} =\bar{\lambda}_{d+1}^{3}\sum_{ab}D_{ab}+3\bar{\lambda}_{d+1}^{2} \epsilon_{3}\nu_{M5}M_{d+1}+3\bar{\lambda}_{d+1}^{2}\sum_{ab}D_{ab}\lambda_{a }^{+}+3\bar{\lambda}_{d+1}\sum_{ab}D_{ab}\lambda_{a}^{+}\lambda_{b}^{+} \tag{3.128}\] \[+\bar{\lambda}_{d+2}^{3}\sum_{ab}D_{ab}-3\bar{\lambda}_{d+2}^{2} \epsilon_{3}\nu_{M5}M_{d+2}+3\bar{\lambda}_{d+2}^{2}\sum_{ab}D_{ab}\lambda_{a }^{-}+3\bar{\lambda}_{d+2}\sum_{ab}D_{ab}\lambda_{a}^{-}\lambda_{b}^{-}\,,\]
where
\[\lambda_{a}^{+}=\epsilon_{3}\bar{\lambda}_{a,d+1}+\lambda_{a}\,,\qquad\lambda_ {a}^{-}=\epsilon_{3}\bar{\lambda}_{a,d+2}-\lambda_{a}\,, \tag{3.129}\]
are constrained variables.
We consider first the general case (\(\mathbb{P}^{2}\), \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and dP\({}_{3}\)) with all fluxes associated to the fan of the KE\({}_{4}\) set equal to \(N\). The fan and intersection matrix are given in (3.72) and (3.73). There are, in principle, \(d-2\) independent fluxes on \(\mathbb{M}_{4}\) that we can turn on, but in the supergravity solution with KE metric they are equal and we first restrict to this case. The relations \(\sum_{A}V_{I}^{A}M_{A}=0\) require \(M_{A}=(N,\dots,N,N_{N},N_{S})\) with \(dN+N_{N}-N_{S}=0\) and we can parameterize \(N_{N}=M-\frac{d}{2}N\) and \(N_{S}=M+\frac{d}{2}N\), possibly allowing a half-integer \(M\). Given the symmetry of the problem, we take all \(\lambda_{a}\) to be equal, and similarly for the \(\bar{\lambda}_{a,d+1}\) and \(\bar{\lambda}_{a,d+2}\). The condition \(\sum_{abc}D_{abc}=0\) holds for these models, and therefore all linear terms in \(\epsilon_{1},\epsilon_{2}\) in \(\mathds{V}^{(3)}\) vanish, guaranteeing a critical point at \(\epsilon_{1}=\epsilon_{2}=0\). The flux conditions are solved by
\[\bar{\lambda}_{a,d+1}=\nu_{M5}\frac{\left(M-\frac{dN}{2}\right)}{dm_{k}}\,, \qquad\bar{\lambda}_{a,d+2}=-\nu_{M5}\frac{\left(M+\frac{dN}{2}\right)}{dm_{k} }\,, \tag{3.130}\]
where \(\sum_{ab}D_{ab}=dm_{k}\) so that \(m_{k}=3,2,1\) for \(\mathbb{P}^{2}\), \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and dP\({}_{3}\), respectively. Extremizing with respect to \(\lambda_{a}\) and defining \(\bar{\lambda}_{d+1}=\nu_{M5}(H+K)\) and \(\bar{\lambda}_{d+2}=\nu_{M5}(H-K)\) we find
\[6\nu_{M5}^{-3}\mathbb{V}^{(3)}=2dm_{k}H^{3}-\frac{3}{2}d\epsilon_{3}^{2}N^{2}\frac{K^{2}}{m_{k}H}-6d\epsilon_{3}NH^{2}+3\epsilon_{3}H(4KM+\frac{d}{2m_{k}}\epsilon_{3}N^{2})\,, \tag{3.131}\]
which after extremization gives25
Footnote 25: Recall that to compare with section 3.2.2 we need to use the rescaling (3.47).
\[\mathbb{V}^{(3)}=\frac{d^{2}\nu_{M5}^{3}\epsilon_{3}^{3}N^{4}}{12m_{k}^{2}(d^{ 2}N^{2}+12M^{2})^{2}}\left((3d^{2}N^{2}-12M^{2})^{3/2}-dN(5d^{2}N^{2}-36M^{2}) \right)\,. \tag{3.132}\]
An analogous formula for not necessarily toric KE manifolds has recently appeared in [24].
In the case \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) we can turn on two independent fluxes and have round metrics on the \(\mathbb{P}^{1}\)s. We take the general assignment of fluxes compatible with \(\sum_{A}V_{I}^{A}M_{A}=0\):
\[M_{A}=(N_{1},N_{2},N_{1},N_{2},N_{N},N_{S})\,, \tag{3.133}\]
where
\[2N_{1}+2N_{2}+N_{N}-N_{S}=0\]
and we can parameterize \(N_{N}=M-N_{1}-N_{2}\) and \(N_{S}=M+N_{1}+N_{2}\). Using the gauge transformations (2.14) we can also reduce to the case
\[\lambda_{a+2}=\lambda_{a}\,,\,\bar{\lambda}_{a+2,d+1}=\bar{\lambda}_{a,d+1}\,, \,\bar{\lambda}_{a+2,d+2}=\bar{\lambda}_{a,d+2}\,. \tag{3.134}\]
Notice that, in this gauge, all the linear terms in \(\epsilon_{1},\epsilon_{2}\) in \(\mathbb{V}^{(3)}\) vanish since, as one can check,
\[\sum_{abc}D_{abc}l_{a}^{(1)}l_{b}^{(2)}l_{c}^{(3)}=0 \tag{3.135}\]
provided the vectors \(l_{a}^{(k)}\) satisfy \(l_{a}^{(k)}=l_{a+2}^{(k)}\). We can solve the flux constraints
\[2\bar{\lambda}_{1,d+1}+2\bar{\lambda}_{1,d+2}+\nu_{M5}N_{2}=0\,, \quad 4\bar{\lambda}_{1,d+1}+4\bar{\lambda}_{2,d+1}+\nu_{M5}(-M+N_{1}+N_{2})=0\,,\] \[2\bar{\lambda}_{2,d+1}+2\bar{\lambda}_{2,d+2}+\nu_{M5}N_{1}=0\,, \quad 4\bar{\lambda}_{1,d+2}+4\bar{\lambda}_{2,d+2}+\nu_{M5}(M+N_{1}+N_{2})=0\,, \tag{3.136}\]
by
\[\begin{split}\bar{\lambda}_{2,d+1}&=-\bar{\lambda}_{1,d+1}+\frac{1}{4}\nu_{M5}(M-N_{1}-N_{2})\,,\\ \bar{\lambda}_{1,d+2}&=-\bar{\lambda}_{1,d+1}-\nu_{M5}\frac{N_{2}}{2}\,,\\ \bar{\lambda}_{2,d+2}&=\bar{\lambda}_{1,d+1}+\frac{1}{4}\nu_{M5}(-M-N_{1}+N_{2})\,.\end{split} \tag{3.137}\]
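One can verify directly that this ansatz solves (3.136) identically in \(\bar{\lambda}_{1,d+1}\); for instance, with sympy (setting \(\nu_{M5}=1\) and using `l1p`, `l2p`, `l1m`, `l2m` as shorthand for \(\bar{\lambda}_{1,d+1},\bar{\lambda}_{2,d+1},\bar{\lambda}_{1,d+2},\bar{\lambda}_{2,d+2}\)):

```python
import sympy as sp

l1p, N1, N2, M = sp.symbols('l1p N1 N2 M')   # l1p = bar-lambda_{1,d+1}; units with nu_M5 = 1
# solution (3.137)
l2p = -l1p + (M - N1 - N2)/4                 # bar-lambda_{2,d+1}
l1m = -l1p - N2/2                            # bar-lambda_{1,d+2}
l2m =  l1p + (-M - N1 + N2)/4                # bar-lambda_{2,d+2}
# flux constraints (3.136)
eqs = [2*l1p + 2*l1m + N2,
       4*l1p + 4*l2p + (-M + N1 + N2),
       2*l2p + 2*l2m + N1,
       4*l1m + 4*l2m + (M + N1 + N2)]
assert all(sp.expand(e) == 0 for e in eqs)
```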
Extremizing with respect to \(\lambda_{1,2}\) and \(\bar{\lambda}_{1,d+1}\) and defining \(\bar{\lambda}_{d+1}=\nu_{M5}(H+K)\) and \(\bar{\lambda}_{d+2}=\nu_{M5}(H-K)\) we find
\[6\nu_{M5}^{-3}\mathbb{V}^{(3)}=16H^{3}-3\epsilon_{3}^{2}N_{1}N_{2}\frac{K^{2} }{H}-12\epsilon_{3}(N_{1}+N_{2})H^{2}+3\epsilon_{3}H(4KM+\epsilon_{3}N_{1}N_{ 2})\,, \tag{3.138}\]
which after extremization gives
\[\begin{split}\mathds{V}^{(3)}&=\frac{\nu_{M5}^{3}\epsilon_{3}^{3}N_{1}^{2}N_{2}^{2}(4N_{1}^{2}+4N_{1}N_{2}+4N_{2}^{2}-3M^{2})^{3/2}}{6(4N_{1}N_{2}+3M^{2})^{2}}\\ &-\frac{\nu_{M5}^{3}\epsilon_{3}^{3}N_{1}^{2}N_{2}^{2}(N_{1}+N_{2})(8N_{1}^{2}+4N_{1}N_{2}+8N_{2}^{2}-9M^{2})}{6(4N_{1}N_{2}+3M^{2})^{2}}\,,\end{split} \tag{3.139}\]
reproducing (5.7) of [23].
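As an illustrative cross-check that the two-parameter extremization of (3.138) indeed lands on (3.139), one can run a quick numerical sympy computation (with an arbitrary choice of fluxes and \(\nu_{M5}=\epsilon_{3}=1\)):

```python
import sympy as sp

H, K = sp.symbols('H K', real=True)
N1, N2, M = 2, 3, 1                         # illustrative fluxes; nu_M5 = eps_3 = 1
f = 16*H**3 - 3*N1*N2*K**2/H - 12*(N1 + N2)*H**2 + 3*H*(4*K*M + N1*N2)   # 6*V^(3), eq. (3.138)
K_of_H = sp.solve(sp.diff(f, K), K)[0]      # eliminate K via dV/dK = 0
g = sp.expand(f.subs(K, K_of_H))
crit = sp.solve(sp.diff(g, H), H)           # remaining critical points in H
target = (N1**2*N2**2*(4*N1**2 + 4*N1*N2 + 4*N2**2 - 3*M**2)**sp.Rational(3, 2)
          - N1**2*N2**2*(N1 + N2)*(8*N1**2 + 4*N1*N2 + 8*N2**2 - 9*M**2)
          ) / (6*(4*N1*N2 + 3*M**2)**2)     # eq. (3.139)
assert any(abs(float(g.subs(H, h)/6 - target)) < 1e-10 for h in crit)
```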
Finally, let us mention that in the case of \(\mathrm{dP}_{3}\) we can turn on four independent fluxes along the base plus one additional flux \(M\), and the general extremization problem is intractable. It is possible to solve it for the two special configurations of fluxes with enhanced symmetry discussed previously. We leave this as an instructive exercise for the reader.
The case \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) has been interpreted in [22, 23] as a solution for M5 branes sitting at the orbifold \(\mathbb{C}^{2}/\mathbb{Z}_{2}\) wrapped over one of the \(\mathbb{P}^{1}\). The interpretation follows by deriving the central charge from an anomaly polynomial computation. It would be very interesting to understand if our general formula (3.121) can be written as the integral of the anomaly polynomial for some M5 brane theory wrapped over a two-cycle in \(\mathbb{M}_{4}\) and give a field theory interpretation of the solution.
### Comparison with other approaches
It is interesting to compare with the recent approach based on Killing spinor bilinears in M theory [13]. The technique consists in considering a set of equivariantly closed differential forms which can be constructed from Killing spinor bilinears. Three such forms have been explicitly constructed for \(\mathrm{AdS}_{5}\times M_{6}\) in [13] and for \(\mathrm{AdS}_{3}\times M_{8}\) in [24, 25]. Our results in sections 3.1 and 3.2 partially overlap with those in [24, 25] and it is interesting to compare the two methods. We will show that they are actually equivalent, when they can be compared, although in a non-trivial way.
For both cases, \(\mathrm{AdS}_{11-k}\times M_{k}\) with \(k=6,8\), the authors of [24, 25] define an equivariant \(k\)-form \(\Phi\) whose higher-degree component is the warped volume of \(M_{k}\) and the lowest component the third power of a special locally defined function \(y\). Up to coefficients, the integral of \(\Phi\) is the free energy, so we have
\[F=\int_{M_{k}}\Phi=\sum_{\alpha}\frac{y^{3}|_{\alpha}}{d_{\alpha}\,e^{\mathbb{ T}^{k/2}}|_{\alpha}}\,, \tag{3.140}\]
where \(\alpha\) are the fixed points of the geometry, and we recognize our expression for \(\mathds{V}^{(3)}\) for M theory solutions26
Footnote 26: We are omitting a \((-1)^{k/2}\) sign in the expression for \(\mathds{V}^{(3)}\). In this discussion we are ignoring all such overall numerical factors.
\[F=\mathds{V}^{(3)}=\sum_{\alpha}\frac{(\tau_{\alpha})^{3}}{d_{\alpha}\,e^{ \mathbb{T}^{k/2}}|_{\alpha}}\,, \tag{3.141}\]
upon identifying
\[y|_{\alpha}=\tau_{\alpha}\,. \tag{3.142}\]
There also exists an equivariant four-form \(\Phi^{F}\) whose higher-degree component is the M theory four-form and the lowest component the first power of the function \(y\). The flux quantization conditions then give
\[\begin{split} M_{AB}&=\int_{M_{8}}\Phi^{F}c_{1}(L_{A} )c_{1}(L_{B})=\sum_{\alpha}\frac{(c_{1}^{\mathbb{T}^{4}}(L_{A})c_{1}^{\mathbb{ T}^{4}}(L_{B})y)|_{\alpha}}{d_{\alpha}\,e^{\mathbb{T}^{4}}|_{\alpha}}\,,\\ M_{A}&=\int_{M_{6}}\Phi^{F}c_{1}(L_{A})=\sum_{\alpha} \frac{(c_{1}^{\mathbb{T}^{3}}(L_{A})y)|_{\alpha}}{d_{\alpha}\,e^{\mathbb{T}^{ 3}}|_{\alpha}}\,,\end{split} \tag{3.143}\]
for AdS\({}_{3}\times M_{8}\) and AdS\({}_{5}\times M_{6}\), respectively, and it is easy to see that these conditions are equivalent, up to coefficients, to our (3.4) and (3.45) with the same identification \(y|_{\alpha}=\tau_{\alpha}\).
Finally there exists another auxiliary form, a four-form \(\Phi^{*F}\) in AdS\({}_{3}\times M_{8}\) and a two-form \(\Phi^{Y}\) in AdS\({}_{5}\times M_{6}\), whose lowest component is the second power of the function \(y\).
Consider first the AdS\({}_{3}\times M_{8}\) solutions with wrapped M5 branes of section 3.1. The vanishing of the \(\Phi^{*F}\) flux along \(S^{4}\) is used in [24, 25] to enforce a \(\mathbb{Z}_{2}\) symmetry of the solution by identifying \(y^{N}|_{a}=-y^{S}|_{a}\), thus effectively cutting by half the number of fixed points. With the identification \(y|_{a}=\tau_{a}\), our construction in section 3.1 is then equivalent to the one in [24].
Consider next the AdS\({}_{5}\times M_{6}\) solutions of section 3.2. The approaches are complementary. While we consider toric orbifolds and the action of the full torus \(\mathbb{T}^{3}=U(1)^{3}\), the authors of [25] consider \(\mathbb{P}^{1}\) bundles over a smooth four-manifold \(B_{4}\) and assume that the R-symmetry vector has no legs along \(B_{4}\). Let us observe that this assumption can fail in general. For a generic \(B_{4}\) with abelian isometries there is no reason to expect that the R-symmetry does not mix with the isometries of \(B_{4}\), and a full-fledged computation considering the torus action on \(B_{4}\) is necessary. Also for the toric \(B_{4}=\mathrm{dP}_{3}\) with a generic choice of fluxes we expect a mixing with the isometries of \(B_{4}\), as discussed in section 3.2.3. Under this condition, the central charge given in [25] is not necessarily the extremum of the free energy. Obviously, whenever the two approaches can be compared and the assumption in [25] is satisfied, we find agreement.
From a technical point of view, this might be surprising. Recall indeed that the flux constraints do not completely fix the values of the times \(\tau_{a}=y|_{a}\). In our construction, we just extremize the free energy \(F\) with respect to the remaining parameters. In [24, 25] instead, in a case-by-case analysis, the auxiliary form \(\Phi^{Y}\) is used to find additional conditions to fix the \(y|_{\alpha}\). The two methods look superficially different, but we now show that they are effectively equivalent.
The extremization conditions with respect to the Kahler parameters that are not fixed by the flux constraints are written in (3.126). The first three conditions
\[\frac{\partial\mathds{V}^{(3)}}{\partial\lambda_{A}}=0\,,\qquad A=a,d+1,d+2\,, \tag{3.144}\]
can also be rewritten as
\[\sum_{B}V_{i}^{B}\frac{\partial\mathds{V}^{(3)}}{\partial\lambda_{BA}}(1+ \delta_{AB})=0\,,\qquad i=1,2\,,\qquad A=a,d+1,d+2\,. \tag{3.145}\]
Indeed
\[\begin{split}\sum_{B}& V_{i}^{B}\frac{\partial\mathds{V}}{\partial\lambda_{BA}}(1+\delta_{AB})=-2\int_{M_{6}}\mathrm{e}^{\tau^{\mathbb{T}^{3}}}c_{1}^{\mathbb{T}^{3}}(L_{A})\sum_{B}V_{i}^{B}c_{1}^{\mathbb{T}^{3}}(L_{B})\\ &=2\epsilon_{i}\int_{M_{6}}\mathrm{e}^{\tau^{\mathbb{T}^{3}}}c_{1}^{\mathbb{T}^{3}}(L_{A})=-2\epsilon_{i}\frac{\partial\mathds{V}}{\partial\lambda_{A}}\,,\end{split} \tag{3.146}\]
and taking the degree two component of this equation we see that all the conditions (3.145) collapse to the extremization of the free energy with respect to the parameter \(\lambda_{A}\).
The conditions (3.126) can be then written as
\[0=\sum_{b}v_{i}^{b}\frac{\partial\mathds{V}^{(3)}}{\partial\lambda_{b,a}}(1+ \delta_{ba})=\sum_{b}v_{i}^{b}\frac{\partial\mathds{V}^{(3)}}{\partial\overline {\lambda}_{b,d+1}}=\sum_{b}v_{i}^{b}\frac{\partial\mathds{V}^{(3)}}{\partial \overline{\lambda}_{b,d+2}}=\sum_{a}\rho^{a}\,\frac{\partial\mathds{V}^{(3)}} {\partial\overline{\lambda}_{a,-}}\,, \tag{3.147}\]
where \(\rho^{a}\) is such that \(\sum_{ab}D_{ab}\,\rho^{b}=0\). Now, the equations
\[\begin{split}(1+\delta_{bc})\frac{\partial\mathds{V}^{(3)}}{\partial\lambda_{b,c}}&=\sum_{a}(c_{1}^{\mathbb{T}^{3}}(L_{b})c_{1}^{\mathbb{T}^{3}}(L_{c}))|_{a}\frac{(\tau_{a}^{N})^{2}-(\tau_{a}^{S})^{2}}{d_{a,a+1}\epsilon_{1}^{a}\epsilon_{2}^{a}(\epsilon_{3}-\epsilon_{1}^{a}-\epsilon_{2}^{a})}\,,\\ \frac{\partial\mathds{V}^{(3)}}{\partial\lambda_{b,d+1}}&=\sum_{a}c_{1}^{\mathbb{T}^{3}}(L_{b})|_{a}\frac{(\tau_{a}^{N})^{2}}{d_{a,a+1}\epsilon_{1}^{a}\epsilon_{2}^{a}}\,,\\ \frac{\partial\mathds{V}^{(3)}}{\partial\lambda_{b,d+2}}&=\sum_{a}c_{1}^{\mathbb{T}^{3}}(L_{b})|_{a}\frac{(\tau_{a}^{S})^{2}}{d_{a,a+1}\epsilon_{1}^{a}\epsilon_{2}^{a}}\,,\end{split} \tag{3.148}\]
given the identification \(y|_{a}=\tau_{a}\) and the fact that \(\Phi^{Y}\) has lowest component \(y^{2}\), translate into the localization formulas for
\[\begin{split}\int_{\mathbb{P}^{1}_{bc}}\Phi^{Y}&= \int_{M_{6}}c_{1}(L_{c})c_{1}(L_{b})\Phi^{Y}\,,\\ \int_{D^{N}_{b}}\Phi^{Y}&=\int_{M_{6}}c_{1}(L_{d+1}) c_{1}(L_{b})\Phi^{Y}\,,\\ \int_{D^{S}_{b}}\Phi^{Y}&=\int_{M_{6}}c_{1}(L_{d+2}) c_{1}(L_{b})\Phi^{Y}\,,\end{split} \tag{3.149}\]
respectively, where \(\mathbb{P}^{1}_{bc}\) is the fibre taken at the fixed point \(D_{c}\cap D_{b}\) on the base (\(b=c\pm 1\) necessarily) and the \(D^{N,S}_{b}\) are the divisors on the base taken at the North and South pole of the fibre, respectively. The extremization constraints are then equivalent to the following co-homological relations
\[\begin{split} 0&=\sum_{b}v_{i}^{b}\int_{\mathbb{P}^{1}_{bc}} \Phi^{Y}=\sum_{b}v_{i}^{b}\int_{D^{N}_{b}}\Phi^{Y}=\sum_{b}v_{i}^{b}\int_{D^{S }_{b}}\Phi^{Y}\\ &=\sum_{a}\rho^{a}\left(\,\int_{D^{N}_{a}}\Phi^{Y}-\int_{D^{S}_{a }}\Phi^{Y}\right),\quad\forall\,\rho^{a}\text{ such that }\sum_{ab}D_{ab}\,\rho^{b}=0\,.\end{split} \tag{3.150}\]
The first three conditions are obvious: the cycles \(\sum_{b}v_{i}^{b}\,\mathbb{P}^{1}_{bc}\), \(\sum_{b}v_{i}^{b}\,D^{N,S}_{b}\) are trivial in homology. The last equation equates cycles sitting at the North and South pole. The corresponding fluxes of \(\Phi^{Y}\) do not need to be equal but they must be related. We know that \(c_{1}(L_{d+2})=c_{1}(L_{d+1})+\sum_{a}c_{1}(L_{a})\).27 Then
Footnote 27: The \(I=3\) condition of \(\sum_{A}V_{I}^{A}c_{1}(L_{A})=0\).
\[\begin{split}\sum_{a}\rho^{a}\left(\,\int_{D^{N}_{a}}\Phi^{Y}- \int_{D^{S}_{a}}\Phi^{Y}\right)&=\sum_{a}\rho^{a}\int_{M_{6}} \Big{(}c_{1}(L_{d+1})-c_{1}(L_{d+2})\Big{)}c_{1}(L_{a})\Phi^{Y}\\ &=-\sum_{ab}\rho^{a}\int_{M_{6}}c_{1}(L_{b})c_{1}(L_{a})\Phi^{Y }\propto\sum_{ab}\rho^{a}D_{ab}=0\,.\end{split} \tag{3.151}\]
The last step follows by expanding \(\Phi^{Y}\) in a sum of Chern classes, and by writing \(\int_{M_{6}}c_{1}(L_{b})c_{1}(L_{a})\Phi^{Y}\) as a sum of triple intersections \(D^{M_{6}}_{ABC}\) on \(M_{6}\). But \(D^{M_{6}}_{abc}=0\) and \(D^{M_{6}}_{d+1,a,b}=D^{M_{6}}_{d+2,a,b}=D_{ab}\).28
Footnote 28: The triple intersections on \(M_{6}\) are easily computed as \(D^{M_{6}}_{ABC}=\frac{\partial^{3}\mathds{V}^{(3)}}{\partial\lambda_{A}\partial\lambda_{B}\partial\lambda_{C}}\) from (3.114).
We see that our construction based on the equivariant volume naturally incorporates the localization approach of [13, 24, 25], with the advantage that all the geometrical constraints that must be imposed case-by-case in order to find the free energy in [13, 24, 25] appear naturally in our construction: they correspond to the extremization with respect to all parameters that remain after imposing the flux constraints. This avoids an analysis based on the specific topology of the background.
## 4 AdS\({}_{2}\), AdS\({}_{3}\) and AdS\({}_{4}\) solutions in type II supergravities
In this section we consider solutions in type II string theory with geometries that are fibrations over a four-dimensional orbifold \(\mathbb{M}_{4}\). We consider the case of massive type IIA solutions with D4 brane flux, corresponding to D4 branes wrapped over \(\mathbb{M}_{4}\) and the case of type IIB solutions with D3 brane flux. In all cases, we show that the free energy can be obtained by extremizing the appropriate term in the equivariant volume.
### AdS\({}_{2}\times M_{8}\) solutions in massive type IIA
In this section we turn our attention to D4 branes wrapped around a generic four-dimensional toric orbifold \(\mathbb{M}_{4}\)[14, 16, 17]. Specifically the brane system we study corresponds to AdS\({}_{2}\times M_{8}\) solutions in massive type IIA, where \(M_{8}\) is an \(S^{4}\) fibration over \(\mathbb{M}_{4}\). The geometry is similar to the case of M5 branes wrapped around \(\mathbb{M}_{4}\) considered in section 3.1 and we can borrow most of the computations. Here, due to the orientifold projection,29 the \(\mathbb{Z}_{2}\) projection used in section 3.1 is automatically implemented and there is only one set of fixed points, at one of the poles of \(S^{4}\). The geometry to consider is then a CY\({}_{4}\), a \(\mathbb{C}^{2}\) fibration over \(\mathbb{M}_{4}\) with toric fan generated by the vectors (3.1). As discussed in [2] and in the introduction, the prescription for D4 in massive type IIA is also similar to (3.4), with different degrees of homogeneity:
Footnote 29: The brane system is actually D4 in the presence of D8, which generate the cosmological constant, and an orientifold plane O8 that cuts \(S^{4}\) into half.
\[\nu_{D4}\,(2-\delta_{AB})\,M_{AB}=-\frac{\partial}{\partial\lambda_{AB}} \mathds{V}^{(3)}(\lambda_{A},\lambda_{AB},\epsilon_{I})\,,\qquad F=\mathds{V}^ {(5)}(\lambda_{A},\lambda_{AB},\epsilon_{I})\,. \tag{4.1}\]
The rest of the discussion is very similar to that of section 3.1. We can write the flux equations as
\[\nu_{D4}\,M_{AB}=-\int_{\mathbb{M}_{4}}\frac{\mathcal{C}_{A}\,\mathcal{C}_{B} \big{(}\tau^{\mathbb{T}}\big{)}^{2}}{2\,\mathcal{C}_{d+1}\,\mathcal{C}_{d+2}}= -\sum_{a}\frac{B_{a}^{(2)}\cdot\big{(}\mathcal{C}_{A}\,\mathcal{C}_{B}\big{)} |_{a}}{d_{a,a+1}\,\epsilon_{1}^{a}\,\epsilon_{2}^{a}}\,, \tag{4.2}\]
where the equivariant forms \(\mathcal{C}_{A}\), \(\tau^{\mathbb{T}}\) and the \(B_{a}^{(\alpha)}\) are defined respectively by (3.13) and (3.15). These equations are identical to the ones of section 3.1, with the only difference being that \(B_{a}^{(2)}\) takes the place of \(B_{a}^{(1)}\). The solution can be read from (3.20) and (3.24):
\[\begin{split}& B_{a}^{(2)}=-\nu_{D4}\,N\,,\\ & M_{AB}=N\sum_{c,d}\mathfrak{t}_{A}^{c}\,\mathfrak{t}_{B}^{d}\,D _{cd}\,,\end{split} \tag{4.3}\]
with \(\mathfrak{t}_{A}^{c}\) given by (3.25).
Following prescription (4.1), the solution (4.3) must be substituted in the expression for \(\mathds{V}^{(5)}\), which depends on the \(B_{a}^{(5)}\) as
\[\mathds{V}^{(5)}(\lambda_{A},\lambda_{AB},\epsilon_{I})=\sum_{a}\frac{B_{a}^{ (5)}}{d_{a,a+1}\,\epsilon_{1}^{a}\,\epsilon_{2}^{a}}\,. \tag{4.4}\]
The relation between \(B_{a}^{(5)}\) and \(B_{a}^{(2)}\) is given in equation (3.16), with the added complication that the exponents are half-integers and thus we need to be careful about the signs:
\[\begin{split}& B_{a}^{(5)}=\eta_{a}\,\frac{2^{\frac{3}{2}}}{5!} \left|\nu_{D4}\,N\right|^{\frac{5}{2}}\left|\big{(}\mathcal{C}_{d+1}\, \mathcal{C}_{d+2}\big{)}|_{a}\right|^{\frac{3}{2}},\\ &\eta_{a}=\text{sign}\left(\big{(}\mathcal{C}_{d+1}\,\mathcal{C} _{d+2}\big{)}|_{a}\right)\cdot\text{sign}\left(\tau_{a}\right).\end{split} \tag{4.5}\]
We note that the sign of \(\big{(}\mathcal{C}_{d+1}\,\mathcal{C}_{d+2}\big{)}|_{a}\) is the same as the sign of \(B_{a}^{(2)}\), and thus is fixed:
\[\text{sign}\,\Big{(}\big{(}\mathcal{C}_{d+1}\,\mathcal{C}_{d+2}\big{)}|_{a} \Big{)}=\text{sign}\,\Bigg{(}\frac{(\tau_{a})^{2}}{2\big{(}\mathcal{C}_{d+1}\, \mathcal{C}_{d+2}\big{)}|_{a}}\Bigg{)}=\text{sign}\,\big{(}-\nu_{D4}\,N\big{)} \equiv\,\sigma\:. \tag{4.6}\]
The sign of the \(\tau_{a}\) however is not fixed by (4.3). We can rewrite the equations \(B_{a}^{(2)}=-\nu_{D4}\,N\) as
\[\tau_{a}=\sigma\,\eta_{a}\,\sqrt{-2\nu_{D4}\,N\big{(}\mathcal{C}_{d+1}\, \mathcal{C}_{d+2}\big{)}|_{a}}\:. \tag{4.7}\]
It is always possible to find \(\lambda_{A}\) and \(\lambda_{AB}\) that solve these equations, whatever the value of \(\eta_{a}\) might be.
For the free energy we can write
\[F=\frac{2^{\frac{5}{2}}}{5!}\big{(}-\nu_{D4}N\big{)}^{\frac{5}{2}}\sum_{a} \frac{\eta_{a}\big{(}(\epsilon_{3}-\epsilon_{4}+(\mathfrak{t}_{a}-1)\epsilon_{ 1}^{a}+(\mathfrak{t}_{a+1}-1)\epsilon_{2}^{a})(\epsilon_{4}-\mathfrak{t}_{a} \epsilon_{1}^{a}-\mathfrak{t}_{a+1}\epsilon_{2}^{a})\big{)}^{\frac{3}{2}}}{d_{ a,a+1}\,\epsilon_{1}^{a}\,\epsilon_{2}^{a}}, \tag{4.8}\]
thus reproducing the extremal function in [14].30
Footnote 30: Compare formula (5.7) in [14] and set \(\varphi_{1}=\epsilon_{4}\,,\varphi_{2}=\epsilon_{3}-\epsilon_{4}\,,m_{a} \mathfrak{p}_{1}^{a}=\mathfrak{t}_{a}\,,m_{a}\mathfrak{p}_{2}^{a}=1-\mathfrak{ t}_{a}\,,W=0\), and set \(\epsilon_{3}=2\) for simplicity of comparison. Our result for the free energy then matches theirs (up to an overall sign due to different conventions) upon choosing \(\epsilon_{3}(-\nu_{D4})^{\frac{5}{2}}=\frac{16\pi}{\sqrt{8-N_{f}}}\). Notice that in [14] the vectors \(v^{a}\) are taken to be primitive, contrary to the conventions we are using in this paper. Our \(v^{a}\) are their \(\hat{v}^{a}\).
The sign ambiguities remain to be fixed by a more careful analysis. For a convex fan, supersymmetry is preserved with a topological twist and we expect that all the \(\eta_{a}\) have the same sign [14]. This could follow from a generalization of the following argument valid for the equivariant volume with single times only. The \(\lambda_{A}\) determine the polytope
\[\mathcal{P}=\{y_{I}\in\mathbb{R}^{4}\,|\,y_{I}V_{I}^{A}\geq\lambda_{A}\}\:. \tag{4.9}\]
Naturally \(\mathcal{P}\) must be non-empty, so let us take \(y_{I}\in\mathcal{P}\). If we contract the inequalities \(y_{I}V_{I}^{A}\geq\lambda_{A}\) with \(c_{1}^{\mathbb{T}^{4}}(L_{A})|_{a}\) we get
\[-y_{I}\epsilon_{I}\geq\tau_{a}\quad\forall a\in\{1,\ldots,d\}\:. \tag{4.10}\]
Given that \(\mathcal{P}\) is a resolved cone and that the equivariant volume is given by an integral over \(\mathcal{P}\)
\[\mathds{V}(\lambda_{A},\epsilon_{I})=\int_{\mathcal{P}}\mathrm{d}^{4}y\:\mathrm{ e}^{-y_{I}\epsilon_{I}}\:, \tag{4.11}\]
then the exponent \(-y_{I}\epsilon_{I}\geq\tau_{a}\) must be negative for convergence. This implies \(\tau_{a}\leq 0\), and thus \(\eta_{a}=-\sigma\). By choosing \(\sigma=1\) we would find the result of [14].
The case of anti-twist requires taking a non-convex fan for \(\mathbb{M}_{4}\). This can be obtained by _formally_ sending \(v^{a}\to\sigma^{a}v^{a}\) everywhere, implying \(\epsilon_{1}^{a}\to\sigma^{a}\epsilon_{1}^{a}\) and \(\epsilon_{2}^{a}\to\sigma^{a+1}\epsilon_{2}^{a}\). It was proposed in [14] that the correct assignment of signs is \(\eta_{a}=-\sigma^{a}\sigma^{a+1}\), and it would be interesting to understand this by a geometrical argument.
### AdS\({}_{4}\times M_{6}\) solutions in massive type IIA
In this section we consider AdS\({}_{4}\times M_{6}\) solutions of massive type IIA supergravity, which correspond to a system of D4 branes wrapped around a two-cycle inside a four-dimensional toric \(\mathbb{M}_{4}\), in the presence of D8 branes and an orientifold plane O8. Explicit solutions of this type have been found in [26], with \(M_{6}\) being a \(\mathbb{P}^{1}\) fibration over a four-dimensional manifold that is either Kahler-Einstein or a product of Riemann surfaces, cut in half by the O8 plane. The only toric manifolds that admit such metrics are \(\mathbb{P}^{2}\), \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and dP\({}_{3}\): these are the cases we will be focussing on.
More precisely, we consider a half-geometry modelled on a non-compact CY\({}_{3}\) corresponding to the canonical bundle over \(\mathbb{M}_{4}\), with fan given by
\[V^{a}=(v^{a},1)\,,\qquad V^{d+1}=(0,0,1)\,\,\,,\qquad a=1,\dots,d\,, \tag{4.12}\]
where \(v^{a}\) are the vectors of the fan of \(\mathbb{M}_{4}\). This fan has the same structure as the ones in sections 3.2.1 to 3.2.3, and for this reason the discussion in this section will share some similarities with the former. This half-geometry can accurately describe the solutions of [26] when the parameters \(\ell\) and \(\sigma\) are set to zero. We explain this point in more detail in appendix C, where we also compute the free energy of the solutions of [26] to be compared with the results of our approach.
Our prescription is the following:
\[\nu_{D4}\,M_{A}=-\frac{\partial}{\partial\lambda_{A}}\mathds{V}^{(3)}(\lambda _{A},\lambda_{AB},\epsilon_{I})\,,\qquad F=\mathds{V}^{(5)}(\lambda_{A},\lambda _{AB},\epsilon_{I})\,,\qquad\sum_{A}V_{I}^{A}M_{A}=0\,. \tag{4.13}\]
The higher times are needed in order to find solutions to the flux constraints. Similarly to the discussion of section 3.2, we will need to extremize the free energy with respect to any parameter that is not fixed by the flux constraints.
Given the high degree of symmetry of \(\mathbb{P}^{2}\), \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and dP\({}_{3}\) we expect a critical point at \(\epsilon_{1}=\epsilon_{2}=0\). Indeed, it can be verified with the same logic as in equation (3.74) that the linear terms in \(\epsilon_{i}\) in the expression of the free energy vanish and thus \(\epsilon_{1}=\epsilon_{2}=0\) is a critical point.
The flux equations are
\[\begin{split}(I)&\qquad\nu_{D4}M_{a}=-\frac{1}{2} \int_{\mathbb{M}_{4}}\frac{c_{1}^{\mathbb{T}}(L_{a})\left(\tau^{\mathbb{T}} \right)^{2}}{\epsilon_{3}+\sum_{b}c_{1}^{\mathbb{T}}(L_{b})}\,,\\ (II)&\qquad\nu_{D4}M_{d+1}=\frac{1}{2}\int_{\mathbb{ M}_{4}}\left(\tau^{\mathbb{T}}\right)^{2},\end{split} \tag{4.14}\]
where \(\tau^{\mathbb{T}}\) is defined as in (3.58). For generic values of \(\epsilon_{i}\) these equations are not independent: since \(\sum_{a}v_{i}^{a}M_{a}=0\) and \(\sum_{a}v_{i}^{a}c_{1}^{\mathbb{T}}(L_{a})=-\epsilon_{i}\), from \((I)\) we obtain
\[\epsilon_{i}\int_{\mathbb{M}_{4}}\frac{\left(\tau^{\mathbb{T}}\right)^{2}}{ \epsilon_{3}+\sum_{b}c_{1}^{\mathbb{T}}(L_{b})}=0\,. \tag{4.15}\]
When \(\epsilon_{1}\) and \(\epsilon_{2}\) are not both zero this is a non-trivial relation that we can use to write
\[\nu_{D4}\sum_{a}M_{a}=-\frac{1}{2}\int_{\mathbb{M}_{4}}\frac{\sum_{a}c_{1}^{ \mathbb{T}}(L_{a})\left(\tau^{\mathbb{T}}\right)^{2}}{\epsilon_{3}+\sum_{b}c_{1 }^{\mathbb{T}}(L_{b})}=-\frac{1}{2}\int_{\mathbb{M}_{4}}\!\left(\tau^{\mathbb{ T}}\right)^{2}, \tag{4.16}\]
which is equation \((II)\). Crucially, this argument fails when \(\epsilon_{1}=\epsilon_{2}=0\), which is the case we will be focussing on. As we will see, in this case equation \((II)\) becomes independent of \((I)\) and provides an additional constraint.
As already discussed in section 3.2.2, we have enough gauge freedom to set \(\lambda_{a,a}=\lambda_{a,a+1}=0\). For generic fans, it is also usually possible to gauge away the \(\lambda_{a}\), but this is not the case for the highly symmetric fans that we consider in this section. For the \(\mathbb{Z}_{2}\) symmetric solutions studied in section 3.2.2 it was always possible to find a critical point with \(\lambda_{a}=0\) regardless, as argued in appendix A. However the argument of appendix A cannot be repurposed for the type IIA solutions of this section and we are thus forced to keep the \(\lambda_{a}\). The equivariant form \(\tau^{\mathbb{T}}\) can then be parameterized as
\[\tau^{\mathbb{T}}=\sum_{a}\lambda_{a}c_{1}^{\mathbb{T}}(L_{a})+\Big{(}\epsilon _{3}+\sum_{a}c_{1}^{\mathbb{T}}(L_{a})\Big{)}\Big{(}\overline{\lambda}_{d+1}+ \sum_{b}\overline{\lambda}_{b,d+1}c_{1}^{\mathbb{T}}(L_{b})\Big{)}\,, \tag{4.17}\]
where
\[\overline{\lambda}_{d+1}=-\lambda_{d+1}+\lambda_{d+1,d+1}\epsilon_{3}\,,\qquad \overline{\lambda}_{a,d+1}=-2\lambda_{a,d+1}+\lambda_{d+1,d+1}\,. \tag{4.18}\]
Then for \(\epsilon_{1}=\epsilon_{2}=0\) the flux equations become
\[\begin{split}(I)&\qquad\nu_{D4}M_{a}=-\frac{ \overline{\lambda}_{d+1}}{2}\Big{(}\overline{\lambda}_{d+1}\sum_{b}D_{ab}+2 \sum_{b}D_{ab}\Lambda_{b}\Big{)}\,,\\ \sum_{a}(I)+(II)&\qquad 0=\sum_{ab}D_{ab}\Lambda_{a} \Lambda_{b}+2\epsilon_{3}\overline{\lambda}_{d+1}\sum_{ab}D_{ab}\overline{ \lambda}_{a,d+1}\,,\end{split} \tag{4.19}\]
where
\[\Lambda_{a}=\lambda_{a}+\epsilon_{3}\overline{\lambda}_{a,d+1}\,. \tag{4.20}\]
Notice that the second equation is not a consequence of the first, as we already anticipated. The free energy restricted to \(\epsilon_{1}=\epsilon_{2}=0\) is
\[F=\mathds{V}^{(5)}=\epsilon_{3}^{2}\Big{(}\frac{\overline{\lambda}_{d+1}^{5}} {20}\sum_{ab}D_{ab}+\frac{\overline{\lambda}_{d+1}^{4}}{24}\sum_{ab}D_{ab}(3 \Lambda_{a}+\epsilon_{3}\overline{\lambda}_{a,d+1})+\frac{\overline{\lambda}_ {d+1}^{3}}{12}\sum_{ab}D_{ab}\Lambda_{a}\Lambda_{b}\Big{)}\,. \tag{4.21}\]
We can eliminate \(\overline{\lambda}_{a,d+1}\) from the above expression by using the second flux constraint in (4.19) and find
\[\begin{split}& F=\epsilon_{3}^{2}\Big{(}\frac{\overline{\lambda}_ {d+1}^{5}}{20}\sum_{ab}D_{ab}+\frac{\overline{\lambda}_{d+1}^{4}}{8}\sum_{ab}D _{ab}\Lambda_{a}+\frac{\overline{\lambda}_{d+1}^{3}}{16}\sum_{ab}D_{ab}\Lambda _{a}\Lambda_{b}\Big{)}\,,\\ &\nu_{D4}M_{a}=-\frac{\overline{\lambda}_{d+1}}{2}\Big{(}\overline {\lambda}_{d+1}\sum_{b}D_{ab}+2\sum_{b}D_{ab}\Lambda_{b}\Big{)}\,.\end{split} \tag{4.22}\]
The flux constraints are not sufficient to fix all parameters: one parameter, say \(\overline{\lambda}_{d+1}\), remains undetermined. Our prescription is to extremize the free energy with respect to this leftover parameter.
Let us consider first the case of a Kahler-Einstein base manifold, with all the fluxes associated with two-cycles in the base equal, that is \(M_{a}=N\). The three cases of interest are then \(\mathbb{P}^{2}\), \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and \(\mathrm{dP}_{3}\). We define the integers \(M_{k}=\sum_{ab}D_{ab}\) and \(m_{k}=\sum_{b}D_{ab}\), which take values \(M_{k}=(9,8,6)\) and \(m_{k}=(3,2,1)\) for \(\mathbb{P}^{2}\), \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and \(\mathrm{dP}_{3}\) respectively. Since the fluxes \(M_{a}\) are all equal we can solve the flux equation by also setting all \(\Lambda_{a}\) equal to each other, giving us
\[\Lambda_{a}=-\frac{2N\nu_{D4}+m_{k}\overline{\lambda}_{d+1}^{2}}{2m_{k} \overline{\lambda}_{d+1}}\,. \tag{4.23}\]
The free energy as a function of \(\overline{\lambda}_{d+1}\) is then
\[F=\epsilon_{3}^{2}M_{k}\Big{(}\frac{\overline{\lambda}_{d+1}^{5}}{320}-\frac{ \nu_{D4}N\overline{\lambda}_{d+1}^{3}}{16m_{k}}+\frac{\nu_{D4}^{2}N^{2} \overline{\lambda}_{d+1}}{16m_{k}^{2}}\Big{)}\,, \tag{4.24}\]
and extremizing it we find four solutions:
\[\begin{split} F&=\pm\frac{1}{10}(3\sqrt{2}-4)\, \epsilon_{3}^{2}\left(\frac{\nu_{D4}N}{m_{k}}\right)^{5/2}M_{k}\,,\\ F&=\pm\frac{1}{10}(3\sqrt{2}+4)\,\epsilon_{3}^{2} \left(\frac{\nu_{D4}N}{m_{k}}\right)^{5/2}M_{k}\,.\end{split} \tag{4.25}\]
The first solution, with a plus sign, reproduces the free energy of the massive type IIA supergravity solutions of [26] upon setting31\(\epsilon_{3}^{2}\,\nu_{D4}^{\frac{5}{2}}=\frac{64\pi}{\sqrt{n_{0}}}\). The details about the computation of the free energy of the supergravity solutions are in appendix C.
Footnote 31: Notice that the numerical values of \(\nu_{D4}\) and \(\epsilon_{3}\) here are different from those of the corresponding quantities in the previous section.
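As a cross-check of the extremization just performed (again a sketch of ours, assuming Python with SymPy, with \(\nu\) standing for \(\nu_{D4}\)), one can build the free energy from (4.22) with the equal-flux solution (4.23), confirm that it reduces to (4.24), and extremize over the leftover parameter \(\overline{\lambda}_{d+1}\):

```python
import sympy as sp

lam, nu, N, mk, Mk, eps3 = sp.symbols('lambdabar nu N m_k M_k epsilon3', positive=True)

# equal-flux solution (4.23) of the first flux equation in (4.19)
Lam = -(2*N*nu + mk*lam**2)/(2*mk*lam)

# free energy (4.22) with sum_ab D_ab = M_k, sum_b D_ab = m_k and all Lambda_a equal
F = eps3**2*Mk*(lam**5/20 + lam**4*Lam/8 + lam**3*Lam**2/16)

# it reproduces (4.24) ...
F_424 = eps3**2*Mk*(lam**5/320 - nu*N*lam**3/(16*mk) + nu**2*N**2*lam/(16*mk**2))
assert sp.simplify(sp.expand(F) - F_424) == 0

# ... and extremizing over the leftover parameter lambdabar gives the values in (4.25)
for c in sp.solve(sp.diff(F_424, lam), lam):
    val = F_424.subs(lam, c)/(eps3**2*Mk*(nu*N/mk)**sp.Rational(5, 2))
    print(sp.simplify(val))   # expect +-(3*sqrt(2)-4)/10 and +-(3*sqrt(2)+4)/10
```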
Let us now consider the case of \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) with independent fluxes. In this case the metric on each \(\mathbb{P}^{1}\) factor is round, but the two radii are different. If we impose the condition \(\sum_{A}V_{I}^{A}M_{A}=0\) then the fluxes can be parameterized as follows:
\[M_{a}=(N_{1},N_{2},N_{1},N_{2})\,,\qquad M_{d+1}=-2(N_{1}+N_{2})\,. \tag{4.26}\]
The flux constraints can then be solved by setting
\[\Lambda_{1}=\Lambda_{3}=-\frac{\nu_{D4}N_{2}+\overline{\lambda}_{d+1}^{2}}{2 \overline{\lambda}_{d+1}}\,,\qquad\Lambda_{2}=\Lambda_{4}=-\frac{\nu_{D4}N_{1} +\overline{\lambda}_{d+1}^{2}}{2\overline{\lambda}_{d+1}}\,. \tag{4.27}\]
The free energy takes the form
\[F=\frac{\epsilon_{3}^{2}\overline{\lambda}_{d+1}}{40}\big{(}\overline{\lambda }_{d+1}^{4}-5\nu_{D4}(N_{1}+N_{2})\overline{\lambda}_{d+1}^{2}+5\nu_{D4}^{2}N_ {1}N_{2}\big{)}\,, \tag{4.28}\]
and extremizing it with respect to \(\overline{\lambda}_{d+1}\) yields four solutions:
\[\begin{split} F&=\pm\frac{\epsilon_{3}^{2}}{10}\big{(} \sqrt{8+\mathbf{z}^{2}}-(2+\mathbf{z}^{2})\big{)}\sqrt{3-\sqrt{8+\mathbf{z}^{2 }}}\,(\nu_{D4}N)^{5/2}\,,\\ F&=\pm\frac{\epsilon_{3}^{2}}{10}\big{(}\sqrt{8+ \mathbf{z}^{2}}+(2+\mathbf{z}^{2})\big{)}\sqrt{3+\sqrt{8+\mathbf{z}^{2}}}\,( \nu_{D4}N)^{5/2}\,,\end{split} \tag{4.29}\]
where for convenience we have introduced the parameterization
\[N_{1}=(1+\mathbf{z})N,\qquad N_{2}=(1-\mathbf{z})N,\qquad|\mathbf{z}|<1\,. \tag{4.30}\]
Once again the first solution, with a plus sign, reproduces the free energy of the supergravity solutions of [26] upon setting \(\epsilon_{3}^{2}\,\nu_{D4}^{\frac{5}{2}}=\frac{64\pi}{\sqrt{n_{0}}}\) (see appendix C for details).
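The same manipulation can be repeated for \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) with independent fluxes. The following SymPy sketch (ours; \(\nu\) again stands for \(\nu_{D4}\) and the intersection matrix is the standard one of \(\mathbb{P}^{1}\times\mathbb{P}^{1}\)) checks that (4.27) solves the flux constraints, that the free energy (4.22) reduces to (4.28), and evaluates the extrema to be compared with (4.29):

```python
import sympy as sp

lam, nu, N1, N2, N, z, eps3 = sp.symbols('lambdabar nu N_1 N_2 N z epsilon3', positive=True)

# intersection numbers of P^1 x P^1 (adjacent toric divisors meet once)
D = sp.Matrix([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])

# fluxes (4.26) and the corresponding solution (4.27) of the flux constraints
M = [N1, N2, N1, N2]
LamA = -(nu*N2 + lam**2)/(2*lam)
LamB = -(nu*N1 + lam**2)/(2*lam)
Lam = [LamA, LamB, LamA, LamB]

# check the first equation in (4.19)
for a in range(4):
    rhs = -(lam/2)*(lam*sum(D[a, b] for b in range(4))
                    + 2*sum(D[a, b]*Lam[b] for b in range(4)))
    assert sp.simplify(nu*M[a] - rhs) == 0

# free energy (4.22) and comparison with (4.28)
F = eps3**2*(lam**5/20*sum(D[a, b] for a in range(4) for b in range(4))
             + lam**4/8*sum(D[a, b]*Lam[a] for a in range(4) for b in range(4))
             + lam**3/16*sum(D[a, b]*Lam[a]*Lam[b] for a in range(4) for b in range(4)))
F_428 = eps3**2*lam/40*(lam**4 - 5*nu*(N1 + N2)*lam**2 + 5*nu**2*N1*N2)
assert sp.simplify(sp.expand(F) - F_428) == 0

# extremize in lambdabar with N1 = (1+z)N, N2 = (1-z)N and compare with (4.29)
Fz = F_428.subs({N1: (1 + z)*N, N2: (1 - z)*N})
for s in sp.solve(sp.diff(Fz, lam), lam):
    print(sp.simplify(Fz.subs(lam, s)))
```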
### AdS\({}_{3}\times M_{7}\) solutions in type IIB
In this section we consider AdS\({}_{3}\times M_{7}\) solutions in type IIB, where \(M_{7}\) is an \(S^{3}/\mathbb{Z}_{p}\) fibration over \(B_{4}\), which could potentially arise as the near-horizon limit of a system of D3 branes wrapped on a two-cycle in \(B_{4}\). Explicit solutions of this type have been found in [27, 28] for Kahler-Einstein \(B_{4}\) or products of Kahler-Einstein spaces. The case of _smooth_ Kahler \(B_{4}\) has been studied in [29] using the formalism of GK geometry and the GMS construction [8]. The orbifold case has not been considered in the literature as of yet, so in this section we take \(B_{4}\) to be a generic toric orbifold \(B_{4}\equiv\mathbb{M}_{4}\) and we also allow a general dependence on all the equivariant parameters, including those on the base \(\mathbb{M}_{4}\). As we already discussed, this is important to obtain the correct critical point for generic \(\mathbb{M}_{4}\) without particular symmetries, even in the smooth case. We also hope that our general formulas in terms of four-dimensional integrals will be useful to find a field theory interpretation of these solutions.
With odd-dimensional \(M_{7}\) we need to add one real dimension, the radial one, as is familiar in holography. The relevant CY\({}_{4}\) geometry is given by the fibration with \(\mathbb{M}_{4}\) as the base and the Kahler cone over the Lens space as the fibre, that is \(\mathbb{C}^{2}/\mathbb{Z}_{p}\hookrightarrow\text{CY}_{4}\to\mathbb{M}_{4}\). This CY\({}_{4}\) is toric and its fan is generated by the vectors
\[V^{a}=(v^{a},1,\mathfrak{t}_{a})\,,\qquad V^{d+1}=(0,0,1,0)\,,\qquad V^{d+2}=( 0,0,1,p)\,, \tag{4.31}\]
where as usual the vectors \(v^{a}\) generate the fan of \(\mathbb{M}_{4}\), \(a=1,\ldots,d\).
Our prescription here reduces to the GMS construction [29], namely
\[F=\mathds{V}^{(2)}(\lambda_{A},\epsilon_{I})\,,\qquad\mathds{V}^{(1)}(\lambda _{A},\epsilon_{I})=0\,,\qquad\nu_{D3}\,M_{A}=-\frac{\partial}{\partial\lambda_ {A}}\mathds{V}^{(2)}(\lambda_{A},\epsilon_{I})\,, \tag{4.32}\]
where
\[\sum_{A}V_{I}^{A}M_{A}=0\,. \tag{4.33}\]
Here \(\mathds{V}^{(2)}\) matches the "supersymmetric action" introduced in [8] and we know from [8] that there is no need to use higher times for these solutions. Notice that
the second equation in (4.32), which is a consequence of the third and of (4.33), is the "topological constraint" in [8].
When \(p=1\) the CY\({}_{4}\) matches exactly the one of sections 3.1 and 4.1. The equivariant volume is computed in the same manner, with only minor corrections. The one-to-one correspondence between the fixed points of CY\({}_{4}\) and \(\mathbb{M}_{4}\) given by \((V^{a},V^{a+1},V^{d+1},V^{d+2})\leftrightarrow(v^{a},v^{a+1})\) still holds, but the orders of the orbifold singularities now differ by a factor of \(p\):
\[d_{a,a+1,d+1,d+2}=p\,d_{a,a+1}\;. \tag{4.34}\]
The inward normals to the faces of the cone generated by \((V^{a},V^{a+1},V^{d+1},V^{d+2})\) are now given by
\[\begin{split}& U^{a}=(p\,u_{1}^{a},0,0)\,,\\ & U^{a+1}=(p\,u_{2}^{a},0,0)\,,\\ & U^{d+1}=\left((\mathfrak{t}_{a}-p)u_{1}^{a}+(\mathfrak{t}_{a+1} -p)u_{2}^{a}\,,p\,d_{a,a+1},-d_{a,a+1}\right),\\ & U^{d+2}=(-\mathfrak{t}_{a}u_{1}^{a}-\mathfrak{t}_{a+1}u_{2}^{a }\,,0,d_{a,a+1})\,.\end{split} \tag{4.35}\]
From these we can derive the restriction of the equivariant Chern forms of CY\({}_{4}\) to the fixed points
\[\begin{split}& c_{1}^{\mathbb{T}^{4}}(L_{b})|_{a}=-(\epsilon_{1} ^{a}\,\delta_{a,b}+\epsilon_{2}^{a}\,\delta_{a+1,b})\,,\\ & c_{1}^{\mathbb{T}^{4}}(L_{d+1})|_{a}=\frac{-(\mathfrak{t}_{a}-p )\epsilon_{1}^{a}-(\mathfrak{t}_{a+1}-p)\epsilon_{2}^{a}-p\,\epsilon_{3}+ \epsilon_{4}}{p}\,,\\ & c_{1}^{\mathbb{T}^{4}}(L_{d+2})|_{a}=\frac{\mathfrak{t}_{a} \epsilon_{1}^{a}+\mathfrak{t}_{a+1}\epsilon_{2}^{a}-\epsilon_{4}}{p}\,,\end{split} \tag{4.36}\]
and the respective restrictions to the base \(\mathbb{M}_{4}\)
\[\begin{split}\mathcal{C}_{a}&=\,c_{1}^{\mathbb{T}}( L_{a})\;,\qquad a=1,\ldots,d\,,\\ &\mathcal{C}_{d+1}&=\,\frac{-p\,\epsilon_{3}+ \epsilon_{4}+\sum_{a}(\mathfrak{t}_{a}-p)c_{1}^{\mathbb{T}}(L_{a})}{p}\,,\\ &\mathcal{C}_{d+2}&=\,\frac{-\epsilon_{4}-\sum_{a} \mathfrak{t}_{a}c_{1}^{\mathbb{T}}(L_{a})}{p}\,.\end{split} \tag{4.37}\]
It is easy to verify that these forms satisfy \(\sum_{a}v_{i}^{a}\,c_{1}^{\mathbb{T}}(L_{a})=-\epsilon_{i}\) and \(\sum_{A}V_{I}^{A}\,\mathcal{C}_{A}=-\epsilon_{I}\).
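For readers who wish to verify this statement explicitly, the following short SymPy sketch (ours) checks \(\sum_{A}V_{I}^{A}\,\mathcal{C}_{A}=-\epsilon_{I}\) at a generic fixed point, using the restrictions (4.36); its first two components reproduce \(\sum_{a}v_{i}^{a}\,c_{1}^{\mathbb{T}}(L_{a})=-\epsilon_{i}\). The symbols \(\epsilon_{1}^{a},\epsilon_{2}^{a}\) denote the equivariant parameters adapted to the fixed point, defined by \(\epsilon_{i}=v_{i}^{a}\epsilon_{1}^{a}+v_{i}^{a+1}\epsilon_{2}^{a}\):

```python
import sympy as sp

# data at a generic fixed point of M4, i.e. the cone spanned by v^a, v^{a+1}
p = sp.symbols('p', positive=True)
e1a, e2a, eps3, eps4, ta, tb = sp.symbols('epsilon1a epsilon2a epsilon3 epsilon4 t_a t_b')
va = sp.Matrix(sp.symbols('va1 va2'))    # v^a
vb = sp.Matrix(sp.symbols('vb1 vb2'))    # v^{a+1}

# restrictions (4.36): only L_a, L_{a+1}, L_{d+1}, L_{d+2} restrict non-trivially
c_a   = -e1a
c_b   = -e2a
c_dp1 = (-(ta - p)*e1a - (tb - p)*e2a - p*eps3 + eps4)/p
c_dp2 = (ta*e1a + tb*e2a - eps4)/p

# toric vectors (4.31): V^a = (v^a, 1, t_a), V^{d+1} = (0,0,1,0), V^{d+2} = (0,0,1,p)
Va, Vb = sp.Matrix([*va, 1, ta]), sp.Matrix([*vb, 1, tb])
Vdp1, Vdp2 = sp.Matrix([0, 0, 1, 0]), sp.Matrix([0, 0, 1, p])

# at the fixed point, eps_i = v^a_i eps1^a + v^{a+1}_i eps2^a
eps = sp.Matrix([va[0]*e1a + vb[0]*e2a, va[1]*e1a + vb[1]*e2a, eps3, eps4])

lhs = c_a*Va + c_b*Vb + c_dp1*Vdp1 + c_dp2*Vdp2   # restriction of sum_A V^A_I C_A
assert sp.simplify(lhs + eps) == sp.zeros(4, 1)
print('sum_A V^A_I C_A = -eps_I holds at a generic fixed point')
```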
The second degree homogeneous component of the equivariant volume can be written as an integral on the base \(\mathbb{M}_{4}\) as follows:
\[\mathbb{V}\,^{(2)}(\lambda_{A}\,,\epsilon_{I})=\int_{\mathbb{M}_{4}}\,\frac{( \tau^{\mathbb{T}})^{2}}{2\,p\,\mathcal{C}_{d+1}\,\mathcal{C}_{d+2}}\;,\qquad \tau^{\mathbb{T}}=\sum_{A}\lambda_{A}\,\mathcal{C}_{A}\;. \tag{4.38}\]
The flux constraints then read
\[-\nu_{D3}\,M_{A}=\partial_{\lambda_{A}}\mathbb{V}\,^{(2)}(\lambda_{A}\,, \epsilon_{I})=\int_{\mathbb{M}_{4}}\,\frac{\mathcal{C}_{A}\,\tau^{\mathbb{T} }}{p\,\mathcal{C}_{d+1}\,\mathcal{C}_{d+2}}=\sum_{a}\frac{B_{a}^{(1)}\cdot \left(\mathcal{C}_{A}\right)|_{a}}{d_{a,a+1}\,\epsilon_{1}^{a}\,\epsilon_{2}^{ a}}\;, \tag{4.39}\]
where \(B_{a}^{(1)}\) is the restriction to the \(a\)-th fixed point of the form
\[B^{(1)}\equiv\frac{\tau^{\mathbb{T}}}{p\,\mathcal{C}_{d+1}\,\mathcal{C}_{d+2}}\,. \tag{4.40}\]
The solution to equations (4.39) takes the following form:
\[\nu_{D3}^{-1}\,B_{a}^{(1)}=b(\epsilon_{I})-\sum_{b}m_{b}\,c_{1}^{\mathbb{T}}(L _{b})|_{a}\,, \tag{4.41}\]
where the \(m_{b}\) are such that \(M_{a}=\sum_{b}D_{ab}\,m_{b}\). Indeed, if we substitute this expression into the right-hand side of (4.39) for \(A\equiv b\in\{1,\ldots,d\}\) we obtain
\[\sum_{a}\frac{B_{a}^{(1)}\cdot\big{(}\mathcal{C}_{b}\big{)}|_{a}}{d_{a,a+1}\; \epsilon_{1}^{a}\;\epsilon_{2}^{a}}=\nu_{D3}\int_{\mathbb{M}_{4}}\Big{(}b( \epsilon_{I})-\sum_{a}m_{a}\,c_{1}^{\mathbb{T}}(L_{a})\Big{)}\,\mathcal{C}_{b }=-\nu_{D3}\sum_{a}D_{ab}\,m_{a}\,, \tag{4.42}\]
thus recovering the left-hand side of (4.39). When \(A=d+1,d+2\) from a similar computation we find that
\[M_{d+1}=\sum_{a}\frac{(\mathfrak{t}_{a}-p)M_{a}}{p}\,,\qquad M_{d+2}=-\sum_{a }\frac{\mathfrak{t}_{a}M_{a}}{p}\,, \tag{4.43}\]
which are precisely the values of \(M_{d+1}\) and \(M_{d+2}\) necessary to satisfy the relation expected from the fluxes, \(\sum_{A}V_{I}^{A}M_{A}=0\).
So far we have not specified the value of \(b(\epsilon_{I})\) in (4.41). Using the gauge invariance (2.15) we can fix three parameters \(\lambda_{A}\). Therefore only \(d-1\) of the restrictions of \(\tau^{\mathbb{T}}\) to the fixed points are independent, which translates into a relation among the \(B_{a}^{(1)}\) that we use to fix the value of \(b(\epsilon_{I})\). This can be seen by observing that \(\tau^{\mathbb{T}}\) is an equivariant two-form and thus its integral over \(\mathbb{M}_{4}\) vanishes, giving us
\[0=\int_{\mathbb{M}_{4}}\tau^{\mathbb{T}}=\int_{\mathbb{M}_{4}}p\,\mathcal{C}_{ d+1}\,\mathcal{C}_{d+2}B^{(1)}\,. \tag{4.44}\]
The value of \(b(\epsilon_{I})\) that satisfies this relation can then be written as
\[b(\epsilon_{I})=\frac{\int_{\mathbb{M}_{4}}\mathcal{C}_{d+1}\,\mathcal{C}_{d +2}\sum_{a}m_{a}\,c_{1}^{\mathbb{T}}(L_{a})}{\int_{\mathbb{M}_{4}}\mathcal{C} _{d+1}\,\mathcal{C}_{d+2}}\,. \tag{4.45}\]
We observe that the reason why we had to turn on the higher times in the cases studied in the previous sections was that \(d-1\) independent parameters were not enough to solve the flux constraints. In the case considered in this section, however, the \(d-1\) independent restrictions of \(\tau^{\mathbb{T}}=\sum_{A}\lambda_{A}\,\mathcal{C}_{A}\) are sufficient and there is no need to include higher times. Nonetheless, it is interesting to repeat the same computation of this section with the addition of higher times, which we report in appendix B.
The free energy is given by the second degree homogeneous component of the equivariant volume, which we write as
\[F=\mathbb{V}^{\,(2)}(\lambda_{A}\,,\epsilon_{I})=\int_{\mathbb{M}_{4}}\,\frac{( \tau^{\mathbb{T}})^{2}}{2\,p\,\mathcal{C}_{d+1}\,\mathcal{C}_{d+2}}=\sum_{a} \frac{B_{a}^{(2)}}{d_{a,a+1}\,\epsilon_{1}^{a}\,\epsilon_{2}^{a}}\,, \tag{4.46}\]
where the \(B_{a}^{(2)}\) are the restrictions of the integrand to each fixed point. The values of the Kahler moduli, and consequently of the \(B_{a}^{(2)}\), are fixed by the flux constraints. We can easily do this by employing the same strategy as in formula (3.16) to relate the \(B_{a}^{(2)}\) to the \(B_{a}^{(1)}\):
\[B_{a}^{(2)}\equiv\left(\frac{\left(\tau^{\mathbb{T}}\right)^{2} }{2\,p\,\mathcal{C}_{d+1}\,\mathcal{C}_{d+2}}\right)\Big{|}_{a}=\frac{p}{2} \left(B_{a}^{(1)}\right)^{2}\left(\mathcal{C}_{d+1}\,\mathcal{C}_{d+2}\right) \rvert_{a} \tag{4.47}\] \[=\frac{\nu_{D3}^{2}}{2p}\big{(}b(\epsilon_{I})+m_{a}\epsilon_{1} ^{a}+m_{a+1}\epsilon_{2}^{a}\big{)}^{2}\big{(}p\epsilon_{3}-\epsilon_{4}+( \mathfrak{t}_{a}-p)\epsilon_{1}^{a}+(\mathfrak{t}_{a+1}-p)\epsilon_{2}^{a} \big{)}\big{(}\epsilon_{4}-\mathfrak{t}_{a}\epsilon_{1}^{a}-\mathfrak{t}_{a+1} \epsilon_{2}^{a}\big{)}.\]
We can also write the free energy as an integral over \(\mathbb{M}_{4}\) as follows:
\[F=\frac{p}{2}\,\nu_{D3}^{2}\int_{\mathbb{M}_{4}}\Big{(}b(\epsilon_{I})-\sum_{a }m_{a}\,c_{1}^{\mathbb{T}}(L_{a})\Big{)}^{2}\,\mathcal{C}_{d+1}\,\mathcal{C}_{ d+2}\,. \tag{4.48}\]
Notice that the equation that fixes \(b(\epsilon_{I})\) can be written as
\[\int_{\mathbb{M}_{4}}\Big{(}b(\epsilon_{I})-\sum_{a}m_{a}\,c_{1}^{\mathbb{T}} (L_{a})\Big{)}\,\mathcal{C}_{d+1}\,\mathcal{C}_{d+2}=0 \tag{4.49}\]
and can be used to further rewrite the free energy as
\[F=-\frac{p}{2}\,\nu_{D3}^{2}\int_{\mathbb{M}_{4}}\sum_{a}m_{a}\,c_{1}^{ \mathbb{T}}(L_{a})\Big{(}b(\epsilon_{I})-\sum_{a}m_{a}\,c_{1}^{\mathbb{T}}(L_ {a})\Big{)}\,\mathcal{C}_{d+1}\,\mathcal{C}_{d+2}\,. \tag{4.50}\]
It would be very interesting to understand if our formulas can be written as the integral of the anomaly polynomial for some D3 brane theory wrapped over a two-cycle in \(\mathbb{M}_{4}\), thus providing a field theory interpretation of the solution.
#### 4.3.1 Examples: Kahler-Einstein and Hirzebruch surfaces
We can check that our general formalism reproduces the known expressions for the toric cases \(\mathbb{P}^{2}\), \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and \(\mathrm{dP}_{3}\) with equal fluxes. The fan and intersection matrix are given in (3.72) and (3.73). We take all the \(M_{a}\equiv M\), \(\mathfrak{t}_{a}\equiv\mathfrak{t}\) and \(m_{a}\equiv m\) equal. We find \(\sum_{ab}D_{ab}=dm_{k}\) and \(M=mm_{k}\) with \(m_{k}=3,2,1\) for \(\mathbb{P}^{2}\), \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and \(\mathrm{dP}_{3}\), respectively. Since \(\sum_{abc}D_{abc}=0\), there is no linear term in \(\epsilon_{1,2}\) in \(\mathds{V}^{(2)}\), which is therefore extremized at \(\epsilon_{1,2}=0\). By expanding (4.49) in integrals of Chern classes we find
\[b=\frac{m[(p\epsilon_{3}-\epsilon_{4})\mathfrak{t}+\epsilon_{4}(p-\mathfrak{t })]}{\mathfrak{t}(p-\mathfrak{t})} \tag{4.51}\]
\[\mathds{V}^{(2)}=-\nu_{D3}^{2}\frac{dM^{2}[\epsilon_{3}\epsilon_{4}p(p-3\mathfrak{ t})\mathfrak{t}+\epsilon_{3}^{2}p^{2}\mathfrak{t}^{2}+\epsilon_{4}^{2}(p^{2}-3p \mathfrak{t}+3\mathfrak{t}^{2})]}{2pm_{k}(p-\mathfrak{t})\mathfrak{t}}\,, \tag{4.52}\]
which reproduces formula (5.6) in [29] with \(\epsilon_{4}=\epsilon_{3}b_{2}/2\).\({}^{32}\) This still needs to be extremized with respect to \(b_{2}\).
Footnote 32: In [29]\(N\) is the flux of the five-cycle fibred over \(c_{1}/m_{k}\) in \(\mathbb{M}_{4}\). To compare the formulas we need to identify \(M=\frac{m_{k}N}{d}\) and \(\mathfrak{t}=\frac{m_{2}}{m_{k}}\), which follows from (5.5). The formulas match for \(\epsilon_{3}\nu_{D3}=2\sqrt{6}\).
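The expansion leading to (4.51) and (4.52) can be reproduced with a few lines of computer algebra. The sketch below (ours, assuming Python with SymPy) treats \(S=\sum_{a}c_{1}(L_{a})\) as a formal variable and implements the integration rule valid at \(\epsilon_{1}=\epsilon_{2}=0\), where only \(\int c_{1}(L_{a})c_{1}(L_{b})=D_{ab}\) survives (cf. (A.11)), with \(I_{2}=\sum_{ab}D_{ab}=d\,m_{k}\):

```python
import sympy as sp

p, t, m, nu, eps3, eps4, I2 = sp.symbols('p t m nu epsilon3 epsilon4 I_2')
S, b = sp.symbols('S b')                     # S stands for sum_a c_1(L_a)

# restrictions (4.37) to the base with all t_a = t, at eps1 = eps2 = 0
C_dp1 = (-p*eps3 + eps4 + (t - p)*S)/p
C_dp2 = (-eps4 - t*S)/p

def integral_M4(expr):
    # only the term quadratic in S survives the integral and gives I_2 = sum_ab D_ab
    return sp.expand(expr).coeff(S, 2)*I2

# fix b from (4.49) and compare with (4.51)
b_sol = sp.solve(integral_M4((b - m*S)*C_dp1*C_dp2), b)[0]
assert sp.simplify(b_sol - m*((p*eps3 - eps4)*t + eps4*(p - t))/(t*(p - t))) == 0

# free energy (4.50) and comparison with (4.52), written with I_2 = d*m_k and M = m*m_k
F = -sp.Rational(1, 2)*p*nu**2*integral_M4(m*S*(b_sol - m*S)*C_dp1*C_dp2)
F_452 = -nu**2*I2*m**2*(eps3*eps4*p*(p - 3*t)*t + eps3**2*p**2*t**2
                        + eps4**2*(p**2 - 3*p*t + 3*t**2))/(2*p*(p - t)*t)
assert sp.simplify(F - F_452) == 0
print('(4.51) and (4.52) reproduced')
```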
As we already discussed in section 3.2, the critical point is generically at a non-zero value of \(\epsilon_{1}\) and \(\epsilon_{2}\), unless there is some extra symmetry in the background and the fluxes. As an example where the critical point is not at \(\epsilon_{1}=\epsilon_{2}=0\) we consider the case of the Hirzebruch surface \(\mathbb{M}_{4}=\mathbb{F}_{k}\) with fan
\[v^{1}=(1,0)\,,\qquad v^{2}=(-k,1)\,,\qquad v^{3}=(-1,0)\,,\qquad v^{4}=(0,-1)\,. \tag{4.53}\]
The constraint \(\sum_{A}V_{I}^{A}M_{A}=0\) leaves two independent fluxes on the base \(\mathbb{M}_{4}\) and two fluxes associated with the fibre
\[\begin{split} M_{3}&=M_{1}-k\,M_{2}\,,\qquad M_{4} =M_{2}\,,\\ M_{5}&=\frac{M_{1}(\mathfrak{t}_{1}+\mathfrak{t}_{ 3}-2p)+M_{2}(pk-2p+\mathfrak{t}_{2}+\mathfrak{t}_{4}-k\mathfrak{t}_{3})}{p} \,,\\ M_{6}&=-\frac{M_{1}(\mathfrak{t}_{1}+\mathfrak{t}_{ 3})+M_{2}(\mathfrak{t}_{2}+\mathfrak{t}_{4}-k\mathfrak{t}_{3})}{p}\,.\end{split} \tag{4.54}\]
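It is straightforward to confirm that this parameterization solves the flux condition; the SymPy sketch below (ours) simply checks that the fluxes (4.54), together with \(M_{1}\) and \(M_{2}\), satisfy \(\sum_{A}V_{I}^{A}M_{A}=0\) for the toric data (4.31) built on the fan (4.53):

```python
import sympy as sp

k, p, M1, M2, t1, t2, t3, t4 = sp.symbols('k p M_1 M_2 t_1 t_2 t_3 t_4')

# fan of F_k, eq. (4.53), completed to the CY_4 vectors of eq. (4.31)
v = [(1, 0), (-k, 1), (-1, 0), (0, -1)]
ts = [t1, t2, t3, t4]
V = [sp.Matrix([vi[0], vi[1], 1, ti]) for vi, ti in zip(v, ts)]
V += [sp.Matrix([0, 0, 1, 0]), sp.Matrix([0, 0, 1, p])]     # V^{d+1}, V^{d+2}

# fluxes (4.54)
M = [M1, M2,
     M1 - k*M2,
     M2,
     (M1*(t1 + t3 - 2*p) + M2*(p*k - 2*p + t2 + t4 - k*t3))/p,
     -(M1*(t1 + t3) + M2*(t2 + t4 - k*t3))/p]

constraint = sum((Mi*Vi for Mi, Vi in zip(M, V)), sp.zeros(4, 1))  # sum_A V^A_I M_A
assert sp.simplify(constraint) == sp.zeros(4, 1)
print('the fluxes (4.54) satisfy sum_A V^A_I M_A = 0')
```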
The vectors of the fan and the fluxes have a symmetry between the second and fourth entry, and therefore we expect that one of the \(\epsilon_{i}\) will be zero at the critical point. Notice also that the physical fluxes depend only on two linear combinations of the \(\mathfrak{t}_{a}\). These are the combinations invariant under
\[\mathfrak{t}_{a}\to\mathfrak{t}_{a}+\sum_{i=1}^{2}\beta_{i}v_{i}^{a}\,. \tag{4.55}\]
In the free energy (4.50) this transformation can be reabsorbed in a redefinition of \(\epsilon_{4}\) using \(\sum_{a}v_{i}^{a}c_{1}^{\mathbb{T}}(L_{a})=-\epsilon_{i}\), and therefore the central charge depends only on the physical fluxes. We also solve \(M_{a}=\sum_{b}D_{ab}m_{b}\), for example, by \(m_{a}=(0,0,M_{2},M_{1})\).\({}^{33}\) The constraint (4.49) and the free energy (4.50) can be expanded in a series of integrals of Chern classes and expressed in terms of the intersections \(D_{a_{1}\ldots a_{k}}\), which are homogeneous of order \(k\) in \(\epsilon_{i}\). We see then that \(b\) and the free energy are homogeneous of degree one and two in all the \(\epsilon_{I}\), respectively. One can check explicitly that \(F\) is extremized at \(\epsilon_{2}=0\). The expressions are too lengthy to be reported, so,
for simplicity, we restrict to the case \(M_{2}=M_{1}\). We also fix \(\mathfrak{t}_{a}=(\mathfrak{t}_{1},\mathfrak{t}_{2},\mathfrak{t}_{1},\mathfrak{t} _{2})\) using (4.55) for convenience. The free energy restricted to \(\epsilon_{2}=0\) reads
\[F=-\frac{\nu_{D3}^{2}M_{1}^{2}\mathcal{A}}{8p(p(\mathfrak{t}_{1}+\mathfrak{t}_{ 2})-2\mathfrak{t}_{1}\mathfrak{t}_{2})}\,, \tag{4.56}\]
where
\[\mathcal{A}=\epsilon_{3}^{2}p^{2}[(k-2)\mathfrak{t}_{1}-2\mathfrak {t}_{2}]^{2}+2\epsilon_{3}\epsilon_{4}p[(k-2)^{2}(p-2\mathfrak{t}_{1}) \mathfrak{t}_{1}+4(p+(k-2)\mathfrak{t}_{1})\mathfrak{t}_{2}-8\mathfrak{t}_{2} ^{2}] \tag{4.57}\] \[+\epsilon_{4}^{2}[(k-4)^{2}p^{2}-4(k-3)(k-2)p\mathfrak{t}_{1}+4(k -2)^{2}\mathfrak{t}_{1}^{2}+4(k-6)p\mathfrak{t}_{2}-8(k-2)\mathfrak{t}_{1} \mathfrak{t}_{2}+16\mathfrak{t}_{2}^{2}]\] \[+4\epsilon_{1}k[\epsilon_{3}p\mathfrak{t}_{1}((k-1)p\mathfrak{t} _{1}-(k-2)\mathfrak{t}_{1}^{2}-p\mathfrak{t}_{2})+\epsilon_{4}(-3(k-2)p \mathfrak{t}_{1}^{2}+2(k-2)\mathfrak{t}_{1}^{3}+p^{2}((k-3)\mathfrak{t}_{1}+ \mathfrak{t}_{2}))]\] \[+4\epsilon_{1}^{2}[p^{2}((k^{2}+k-3)\mathfrak{t}_{1}^{2}+(k-4) \mathfrak{t}_{1}\mathfrak{t}_{2}-\mathfrak{t}_{2}^{2})+p\mathfrak{t}_{1}(-(2k ^{2}+k-2)\mathfrak{t}_{1}^{2}+(10-3k)\mathfrak{t}_{1}\mathfrak{t}_{2}+4 \mathfrak{t}_{2}^{2})]\] \[+\epsilon_{1}^{2}\mathfrak{t}_{1}^{2}[k^{2}\mathfrak{t}_{1}^{2}+( 2k-4)\mathfrak{t}_{1}\mathfrak{t}_{2}-4\mathfrak{t}_{2}^{2}]\,,\]
which should still be extremized with respect to \(\epsilon_{1}\) and \(\epsilon_{4}\). One easily sees that the critical point is at a non-zero value of \(\epsilon_{1}\). This rectifies a result given in [29], where it was assumed that the R-symmetry does not mix with the isometries of \(\mathbb{F}_{k}\). The expression (4.56) for \(k=0\) is extremized at \(\epsilon_{1}=0\) and it correctly reduces to the \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) result (4.52) upon setting \(\epsilon_{1}=0\) and \(\mathfrak{t}_{1}=\mathfrak{t}_{2}\).
## 5 Conclusions
In this paper we have refined the proposal of [2], that the geometry of an extensive class of supersymmetric solutions is captured by a universal quantity, depending only on the topology of the internal space and equivariant parameters associated with the expected symmetries of the solutions. This quantity is an extension of the equivariant volume, familiar from symplectic geometry, where we have introduced additional moduli dubbed higher times, which are necessary to parameterize all the fluxes supported by a given topology. Although we have assumed from the outset that the spaces of interest are toric, we have indicated that this assumption may be relaxed by considering for example "non-convex" geometries as well as configurations including a four-sphere, that are not toric geometries in the strict mathematical sense. It is also possible to extend our construction to geometries with a number of expected abelian symmetries which is strictly less than half of the real dimension34 of the manifold/orbifold (or cone over it). It is well known that in many situations the metric on the internal space (or the cone over it, in the odd-dimensional case) solving the supersymmetry equations may not be compatible with a Kahler or even symplectic structure. Nevertheless, the equivariant volume is a robust topological quantity, insensitive to the details of the metrics. Indeed, it may be regarded as a gravitational analogue of anomalies in quantum field theory. In all cases that we
have analysed, we extract an extremal function from the equivariant volume and our prescription for fixing the parameters on which it depends consists of extremizing over _all_ the parameters that are left undetermined by the flux quantization conditions. This is consistent with the logic in the case of GK geometry [7] and indeed it is analogous to the paradigm of \(a\)-maximization in field theory [5]. Geometrically, the existence of critical points to the various extremal functions that we proposed may be interpreted as providing necessary conditions to the existence of the corresponding supergravity solutions and indeed it would be very interesting to study when such conditions are also sufficient. In any case, if we assume that a solution exists, then our method calculates the relevant observables, yielding non-trivial predictions for the holographically dual field theories. It is worth emphasizing that in the procedure of extremization one should allow all the equivariant parameters not fixed by symmetries to vary, otherwise it is not guaranteed that the critical point found will be a true extremum of the gravitational action. We have demonstrated this point in a number of explicit examples discussed in section 3.2.3 as well as section 4.3.1.
In this work we have focussed on setups involving internal geometries that are fibrations over four-dimensional orbifolds \(\mathbb{M}_{4}\), that may be interpreted as arising from branes wrapping completely or partially \(\mathbb{M}_{4}\). For example, the case of M5 branes completely wrapped on \(\mathbb{M}_{4}\) yields a proof of the gravitational block form of the trial central charge, conjectured in [14] (and derived in the field theory side in [2]). The case of M5 branes partially wrapped on a two-cycle inside \(\mathbb{M}_{4}\) is still poorly understood from the field theory side, the best understood setup being the case of \(\mathbb{M}_{4}=\Sigma_{g}\times S^{2}\), where \(\Sigma_{g}\) is Riemann surface of genus \(g\)[22]. The full internal space \(M_{6}\) may then be viewed also as the fibration of the second Hirzebruch surface \(\mathbb{F}_{2}\simeq S^{2}\times S^{2}\) over the Riemann surface \(\Sigma_{g}\), and interpreted as the backreaction of a stack of M5 branes at a (resolved) \(\mathbb{C}^{2}/\mathbb{Z}_{2}\) singularity, further wrapped on \(\Sigma_{g}\), yielding insights about the dual four-dimensional field theories. In section 3.2.3 we have discussed the example of \(\mathbb{M}_{4}=\mathbb{E}\times S^{2}\), corresponding to M5 branes probing a \(\mathbb{C}^{2}/\mathbb{Z}_{2}\) singularity, further wrapped on a spindle \(\mathbb{E}\) and it would be interesting to confirm our predictions with a field-theoretic computation. It would also be nice to extend the methods of [22] for computing anomalies to setups where the M5 branes wrap a two-cycle with non-trivial normal bundle in an \(\mathbb{M}_{4}\).
In the context of type IIA supergravity, we have analysed the case of D4 branes completely wrapped on a general toric four-orbifold \(\mathbb{M}_{4}\), proving the gravitational block form of the entropy function conjectured in [14]. It would be very interesting to reproduce this from a field theory calculation of the partition function of five-dimensional SCFTs on \(S^{1}\times\mathbb{M}_{4}\), employing the method of [30] for performing localization on orbifolds. We have also analysed the case of D4 branes partially wrapped on a two-cycle inside \(\mathbb{M}_{4}\), providing a dual field theoretic proposal for a class of solutions to massive type IIA supergravity, constructed in [26]. Finally, we have also discussed the case of D3 branes partially wrapped on a two-cycle inside
\(\mathbb{M}_{4}\), corresponding to type IIB geometries of the form \(\text{AdS}_{3}\times M_{7}\), making contact with the framework of fibred GK geometries studied in [29]. In particular, we have improved some of the results previously obtained in [29], by revisiting some of the examples discussed there. In this paper we have not discussed geometries associated to M2 and D2 branes (already briefly mentioned in [2]), which are not naturally related to four-dimensional orbifolds \(\mathbb{M}_{4}\), but we expect that for these our method will generalize straightforwardly. It would be very interesting to incorporate new classes of supersymmetric geometries in our framework, such as for example \(\text{AdS}_{2}\times M_{8}\) in type IIB in order to study entropy functions of \(\text{AdS}_{5}\) black holes. It is tantalizing to speculate that our approach may be eventually extended to include geometries that do not necessarily contain AdS factors.
## Acknowledgements
We would like to thank Alessandro Tomasiello for useful discussions. EC, FF and DM are supported in part by the INFN. AZ is partially supported by the INFN and the MIUR-PRIN contract 2017CC72MK003.
## Appendix A Fixing the Kahler moduli of \(\text{AdS}_{5}\times M_{6}\) solutions with \(\mathbb{Z}_{2}\) symmetry
In this appendix we verify that for the Calabi-Yau geometry considered in section 3.2.2 there is a critical point of \(\mathds{V}^{(3)}(\lambda_{A},\lambda_{AB},\epsilon_{I})\) with \(\lambda_{a}=\lambda_{ab}=0\). Even though the group of gauge transformations (2.14) has a sufficient number of parameters to potentially gauge away all \(\lambda_{a}\) and \(\lambda_{ab}\), in orbifolds \(\mathbb{M}_{4}\) with a small number of vectors in the fan there are often obstructions that make this impossible.
In the following we will verify that the values of the Kahler moduli \(\lambda_{A}\), \(\lambda_{AB}\) given by
\[\begin{cases}\lambda_{a}=\lambda_{ab}=0\,,\\ \overline{\lambda}_{a,d+1}\text{ such that }\sum_{b}D_{ab}\overline{\lambda}_{b,d +1}=-\overline{\nu}_{M5}\,M_{a}\,,\\ \overline{\lambda}_{d+1}\text{ such that }\partial_{\overline{\lambda}_{d+1}} \mathds{V}^{(3)}=0\,,\end{cases}\] (A.1)
are an extremum of \(\mathds{V}^{(3)}\), under the constraints imposed by the flux equations
\[\partial_{\lambda_{A}}\mathds{V}^{(2)}(\lambda_{A},\lambda_{AB},\epsilon_{I} )=-\overline{\nu}_{M5}\,M_{A}\;.\] (A.2)
In practice, we will show that there exist values of the Lagrange parameters \(\rho_{A}\) such that the function
\[\mathcal{E}=\mathds{V}^{(3)}+\sum_{A}\rho_{A}\big{(}\partial_{\lambda_{A}} \mathds{V}^{(2)}+\overline{\nu}_{M5}\,M_{A}\big{)}\] (A.3)
has null derivatives with respect to \(\lambda_{A}\), \(\lambda_{AB}\). The equations that we will solve are then
\[\frac{\partial}{\partial\lambda_{A}}\,\mathcal{E}\left(\lambda_{A},\lambda_{AB}, \epsilon_{I},\rho_{A}\right)=0\,,\qquad\frac{\partial}{\partial\lambda_{AB}}\, \mathcal{E}\left(\lambda_{A},\lambda_{AB},\epsilon_{I},\rho_{A}\right)=0\,,\] (A.4)
where the \(\lambda\) are given by (A.1) while \(\epsilon_{1},\epsilon_{2}\) can take general values. We will study the case \(\epsilon_{1}=\epsilon_{2}=0\) separately.
Case \((\epsilon_{1},\epsilon_{2})\neq(0,0)\). We claim that values of \(\rho_{A}\) that solve (A.4) exist and that they are the solutions of the following linear system:
\[\begin{split}&\sum_{A}\rho_{A}\big{(}\mathcal{C}_{A}\big{)}\big{|}_ {b}=h_{b}\,,\qquad b=1,\ldots,d\,,\\ & h_{b}\equiv-\frac{1}{2}\,\Big{[}\Big{(}\epsilon_{3}+\sum_{a}c_ {1}^{\mathbb{T}}(L_{a})\Big{)}^{2}\Big{(}\bar{\lambda}_{d+1}+\sum_{a}\bar{ \lambda}_{a,d+1}c_{1}^{\mathbb{T}}(L_{a})\Big{)}^{2}\Big{]}\Big{|}_{b}\,.\end{split}\] (A.5)
Indeed, when the above equations are satisfied we have
\[\begin{split}&\partial_{\lambda_{B}}\sum_{A}\rho_{A}\big{(} \partial_{\lambda_{A}}\mathbb{V}^{(2)}+\overline{\nu}_{M5}\,M_{A}\big{)}=\int _{\mathbb{M}_{4}}\frac{\sum_{A}\rho_{A}\,\mathcal{C}_{A}\,\mathcal{C}_{B}}{ \epsilon_{3}+\sum_{a}c_{1}^{\mathbb{T}}(L_{a})}=\\ &\qquad=-\frac{1}{2}\int_{\mathbb{M}_{4}}\mathcal{C}_{B}\Big{(} \epsilon_{3}+\sum_{a}c_{1}^{\mathbb{T}}(L_{a})\Big{)}\Big{(}\bar{\lambda}_{d+ 1}+\sum_{b}\bar{\lambda}_{b,d+1}c_{1}^{\mathbb{T}}(L_{b})\Big{)}^{2}=-\, \partial_{\lambda_{B}}\mathbb{V}^{(3)},\end{split}\] (A.6)
where we have used \(\lambda_{a}=\lambda_{ab}=0\). This gives us \(\partial_{\lambda_{A}}\mathcal{E}=0\). The \(\partial_{\lambda_{AB}}\mathcal{E}=0\) equations can also be derived from (A.5) in a similar manner.
Let us now discuss the existence of solutions to the equations (A.5). The restrictions of \(\sum_{A}\rho_{A}\,\mathcal{C}_{A}\) to the fixed points are not independent: they satisfy the following linear relation:
\[0=\int_{\mathbb{M}_{4}}\sum_{A}\rho_{A}\,\mathcal{C}_{A}=\sum_{b}\frac{\sum_{ A}\rho_{A}\big{(}\mathcal{C}_{A}\big{)}|_{b}}{d_{b,b+1}\,\epsilon_{1}^{b}\, \epsilon_{2}^{b}}\,.\] (A.7)
However, the \(h_{b}\) also satisfy the same linear relation, given that the value of \(\overline{\lambda}_{d+1}\) is set by the condition \(\partial_{\overline{\lambda}_{d+1}}\mathbb{V}^{(3)}=0\), which reads
\[0=\frac{1}{2}\int_{\mathbb{M}_{4}}(\epsilon_{3}+\sum_{a}c_{1}^{\mathbb{T}}(L_{ a}))^{2}(\bar{\lambda}_{d+1}+\sum_{a}\bar{\lambda}_{a,d+1}c_{1}^{\mathbb{T}}(L_{ a}))^{2}=-\sum_{b}\frac{h_{b}}{d_{b,b+1}\,\epsilon_{1}^{b}\,\epsilon_{2}^{b}}\,.\] (A.8)
We can thus always eliminate one of the equations (A.5). Considering that shifting \(\rho_{A}\to\rho_{A}+\sum_{I}\alpha^{I}V_{I}^{A}\) with \(\sum_{I}\alpha^{I}\epsilon_{I}=0\) leaves the left-hand side of (A.5) invariant, we can always gauge away \(\rho_{d+1}\)\({}^{35}\) and one of the \(\rho_{a}\). We are left with a system of \(d-1\) equations in \(d-1\) variables which is generically non-singular and thus has a solution.
Footnote 35: We note that the \((\epsilon_{1},\epsilon_{2})\neq(0,0)\) hypothesis is needed to set \(\rho_{d+1}=0\).
There is an edge case in which the system of equations must be further reduced: when there is \(\overline{a}\in\{1,\ldots,d\}\) such that \(\epsilon_{2}^{\overline{a}}=0\) (and consequently \(\epsilon_{1}^{\overline{a}-1}=0\)). Since
\((\epsilon_{1},\epsilon_{2})\neq(0,0)\) and \(\mathbb{M}_{4}\) is compact we must have \(\epsilon_{1}^{\overline{a}}\neq 0\), \(\epsilon_{2}^{\overline{a}-1}\neq 0\). The \(b=\overline{a}-1\) and \(b=\overline{a}\) equations are (\(\rho_{d+1}\) has been gauged away)
\[-\rho_{\overline{a}}\,\epsilon_{2}^{\overline{a}-1}=h_{\overline{a}-1}\,,\qquad -\rho_{\overline{a}}\,\epsilon_{1}^{\overline{a}}=h_{\overline{a}}\,.\] (A.9)
In principle, depending on the values of \(h_{\overline{a}-1}\) and \(h_{\overline{a}}\), the above equations might not have a solution. However, if we consider that \(h_{\overline{a}-1}\) and \(h_{\overline{a}}\) can only depend on \(\epsilon_{2}^{\overline{a}-1}\) and \(\epsilon_{1}^{\overline{a}}\) respectively,\({}^{36}\) and that in general \(d_{\overline{a}-1,\overline{a}}\,\epsilon_{1}^{\overline{a}-1}=-d_{\overline{a},\overline{a}+1}\,\epsilon_{2}^{\overline{a}}\), then the only way for the right-hand side of (A.8) to be finite is for
Footnote 36: By definition \(h_{\overline{a}-1}\) and \(h_{\overline{a}}\) are the restrictions of an equivariant form on the fixed points \(\overline{a}-1\) and \(\overline{a}\).
\[\frac{h_{\overline{a}-1}}{\epsilon_{2}^{\overline{a}-1}}=\frac{h_{\overline{ a}}}{\epsilon_{1}^{\overline{a}}}\,,\] (A.10)
which means that equations (A.5) are solvable without issue.
Case \((\epsilon_{1},\epsilon_{2})=(0,0)\). It is not immediately clear whether the solutions to equations (A.5) are well-behaved in the limit \(\epsilon_{1},\epsilon_{2}\to 0\). However, when \(\epsilon_{1}\) and \(\epsilon_{2}\) are zero the equations (A.4) are quite simple and we can solve them directly.
For \(\epsilon_{1}=\epsilon_{2}=0\) we have \(c_{1}^{\mathbb{T}}(L_{a})=c_{1}(L_{a})\) and thus
\[\int_{\mathbb{M}_{4}}c_{1}^{\mathbb{T}}(L_{a_{1}})\ldots c_{1}^{\mathbb{T}}(L_ {a_{k}})=\begin{cases}D_{a_{1}a_{2}}&k=2\\ 0&\text{otherwise}\end{cases}\,.\] (A.11)
From this relation it easily follows that
\[\int_{\mathbb{M}_{4}}\frac{c_{1}^{\mathbb{T}}(L_{a_{1}})\ldots c_{1}^{\mathbb{ T}}(L_{a_{k}})}{\epsilon_{3}+\sum_{b}c_{1}^{\mathbb{T}}(L_{b})}=\begin{cases}( \epsilon_{3})^{-1}D_{a_{1}a_{2}}&k=2\\ 0&k>2\end{cases}\,.\] (A.12)
Using the above relations the extremization equations (A.4) become
\[\begin{cases}\partial_{\lambda_{a}}\text{ equation:}&\frac{1}{2}\sum_{b}D_{ab}( \overline{\lambda}_{d+1}^{2}+2\,\epsilon_{3}\,\overline{\lambda}_{d+1} \overline{\lambda}_{b,d+1})+(\epsilon_{3})^{-1}\sum_{b}D_{ab}\rho_{b}=0\\ \partial_{\lambda_{ab}}\text{ equation:}&\frac{1}{2}\,\epsilon_{3}\,\overline{ \lambda}_{d+1}^{2}D_{ab}-\rho_{d+1}D_{ab}=0\\ \partial_{\lambda_{a,d+1}}\text{ equation:}&-\sum_{b}D_{ab}(\epsilon_{3}^{2}\, \overline{\lambda}_{d+1}\overline{\lambda}_{b,d+1}+\epsilon_{3}\,\overline{ \lambda}_{d+1}^{2})-\sum_{b}D_{ab}(\rho_{b}-\rho_{d+1})=0\\ \partial_{\lambda_{d+1,d+1}}\text{ equation:}&\frac{1}{2}\sum_{a,b}D_{ab}( \epsilon_{3}^{3}\,\overline{\lambda}_{a,d+1}\overline{\lambda}_{b,d+1}+6\, \epsilon_{3}^{2}\,\overline{\lambda}_{d+1}\overline{\lambda}_{b,d+1}+3\epsilon _{3}\,\overline{\lambda}_{d+1}^{2})\\ &+\sum_{a,b}D_{ab}(\rho_{b}-\rho_{d+1})=0\end{cases}\,.\] (A.13)
The \(\partial_{\lambda_{d+1}}\) equation was omitted because it is trivial: \(\partial_{\lambda_{d+1}}\sum_{A}\rho_{A}\big{(}\partial_{\lambda_{A}}\text{V} ^{(2)}+\overline{\nu}_{M5}M_{A}\big{)}=0\) and \(\partial_{\lambda_{d+1}}\text{V}^{(3)}=-\partial_{\overline{\lambda}_{d+1}} \text{V}^{(3)}=0\) because of (A.1).
The solution to (A.13) is\({}^{37}\)
Footnote 37: When we plug this solution into the left-hand side of the \(\partial_{\lambda_{d+1,d+1}}\) equation we do not get zero straight away, but rather we get the same expression as \(\partial_{\overline{\lambda}_{d+1}}\text{V}^{(3)}\), which is zero by (A.1).
\[\rho_{a}=-\epsilon_{3}^{2}\,\overline{\lambda}_{d+1}\overline{\lambda}_{a,d+1}-\frac{1}{2}\,\epsilon_{3}\,\overline{\lambda}_{d+1}^{2}\,,\qquad\rho_{d+1}=\frac{1}{2}\,\epsilon_{3}\,\overline{\lambda}_{d+1}^{2}\,,\] (A.14)
and thus (A.1) is the proper extremum of \(\text{V}^{(3)}\) under the flux constraints.
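For concreteness, the algebra above can be verified on an explicit example. The sketch below (ours, assuming Python with SymPy) takes the intersection matrix of \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) as \(D_{ab}\) and checks that (A.14) solves the \(\lambda_{a}\), \(\lambda_{ab}\) and \(\lambda_{a,d+1}\) equations of (A.13); the remaining \(\lambda_{d+1,d+1}\) equation reduces to the extremality condition in (A.1), as noted in footnote 37:

```python
import sympy as sp

eps3, lam = sp.symbols('epsilon3 lambdabar')       # lam = \bar\lambda_{d+1}
lb = sp.symbols('lb1:5')                           # \bar\lambda_{a,d+1}, a = 1,...,4
# intersection matrix of P^1 x P^1, as a concrete example of D_ab
D = sp.Matrix([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])

# Lagrange parameters (A.14); the lambda_ab equation fixes rho_{d+1}
rho = [-eps3**2*lam*lb[a] - sp.Rational(1, 2)*eps3*lam**2 for a in range(4)]
rho_dp1 = sp.Rational(1, 2)*eps3*lam**2

for a in range(4):
    d_lam_a = sp.Rational(1, 2)*sum(D[a, b]*(lam**2 + 2*eps3*lam*lb[b]) for b in range(4)) \
              + sum(D[a, b]*rho[b] for b in range(4))/eps3
    d_lam_a_dp1 = -sum(D[a, b]*(eps3**2*lam*lb[b] + eps3*lam**2) for b in range(4)) \
                  - sum(D[a, b]*(rho[b] - rho_dp1) for b in range(4))
    assert sp.simplify(d_lam_a) == 0 and sp.simplify(d_lam_a_dp1) == 0
# the remaining lambda_{d+1,d+1} equation reduces to d V^(3)/d lambdabar = 0,
# which holds by (A.1) (cf. footnote 37)
print('(A.14) solves the lambda_a, lambda_ab and lambda_{a,d+1} equations in (A.13)')
```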
## Appendix B AdS\({}_{3}\times M_{7}\) solutions with the addition of higher times
In this appendix we revisit the computation of section 4.3, now with the inclusion of second and triple times in the equivariant volume. For the AdS\({}_{3}\times M_{7}\) solutions we considered in section 4.3 there was no need to add any higher times. We will now show that it is still possible to perform the computation even when the equivariant volume is over-parameterized. The extremization procedure for the parameters in excess plays a crucial role this time: relations that were automatically verified when \(\mathds{V}^{(2)}\) only included single times are now derived as extremization conditions. This provides further evidence that extremization is the correct way to deal with any parameter \(\lambda\) that is not fixed by the flux constraints.
The second degree homogeneous component of the equivariant volume with triple times is given by
\[\begin{split}&\mathbb{V}^{\,(2)}(\lambda_{A}\,,\lambda_{AB}\,, \lambda_{ABC}\,,\epsilon_{I})=\int_{\mathbb{M}_{4}}\,\frac{(\tau^{\mathbb{T}}) ^{2}}{2\,p\,\mathcal{C}_{d+1}\,\mathcal{C}_{d+2}}\,,\\ &\tau^{\mathbb{T}}=\sum_{A}\lambda_{A}\,\mathcal{C}_{A}+\sum_{A, B}\lambda_{AB}\,\mathcal{C}_{A}\,\mathcal{C}_{B}+\sum_{A,B,C}\lambda_{ABC}\, \mathcal{C}_{A}\,\mathcal{C}_{B}\,\mathcal{C}_{C}\,.\end{split}\] (B.1)
We need to impose the following flux constraints:
\[-\nu_{D3}\,M_{A}=\partial_{\lambda_{A}}\mathbb{V}^{\,(2)}(\lambda_{A}\,, \lambda_{AB}\,,\lambda_{ABC}\,,\epsilon_{I})=\int_{\mathbb{M}_{4}}\,\frac{ \mathcal{C}_{A}\,\tau^{\mathbb{T}}}{p\,\mathcal{C}_{d+1}\,\mathcal{C}_{d+2}}\,.\] (B.2)
Proceeding in a similar way as in section 4.3, we will set all the \(\lambda\) to zero except for \(\lambda_{d+1,\,d+2}\) and \(\lambda_{d+1,\,d+2,\,A}\). This assumption is justified by the fact that, in principle, the group of gauge transformations for the single, double and triple times has enough parameters to gauge away all the \(\lambda\) except \(\lambda_{d+1,\,d+2}\) and \(\lambda_{d+1,\,d+2,\,A}\).\({}^{38}\) At the end of this appendix we will quickly check that \(\mathds{V}^{(2)}\) does indeed have a critical point for \(\lambda_{a,b}=\lambda_{a,b,d+1}=\lambda_{a,b,d+2}=\lambda_{a,b,c}=0\), thus verifying the correctness of this choice of \(\lambda\). The flux constraints (B.2) now read
Footnote 38: Note that \(\lambda_{a,b}\), \(\lambda_{a,b,d+1}\), \(\lambda_{a,b,d+2}\) and \(\lambda_{a,b,c}\) do not appear inside \(\mathds{V}^{(2)}\) unless \(a,b,c\in\{\overline{a},\overline{a}+1\}\) for some \(\overline{a}\in\{1,\ldots,d\}\).
\[-\nu_{D3}\,M_{a}=\frac{1}{p}\int_{\mathbb{M}_{4}}c_{1}^{\overline{\gamma}}(L_{ a})\Big{(}\lambda_{d+1,\,d+2}+\sum_{A}\lambda_{d+1,\,d+2,\,A}\,\mathcal{C}_{A} \Big{)}=\sum_{b}D_{ab}\overline{\lambda}_{b}\,,\] (B.3)
where
\[\overline{\lambda}_{a}=\frac{\lambda_{d+1,\,d+2,\,a}-\lambda_{d+1,\,d+1,\,d+ 2}}{p}+\frac{\mathfrak{t}_{a}(\lambda_{d+1,\,d+1,\,d+2}-\lambda_{d+1,\,d+2,\, d+2})}{p^{2}}\,.\] (B.4)
Up to gauge transformations, the \(\overline{\lambda}_{a}\) are then fixed to be \(\overline{\lambda}_{a}=-\nu_{D3}\,m_{a}\), where the \(m_{a}\) are such that \(\sum_{b}D_{ab}m_{b}=M_{a}\).
We notice that the flux constraints did not fix all the \(\lambda\), but rather there is one such parameter left:
\[\begin{split}\mathbb{V}^{\,(2)}=&\,\frac{1}{2p}\int_{ \mathbb{M}_{4}}\mathcal{C}_{d+1}\,\mathcal{C}_{d+2}\left(\lambda_{d+1,\,d+2}+ \sum_{A}\lambda_{d+1,\,d+2,\,A}\,\mathcal{C}_{A}\right)^{2}\\ =&\,\frac{p}{2}\,\nu_{D3}^{2}\int_{\mathbb{M}_{4}} \mathcal{C}_{d+1}\,\mathcal{C}_{d+2}\left(\overline{\lambda}-\sum_{a}m_{a}\,c _{1}^{\mathbb{T}}(L_{a})\right)^{2}\,,\end{split}\] (B.5)
where
\[\overline{\lambda}=\,\frac{\lambda_{d+1,\,d+2}-\epsilon_{3}\,\lambda_{d+1,\,d +1,\,d+2}}{p\,\nu_{D3}}+\frac{\epsilon_{4}\left(\lambda_{d+1,\,d+1,\,d+2}- \lambda_{d+1,\,d+2,\,d+2}\right)}{p^{2}\,\nu_{D3}}\,.\] (B.6)
Our prescription is to fix the value of \(\overline{\lambda}\) by extremizing \(\mathbb{V}^{\,(2)}\) with respect to it. If we call \(b(\epsilon_{I})\) the extremal value of \(\overline{\lambda}\), we find that
\[0=\frac{\partial}{\partial\overline{\lambda}}\mathbb{V}^{(2)}=p\,\nu_{D3}^{2} \int_{\mathbb{M}_{4}}\mathcal{C}_{d+1}\,\mathcal{C}_{d+2}\left(b(\epsilon_{I}) -\sum_{a}m_{a}\,c_{1}^{\mathbb{T}}(L_{a})\right).\] (B.7)
Notably, the equation we obtain is exactly the same as (4.49). In the context of the computation without higher times, equation (4.49) was a trivial relation, a predictable consequence of the fact that there are only \(d-1\) single times, but \(d\) fixed points. In the computation of this appendix the same relation is now derived as an extremization condition.
If we substitute (B.7) into \(\mathbb{V}^{(2)}\) we get
\[\mathbb{V}^{\,(2)}=-\frac{p}{2}\,\nu_{D3}^{2}\int_{\mathbb{M}_{4}}\mathcal{C} _{d+1}\,\mathcal{C}_{d+2}\left(b(\epsilon_{I})-\sum_{a}m_{a}\,c_{1}^{\mathbb{ T}}(L_{a})\right)\sum_{a}m_{a}\,c_{1}^{\mathbb{T}}(L_{a})\,,\] (B.8)
which is the same as the main result of section 4.3.
We can quickly verify that the values of \(\lambda\) that we have fixed are an extremum of \(\mathbb{V}^{(2)}\) by employing the same strategy as in appendix A. We can find the values of the Lagrange parameters \(\rho_{A}\) such that the function
\[\mathcal{E}(\lambda_{A}\,,\lambda_{AB}\,,\lambda_{ABC}\,,\epsilon_{I},\rho_{A} )=\mathbb{V}^{(2)}+\sum_{A}\rho_{A}\big{(}\partial_{\lambda_{A}}\mathbb{V}^{( 2)}+\nu_{D3}\,M_{A}\big{)}\] (B.9)
has null derivatives with respect to \(\lambda_{A}\,,\lambda_{AB}\,,\lambda_{ABC}\) by solving the following linear system:
\[\sum_{A}\rho_{A}\big{(}\mathcal{C}_{A}\big{)}\big{|}_{b}=-p\,\nu_{D3}\Big{[} \mathcal{C}_{d+1}\,\mathcal{C}_{d+2}\left(b(\epsilon_{I})-\sum_{a}m_{a}\,c_{1 }^{\mathbb{T}}(L_{a})\right)\Big{]}\Big{|}_{b}\,.\] (B.10)
By the same line of reasoning as in appendix A, solutions to this system exist and thus (B.8) is the proper extremal value of \(\mathbb{V}^{(2)}\) (with respect to the extremization in \(\lambda\)).
## Appendix C AdS\({}_{4}\times M_{6}\) gravity solutions
In this appendix we study the family of AdS\({}_{4}\times M_{6}\) solutions to massive type IIA supergravity constructed in [26]. The internal space is a \(\mathbb{P}^{1}\) bundle over a four-dimensional compact manifold, \(\mathbb{P}^{1}\hookrightarrow M_{6}\to B_{4}\), where the base space can be either a Kahler-Einstein manifold (\(B_{4}=\text{KE}_{4}\)) or the product of two Riemann surfaces (\(B_{4}=\Sigma_{1}\times\Sigma_{2}\)). In the general class of solutions in [26], the \(\mathbb{P}^{1}\) bundle is the projectivization of the canonical bundle over \(B_{4}\), \(\mathbb{P}(K\oplus\mathcal{O})\). In what follows, we will focus on spaces with positive curvature and set to zero the constant parameter \(\ell\) appearing in [26]. This last choice is motivated by our interest in systems with only D4 and D8 branes; therefore all fluxes, except for \(F_{(0)}\) and \(F_{(4)}\), must vanish. In both configurations, the metric in the string frame is\({}^{39}\)
Footnote 39: Notice the different normalization of \(\text{d}s_{M_{6}}^{2}\) with respect to [26].
\[\text{d}s_{\text{s.f.}}^{2}=\text{e}^{2A}\big{(}\text{d}s_{\text{AdS}_{4}}^{2} +\text{d}s_{M_{6}}^{2}\big{)}\,,\] (C.1)
where \(\text{d}s_{\text{AdS}_{4}}^{2}\) is the metric on AdS\({}_{4}\) with unit radius. The details of the internal space, along with the expressions for the dilaton and the form fluxes, will be given case by case. The solutions in [26] corresponding to the geometries discussed in section 4.2 are cut in half along the equator of the \(\mathbb{P}^{1}\) fibre due to the presence of an O8 plane.
### Kahler-Einstein base space
We begin by considering \(B_{4}=\text{KE}_{4}\), in which case the metric on \(M_{6}\) is given by (setting \(\kappa=+1\) in [26])
\[\text{d}s_{M_{6}}^{2}=-\frac{q^{\prime}}{4xq}\,\text{d}x^{2}-\frac{q}{xq^{ \prime}-4q}\,D\psi^{2}+\frac{q^{\prime}}{3q^{\prime}-xq^{\prime\prime}}\, \text{d}s_{\text{KE}_{4}}^{2}\,,\] (C.2)
where
\[q(x)=x^{6}+\frac{\sigma}{2}x^{4}+4x^{3}-\frac{1}{2}\,,\] (C.3)
with \(\sigma\) a real parameter. Here, we introduced \(D\psi=\text{d}\psi+\rho\), where the one-form \(\rho\) is defined on \(\text{KE}_{4}\) and is such that \(\text{d}_{4}\rho=-\mathcal{R}\), with \(\mathcal{R}\) the Ricci form of \(\text{KE}_{4}\). The line element \(\text{d}s_{\text{KE}_{4}}^{2}\) is normalized such that its scalar curvature is \(R_{\text{KE}}=4\). The background under examination corresponds to \(\sigma>-9\), in which case the metric is smooth and well-defined, given that \(\psi\) is \(2\pi\)-periodic and \(x\in[0,x_{+}]\), with \(x_{+}\) the only positive root of \(q(x)\). At \(x=0\) the \(S^{1}\) fibre parameterized by \(\psi\) does not shrink, and an O8-plane is located there [26]. The warp factor of the ten-dimensional metric is
\[\text{e}^{2A}=L^{2}\sqrt{\frac{x^{2}q^{\prime}-4xq}{q^{\prime}}}\,,\] (C.4)
with \(L\) a real constant. The dilaton reads
\[\mathrm{e}^{2\Phi}=\frac{72L^{4}}{f_{0}^{2}}\,\frac{xq^{\prime}}{(3q^{\prime}-xq ^{\prime\prime})^{2}}\bigg{(}\frac{x^{2}q^{\prime}-4xq}{q^{\prime}}\bigg{)}^{3/2}\,,\] (C.5)
where we find convenient to introduce the constant \(f_{0}\) in order to parameterize the Romans mass
\[F_{(0)}=\frac{f_{0}}{L^{3}}\,,\] (C.6)
and the four-form flux is given by
\[F_{(4)}=-\frac{L\,f_{0}}{12}\bigg{[}\frac{3x(x^{6}-5x^{3}-\sigma x-5)}{(1-x^{3 })^{2}}\,\mathrm{d}x\wedge D\psi\wedge\mathcal{R}+\frac{9x^{5}+5\sigma x^{3}+ 45x^{2}+\sigma}{6(1-x^{3})}\,\mathcal{R}\wedge\mathcal{R}\bigg{]}\,.\] (C.7)
All the other fields, namely the two-forms \(B_{(2)}\) and \(F_{(2)}\), vanish.
The first step we take in the analysis is the quantization of the fluxes, which imposes
\[(2\pi\ell_{s})F_{(0)}=n_{0}\in\mathbb{Z}\,,\qquad\frac{1}{(2\pi\ell_{s})^{3}} \int_{\Sigma_{4}}F_{(4)}=N_{\Sigma_{4}}\in\mathbb{Z}\] (C.8)
for any four-cycle \(\Sigma_{4}\) on \(M_{6}\). Letting \(\Sigma_{\alpha}\) be a basis of two-cycles for \(H_{2}(\mathrm{KE}_{4},\mathbb{Z})\), we take as a basis for \(H_{4}(M_{6},\mathbb{Z})\) the set \(\{C_{\alpha},C_{+}\}\), where \(C_{\alpha}\) are the four-cycles obtained by considering the fibration \(\mathbb{P}^{1}\hookrightarrow C_{\alpha}\to\Sigma_{\alpha}\), and \(C_{+}\) is a copy of the KE base space at \(x=x_{+}\). Performing the integrals, we obtain the fluxes
\[\begin{split} N_{\alpha}&=\frac{\pi^{2}L\,f_{0}}{6( 2\pi\ell_{s})^{3}}\,\frac{x_{+}^{2}(3x_{+}^{3}+2\sigma x_{+}+15)}{1-x_{+}^{3}} \,m_{k}\,n_{\alpha}\,,\\ N_{+}&=-\frac{\pi^{2}L\,f_{0}}{18(2\pi\ell_{s})^{3} }\,\frac{9x_{+}^{5}+5\sigma x_{+}^{3}+45x_{+}^{2}+\sigma}{1-x_{+}^{3}}\,M_{k} \,,\end{split}\] (C.9)
where we defined the integers
\[n(\Sigma_{\alpha})=\frac{1}{2\pi}\int_{\Sigma_{\alpha}}\mathcal{R}=m_{k}\,n_{ \alpha}\,,\qquad M_{k}=\frac{1}{4\pi^{2}}\int_{\mathrm{KE}_{4}}\mathcal{R} \wedge\mathcal{R}\,.\] (C.10)
\(m_{k}\) is the Fano index of the \(\mathrm{KE}_{4}\) and is the largest positive integer such that all of the \(n_{\alpha}\) are integers. These integers take the values \(m_{k}=(3,2,1)\) and \(M_{k}=(9,8,6)\) for \(\mathbb{P}^{2}\), \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and \(\mathrm{dP}_{3}\), respectively.
For the rest of this subsection we will restrict to the case \(\sigma=0\). In order to understand this assumption, we first need to make contact with the equivariant volume extremization procedure. The toric manifold \(\mathrm{KE}_{4}\) is completely described by its fan \(v^{a}\), which defines the toric divisors \(\Sigma_{a}\) and their associated line bundles \(L_{a}\). The set of divisors \(D_{a}\subset M_{6}\) is naturally induced as the \(\mathbb{P}^{1}\) fibrations over \(\Sigma_{a}\), to which we must add \(D_{d+1}\), a copy of \(\mathrm{KE}_{4}\) at the pole of the half \(\mathbb{P}^{1}\). The corresponding integer fluxes are defined as
\[M_{A}=\frac{1}{(2\pi\ell_{s})^{3}}\int_{D_{A}}F_{(4)}\] (C.11)
and, for \(a=1,\ldots,d\), they read
\[M_{a}=\frac{\pi L\,f_{0}}{12(2\pi\ell_{s})^{3}}\,\frac{x_{+}^{2}(3x_{+}^{3}+2 \sigma x_{+}+15)}{1-x_{+}^{3}}\times\int_{\Sigma_{a}}\mathcal{R}\,.\] (C.12)
Recalling that \(\sum_{a}c_{1}(L_{a})=c_{1}(T\text{KE}_{4})=\mathcal{R}/(2\pi)\), we obtain
\[\sum_{a}\int_{\Sigma_{a}}\mathcal{R}=\sum_{a}\int_{\text{KE}_{4}}\mathcal{R} \wedge c_{1}(L_{a})=\frac{1}{2\pi}\int_{\text{KE}_{4}}\mathcal{R}\wedge \mathcal{R}\,,\] (C.13)
which allows us to compute the sum
\[\sum_{a}M_{a}=\frac{\pi^{2}L\,f_{0}}{6(2\pi\ell_{s})^{3}}\,\frac{x_{+}^{2}(3x_ {+}^{3}+2\sigma x_{+}+15)}{1-x_{+}^{3}}\,M_{k}\,.\] (C.14)
Identifying \(M_{d+1}\) with \(N_{+}\) we have
\[\sum_{A}M_{A}=\sum_{a}M_{a}+N_{+}=-\frac{\pi^{2}L\,f_{0}\,\sigma}{18(2\pi\ell _{s})^{3}}\,M_{k}\,,\] (C.15)
and consistency with the \(I=3\) component of the third condition in (4.13), which reads \(\sum_{A}M_{A}=0\), imposes \(\sigma=0\).
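The cancellation behind (C.15) does not rely on \(x_{+}\) being a root of \(q(x)\); it is a polynomial identity, which the following short SymPy check (ours, not part of the original text) makes explicit:

```python
import sympy as sp

x, sigma = sp.symbols('x sigma')

# x-dependent combination entering sum_a M_a + N_+, built from (C.14) and (C.9)
combo = x**2*(3*x**3 + 2*sigma*x + 15) \
        - sp.Rational(1, 3)*(9*x**5 + 5*sigma*x**3 + 45*x**2 + sigma)

# the x-dependence cancels and only a term proportional to sigma survives, as in (C.15)
assert sp.expand(combo + sigma*(1 - x**3)/3) == 0
print('sum_A M_A is proportional to sigma, so sum_A M_A = 0 forces sigma = 0')
```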
When \(\sigma\) vanishes, the zeros of (C.3) can be computed analytically,
\[x^{3}=-2\pm\frac{3}{\sqrt{2}}\qquad\Longrightarrow\qquad x_{+}=\left(\frac{ 3-2\sqrt{2}}{\sqrt{2}}\right)^{1/3},\] (C.16)
and the fluxes simplify to
\[N_{\alpha}=\frac{\pi^{2}L\,f_{0}}{2(2\pi\ell_{s})^{3}}\left(\frac{3+2\sqrt{2} }{\sqrt{2}}\right)^{1/3}m_{k}\,n_{\alpha}\,,\qquad N_{+}=-\frac{\pi^{2}L\,f_{ 0}}{2(2\pi\ell_{s})^{3}}\left(\frac{3+2\sqrt{2}}{\sqrt{2}}\right)^{1/3}M_{k}\,.\] (C.17)
In order for \(N_{\alpha}\) and \(N_{+}\) to be integers, as imposed by (C.8), we require
\[\frac{\pi^{2}L\,f_{0}}{2(2\pi\ell_{s})^{3}}\left(\frac{3+2\sqrt{2}}{\sqrt{2}} \right)^{1/3}=\frac{N}{h}\,,\] (C.18)
where \(N\) is an arbitrary integer and \(h=\text{hcf}(M_{k},m_{k})\). Specifically, \(h=(3,2,1)\) for \(\mathbb{P}^{2}\), \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and \(\text{dP}_{3}\), respectively. On the other hand, the first condition of (C.8) yields
\[\frac{f_{0}}{L^{3}}=\frac{n_{0}}{2\pi\ell_{s}}\,.\] (C.19)
Combining (C.18) and (C.19) we obtain the following quantization conditions on the parameters \(L\) and \(f_{0}\)
\[\begin{split} L^{4}&=(2\pi\ell_{s})^{4}\,\frac{2^{ 4/3}(3-2\sqrt{2})^{1/3}}{\pi^{2}n_{0}}\bigg{(}\frac{N}{h}\bigg{)}\,,\\ f_{0}^{2}&=(2\pi\ell_{s})^{4}\,\frac{4(3-2\sqrt{2} )^{1/2}\,n_{0}^{1/2}}{\pi^{3}}\bigg{(}\frac{N}{h}\bigg{)}^{3/2}\,.\end{split}\] (C.20)
The free energy of our AdS\({}_{4}\times M_{6}\) background with KE base space can be read off from the four-dimensional effective Newton constant \(G_{(4)}\) as [31]
\[F=\frac{\pi}{2G_{(4)}}=\frac{16\pi^{3}}{(2\pi\ell_{s})^{8}}\int\mathrm{e}^{8A-2 \Phi}\,\mathrm{vol}(M_{6})\,,\] (C.21)
which gives the general result
\[F=\frac{1}{(2\pi\ell_{s})^{8}}\,\frac{8\pi^{6}L^{4}f_{0}^{2}}{135}\,x_{+}^{2}( 9x_{+}^{3}+5\sigma x_{+}+45)\,M_{k}\,.\] (C.22)
In this computation we used the fact that, in our conventions, the Kahler form is \(J_{\mathrm{KE}}=\mathcal{R}\), therefore the total volume of the KE\({}_{4}\) can be determined from
\[\mathrm{Vol}(\mathrm{KE}_{4})=\frac{1}{2}\int_{\mathrm{KE}_{4}}\mathcal{R} \wedge\mathcal{R}=2\pi^{2}M_{k}\,.\] (C.23)
Setting \(\sigma=0\) and substituting the expressions of \(x_{+}\), \(L\) and \(f_{0}\) into (C.22), the free energy then reads
\[F=\frac{32\sqrt{2}(3-2\sqrt{2})\pi}{5n_{0}^{1/2}}\bigg{(}\frac{N}{h}\bigg{)}^{ 5/2}M_{k}\,,\] (C.24)
which agrees with the first equation in (4.25) with a plus sign, taking into account that, for our examples, \(h=m_{k}\).
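The agreement can also be checked directly by machine. The sketch below (ours, assuming Python with SymPy) assembles the free energy from (C.16), (C.20) and (C.22) at \(\sigma=0\) and compares it with (C.24); the comparison is performed numerically at an arbitrary sample point to avoid denesting the radicals by hand:

```python
import sympy as sp

n0, N, h, Mk, ls = sp.symbols('n_0 N h M_k ell_s', positive=True)
twopil = 2*sp.pi*ls

# x_+ from (C.16) and the quantized L^4, f_0^2 from (C.20)
xp  = ((3 - 2*sp.sqrt(2))/sp.sqrt(2))**sp.Rational(1, 3)
L4  = twopil**4*2**sp.Rational(4, 3)*(3 - 2*sp.sqrt(2))**sp.Rational(1, 3)/(sp.pi**2*n0)*(N/h)
f02 = twopil**4*4*sp.sqrt(3 - 2*sp.sqrt(2))*sp.sqrt(n0)/sp.pi**3*(N/h)**sp.Rational(3, 2)

# free energy (C.22) at sigma = 0 and the claimed result (C.24)
F     = (8*sp.pi**6*L4*f02/135)/twopil**8*xp**2*(9*xp**3 + 45)*Mk
F_C24 = 32*sp.sqrt(2)*(3 - 2*sp.sqrt(2))*sp.pi/(5*sp.sqrt(n0))*(N/h)**sp.Rational(5, 2)*Mk

# the difference vanishes (checked numerically at a sample point)
vals = {N: 3, h: 2, n0: 5, Mk: 7, ls: 1}
assert abs(sp.N((F - F_C24).subs(vals), 30)) < 1e-20
print('(C.22) with (C.16) and (C.20) reproduces (C.24)')
```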
### \(\boldsymbol{S^{2}\times S^{2}}\) base space
We now move to the second case, \(B_{4}=S_{1}^{2}\times S_{2}^{2}\), whose six-dimensional metric is (setting \(\kappa_{1}=\kappa_{2}=+1\) in [26])
\[\mathrm{d}s_{M_{6}}^{2}=-\frac{q^{\prime}}{4xq}\,\mathrm{d}x^{2}-\frac{q}{xq^{ \prime}-4q}\,D\psi^{2}+\frac{q^{\prime}}{xu_{1}}\,\mathrm{d}s_{S_{1}^{2}}^{2} +\frac{q^{\prime}}{xu_{2}}\,\mathrm{d}s_{S_{2}^{2}}^{2}\,,\] (C.25)
where
\[\begin{split} q(x)=x^{6}+\frac{\sigma}{2}x^{4}+2(1+t)x^{3}-\frac {t}{2}\,,\\ u_{1}(x)=12x(1-x^{3})\,,\qquad u_{2}(x)=12x(t-x^{3})\,,\end{split}\] (C.26)
with \(\sigma\) and \(t\) real constants. \(D\psi=\mathrm{d}\psi+\rho\), where \(\rho\) is a one-form on \(S_{1}^{2}\times S_{2}^{2}\) such that \(\mathrm{d}_{4}\rho=-(\mathcal{R}_{1}+\mathcal{R}_{2})\), with \(\mathcal{R}_{i}\) the Ricci form of \(S_{i}^{2}\), while each \(\mathrm{d}s_{S_{i}^{2}}^{2}\) is the metric on a two-sphere with unit radius. The configuration of interest is realized when \(t>0\) and \(\sigma>-9\cdot 4^{-1/3}(1+t)^{2/3}\), and in this region the metric is smooth and well-defined, given that \(\psi\) is \(2\pi\)-periodic and \(x\in[0,x_{+}]\), with \(x_{+}\) the only positive root of \(q(x)\). Also in this case, we have an O8-plane at \(x=0\).
\[\mathrm{e}^{2A}=L^{2}\sqrt{\frac{x^{2}q^{\prime}-4xq}{q^{\prime}}}\,,\] (C.27)
whereas the dilaton is now given by
\[\mathrm{e}^{2\Phi}=\frac{72L^{4}}{f_{0}^{2}}\,\frac{q^{\prime}}{xu_{1}u_{2}} \bigg{(}\frac{x^{2}q^{\prime}-4xq}{q^{\prime}}\bigg{)}^{3/2}\,.\] (C.28)
The remaining non-vanishing fields are the Romans mass
\[F_{(0)}=\frac{f_{0}}{L^{3}}\,,\] (C.29)
with \(f_{0}\in\mathbb{R}\), and the four-form flux
\[F_{(4)} =-\frac{L\,f_{0}}{12}\bigg{[}\frac{3x\big{(}x^{6}-(t+4)x^{3}- \sigma x-(2t+3)\big{)}}{(1-x^{3})^{2}}\,\mathrm{d}x\wedge D\psi\wedge\mathcal{ R}_{1}\] \[+\frac{3x\big{(}x^{6}-(4t+1)x^{3}-\sigma tx-t(3t+2)\big{)}}{(t-x^ {3})^{2}}\,\mathrm{d}x\wedge D\psi\wedge\mathcal{R}_{2}\] (C.30) \[-\frac{9x^{8}+5\sigma x^{6}+18(t+1)x^{5}-2\sigma(t+1)x^{3}-9(t^{ 2}+3t+1)x^{2}-\sigma t}{3(1-x^{3})(t-x^{3})}\,\mathcal{R}_{1}\wedge\mathcal{ R}_{2}\bigg{]}\,.\]
In order to quantize the fluxes as in (C.8), we take as a basis for \(H_{4}(M_{6},\mathbb{Z})\) the set \(\{C_{1},C_{2},C_{+}\}\), where \(C_{i}\) are the fibrations \(\mathbb{P}^{1}\hookrightarrow C_{i}\to S_{i}^{2}\) (at a fixed point on the other sphere) and \(C_{+}\) is a copy of \(S_{1}^{2}\times S_{2}^{2}\) at \(x=x_{+}\). The expressions of the three fluxes are
\[N_{1} =\frac{\pi^{2}L\,f_{0}}{3(2\pi\ell_{s})^{3}}\,\frac{x_{+}^{2} \big{(}3x_{+}^{3}+2\sigma x_{+}+3(2t+3)\big{)}}{1-x_{+}^{3}}\,,\] \[N_{2} =\frac{\pi^{2}L\,f_{0}}{3(2\pi\ell_{s})^{3}}\,\frac{x_{+}^{2} \big{(}3x_{+}^{3}+2\sigma x_{+}+3(3t+2)\big{)}}{t-x_{+}^{3}}\,,\] (C.31) \[N_{+} =\frac{4\pi^{2}L\,f_{0}}{9(2\pi\ell_{s})^{3}}\,\frac{9x_{+}^{8}+5 \sigma x_{+}^{6}+18(t+1)x_{+}^{5}-2\sigma(t+1)x_{+}^{3}-9(t^{2}+3t+1)x_{+}^{2} -\sigma t}{(1-x_{+}^{3})(t-x_{+}^{3})}\,,\]
where we made use of the relation
\[\frac{1}{2\pi}\int_{S_{i}^{2}}\mathcal{R}_{i}=\chi(S_{i}^{2})=2\,.\] (C.32)
As before, we will restrict to configurations with \(\sigma=0\), in which case the equation \(q(x)=0\) can be solved analytically, giving
\[x^{3}=-(t+1)\pm\sqrt{\frac{(t+2)(2t+1)}{2}}\quad\implies\quad x_{+}=\bigg{(} \sqrt{\frac{(t+2)(2t+1)}{2}}-(t+1)\bigg{)}^{1/3}\,.\] (C.33)
When \(\sigma\) vanishes \(N_{+}=-2(N_{1}+N_{2})\); therefore we will focus exclusively on the quantization of the fluxes \(N_{1}\) and \(N_{2}\), since the quantization of \(N_{+}\) follows immediately. Setting \(\sigma=0\), the fluxes simplify to
\[N_{1}=\frac{\pi^{2}L\,f_{0}}{(2\pi\ell_{s})^{3}}\,\frac{x_{+}^{2}(x_{+}^{3}+2t +3)}{1-x_{+}^{3}}\,,\qquad N_{2}=\frac{\pi^{2}L\,f_{0}}{(2\pi\ell_{s})^{3}}\, \frac{x_{+}^{2}(x_{+}^{3}+3t+2)}{t-x_{+}^{3}}\,,\] (C.34)
and taking their ratio we can immediately determine \(t\)
\[t=\frac{\big{[}\sqrt{9N_{1}^{2}+14N_{1}N_{2}+9N_{2}^{2}}\pm 3(N_{1}-N_{2})\big{]}^{ 2}}{32N_{1}N_{2}}\,.\] (C.35)
Since \(t\) needs to be positive, \(N_{1}\) and \(N_{2}\) must have the same sign, _i.e._\(N_{1}N_{2}>0\); for the sake of simplicity, we will take both of them positive. Taking the product of the fluxes (C.34) and making use of (C.19) we obtain
\[\begin{split} L^{4}&=(2\pi\ell_{s})^{4}\,\frac{1}{ \pi^{2}n_{0}}\bigg{(}\frac{2}{t}\bigg{)}^{1/2}\bigg{(}\sqrt{\frac{(t+2)(2t+1)} {2}}-(t+1)\bigg{)}^{1/3}(N_{1}N_{2})^{1/2}\,,\\ f_{0}^{2}&=(2\pi\ell_{s})^{4}\,\frac{n_{0}^{1/2}}{ \pi^{3}}\bigg{(}\frac{2}{t}\bigg{)}^{3/4}\bigg{(}\sqrt{\frac{(t+2)(2t+1)}{2}}- (t+1)\bigg{)}^{1/2}(N_{1}N_{2})^{3/4}\,.\end{split}\] (C.36)
The free energy of the AdS\({}_{4}\) solution under examination can be computed by performing the integral (C.21) and takes the general form
\[F=\frac{1}{(2\pi\ell_{s})^{8}}\,\frac{32\pi^{6}L^{4}f_{0}^{2}}{135}\,x_{+}^{2 }\big{(}18x_{+}^{3}+10\sigma x_{+}+45(t+1)\big{)}\,,\] (C.37)
which, once all the ingredients are substituted, becomes
\[\begin{split} F&=\frac{4\sqrt{2}\pi}{5n_{0}^{1/2} }\Big{(}(N_{1}+N_{2})\sqrt{9N_{1}^{2}+14N_{1}N_{2}+9N_{2}^{2}}-(3N_{1}^{2}+2N_ {1}N_{2}+3N_{2}^{2})\Big{)}\\ &\times\sqrt{3(N_{1}+N_{2})-\sqrt{9N_{1}^{2}+14N_{1}N_{2}+9N_{2} ^{2}}}\,.\end{split}\] (C.38)
Parameterizing the fluxes as \(N_{1}=(1+\mathbf{z})N\), \(N_{2}=(1-\mathbf{z})N\), with \(|\mathbf{z}|<1\), we obtain
\[F=\frac{32\pi}{5n_{0}^{1/2}}\,\big{(}\sqrt{8+\mathbf{z}^{2}}-(2+\mathbf{z}^{2} )\big{)}\sqrt{3-\sqrt{8+\mathbf{z}^{2}}}\,N^{5/2}\,,\] (C.39)
which agrees with the first equation in (4.29) with a plus sign. Setting \(\mathbf{z}=0\) we consistently retrieve the result (C.24) specified to the case \(\mathbb{P}^{1}\times\mathbb{P}^{1}\).
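As a final consistency check of this appendix (ours, not part of the original text), the whole chain (C.33)-(C.37) can be evaluated numerically and compared with the extremization result (C.39). The helper below, written in Python with SymPy, uses an illustrative function name of our own choosing; the two printed numbers should coincide for any admissible pair of fluxes:

```python
import sympy as sp

def check_S2xS2_free_energy(N1, N2, n0):
    """Evaluate the sigma = 0 supergravity free energy of appendix C.2 and the
    extremization result (C.39); the two returned numbers should coincide."""
    N1, N2, n0 = map(sp.Rational, (N1, N2, n0))
    # t from the flux ratio (C.35) (upper sign) and x_+ from (C.33)
    t  = (sp.sqrt(9*N1**2 + 14*N1*N2 + 9*N2**2) + 3*(N1 - N2))**2/(32*N1*N2)
    xp = (sp.sqrt((t + 2)*(2*t + 1)/2) - (t + 1))**sp.Rational(1, 3)
    # consistency of (C.34) with (C.36): pref = pi^2 L f_0 / (2 pi l_s)^3
    pref = sp.sqrt(2/t)*xp*sp.sqrt(N1*N2)
    assert abs(sp.N(pref*xp**2*(xp**3 + 2*t + 3)/(1 - xp**3) - N1, 30)) < 1e-20
    assert abs(sp.N(pref*xp**2*(xp**3 + 3*t + 2)/(t - xp**3) - N2, 30)) < 1e-20
    # free energy (C.37) at sigma = 0, with L^4 f_0^2 taken from (C.36)
    L4f02 = (2/t)**sp.Rational(5, 4)*xp**sp.Rational(5, 2) \
            *(N1*N2)**sp.Rational(5, 4)/(sp.pi**5*sp.sqrt(n0))
    F_sugra = 32*sp.pi**6/135*L4f02*xp**2*(18*xp**3 + 45*(t + 1))
    # gravitational block result (C.39)
    N, z = (N1 + N2)/2, (N1 - N2)/(N1 + N2)
    s = sp.sqrt(8 + z**2)
    F_blocks = 32*sp.pi/(5*sp.sqrt(n0))*(s - (2 + z**2))*sp.sqrt(3 - s)*N**sp.Rational(5, 2)
    return sp.N(F_sugra, 15), sp.N(F_blocks, 15)

print(check_S2xS2_free_energy(2, 1, 1))   # the two printed numbers should agree
```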
|
2303.18057 | Critical dynamics of long range models on Dynamical Lévy Lattices | We investigate critical equilibrium and out of equilibrium properties of a
ferromagnetic Ising model in one and two dimension in the presence of long
range interactions, $J_{ij}\propto r^{-(d+\sigma)}$. We implement a novel local
dynamics on a dynamical L\'evy lattice, that correctly reproduces the static
critical exponents known in the literature, as a function of the interaction
parameter $\sigma$. Due to its locality the algorithm can be applied to
investigate dynamical properties, of both discrete and continuous long range
models. We consider the relaxation time at the critical temperature and we
measure the dynamical exponent $z$ as a function of the decay parameter
$\sigma$, highlighting that the onset of short range regime for the dynamical
critical properties appears to occur at a value of $\sigma$ which differs from
the equilibrium one. | Riccardo Aiudi, Raffaella Burioni, Alessandro Vezzani | 2023-03-31T13:41:59Z | http://arxiv.org/abs/2303.18057v2 | # Critical dynamics of long range models on Dynamical Levy Lattices
###### Abstract
We investigate critical equilibrium and out of equilibrium properties of a ferromagnetic Ising model in one and two dimension in the presence of long range interactions, \(J_{ij}\propto r^{-(d+\sigma)}\). We implement a novel local dynamics on a _dynamical_ Levy lattice, that correctly reproduces the static critical exponents known in the literature, as a function of the interaction parameter \(\sigma\). Due to its locality the algorithm can be applied to investigate dynamical properties, of both discrete and continuous long range models. We consider the relaxation time at the critical temperature and we measure the dynamical exponent \(z\) as a function of the decay parameter \(\sigma\), highlighting that the onset of short range regime for the dynamical critical properties appears to occur at a value of \(\sigma\) which differs from the equilibrium one.
pacs:
## I Introduction
Systems with non-local long range interactions are known to give rise to interesting physics in classical [1] and quantum [2] many-body regimes, both at equilibrium and out-of equilibrium. Long range interactions can induce spontaneous symmetry breaking even at low dimensions [3] and, at continuous phase transitions, a sufficiently slow decay can modify the universality classes, resulting in critical exponents which depends on the interaction decay [4]. Out of equilibrium, long range interactions modify the dynamical exponents in coarsening phenomena, [5; 6; 7; 8], also giving rise to non trivial metastable states that can affect the dynamics [9; 10].
A wide interest in the field has been devoted to the investigation of magnetic models on lattices in the presence of long range interactions of the type \(J_{ij}\propto r^{-(d+\sigma)}\). The accepted description of the critical properties in the ferromagnetic case was given by Sak [11]. For \(\sigma<d/2\), the critical behavior is mean field, while in the so called "long range" regime, \(d/2<\sigma<2-\eta_{SR}\), the critical exponents depend non-trivially on \(\sigma\) (\(\eta_{SR}\) is the exponent of the critical correlation function in the short range model). For large enough \(\sigma>2-\eta_{SR}\), the short range behavior is recovered.
In this class of models, studies are very often based on numerical approaches. In _physical_ long range models, interactions involve all degrees of freedom since all pairs of sites interact, and thus they form a fully connected model on a complete graph with weighted links. This implies that the number of interactions scales as \(O(N^{2})\), requiring large resources for simulations. In recent years, several solutions to this problem have been proposed. Cluster algorithms [12; 13] can simulate long range models with a computational cost of \(O(N\log N)\) or \(O(N)\) and reduce the relaxation time, so that they are not influenced by critical slowing down, at the price of a non-local dynamical evolution. On the other hand, the kinetic Monte Carlo method [7; 8; 9; 10] provides an effective tool for simulating a local dynamics, but only at very low temperatures.
Another interesting solution is Levy lattices [14; 15; 16; 17]. These are random diluted graphs with interactions between pairs which are constant and occur with probability \(\propto r^{-(d+\sigma)}\), with the total number of interactions being \(O(N)\). Levy lattices drastically reduce the computational cost while keeping a local dynamics; however, an average over different realizations is required. In one dimension, Levy lattices seem to fall into a different universality class than their fully connected counterparts, due to long range correlations induced by the disorder in the random lattice realization [18], while in two dimensions consistent results have been obtained for the XY model [17; 19].
In this paper, we introduce an alternative local dynamics for the long range model based on a _dynamical_ Levy lattice, that is, a dynamical sampling at each time step of the long range interacting model, in the spirit of the q-Ising model [20; 21]. In a nutshell, each spin interacts, with a constant coupling J, only with q neighbours randomly drawn from the corresponding long range probability distribution. Unlike Levy lattices, the underlying graph is not fixed before the dynamics takes place, but evolves dynamically with the system and is built during the simulation. The system can be thought of as living on a temporal Levy graph [22; 23; 24], in which at each time step the underlying structure evolves and is reshuffled. As a result, the long range correlations char
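To make the local dynamics concrete, the following minimal sketch implements one possible version of it for a one-dimensional Ising chain: at every spin update, q neighbour distances are drawn from \(P(r)\propto r^{-(1+\sigma)}\) by inverse-transform sampling and a Metropolis acceptance rule with coupling J is applied. The value of q, the Metropolis rule, and the boundary handling are illustrative assumptions rather than the exact scheme used in the paper.

```python
import numpy as np

def draw_distances(q, sigma, L, rng):
    # Sample q distances r >= 1 from P(r) ~ r^-(1+sigma) via inverse-transform
    # sampling of a Pareto(1, sigma) distribution, then round to lattice distances.
    u = rng.random(q)
    r = (1.0 - u) ** (-1.0 / sigma)
    return np.minimum(np.round(r).astype(int), L // 2)

def metropolis_sweep(spins, J, T, q, sigma, rng):
    """One Monte Carlo sweep of the dynamical Levy lattice dynamics (1D chain)."""
    L = spins.size
    for _ in range(L):
        i = rng.integers(L)
        r = draw_distances(q, sigma, L, rng)
        signs = rng.choice([-1, 1], size=q)     # neighbour to the left or to the right
        neigh = (i + signs * r) % L             # periodic boundary conditions
        dE = 2.0 * J * spins[i] * spins[neigh].sum()
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i] *= -1
    return spins

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=1024)
for _ in range(100):
    metropolis_sweep(spins, J=1.0, T=2.0, q=4, sigma=0.8, rng=rng)
print("magnetization:", spins.mean())
```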
2309.12247 | Bad Actor, Good Advisor: Exploring the Role of Large Language Models in
Fake News Detection | Detecting fake news requires both a delicate sense of diverse clues and a
profound understanding of the real-world background, which remains challenging
for detectors based on small language models (SLMs) due to their knowledge and
capability limitations. Recent advances in large language models (LLMs) have
shown remarkable performance in various tasks, but whether and how LLMs could
help with fake news detection remains underexplored. In this paper, we
investigate the potential of LLMs in fake news detection. First, we conduct an
empirical study and find that a sophisticated LLM such as GPT 3.5 could
generally expose fake news and provide desirable multi-perspective rationales
but still underperforms the basic SLM, fine-tuned BERT. Our subsequent analysis
attributes such a gap to the LLM's inability to select and integrate rationales
properly to conclude. Based on these findings, we propose that current LLMs may
not substitute fine-tuned SLMs in fake news detection but can be a good advisor
for SLMs by providing multi-perspective instructive rationales. To instantiate
this proposal, we design an adaptive rationale guidance network for fake news
detection (ARG), in which SLMs selectively acquire insights on news analysis
from the LLMs' rationales. We further derive a rationale-free version of ARG by
distillation, namely ARG-D, which serves cost-sensitive scenarios without
querying LLMs. Experiments on two real-world datasets demonstrate that ARG and
ARG-D outperform three types of baseline methods, including SLM-based,
LLM-based, and combinations of small and large language models. | Beizhe Hu, Qiang Sheng, Juan Cao, Yuhui Shi, Yang Li, Danding Wang, Peng Qi | 2023-09-21T16:47:30Z | http://arxiv.org/abs/2309.12247v2 | # _Bad Actor, Good Advisor: Exploring the Role of Large Language Models in Fake News Detection_
###### Abstract
Detecting fake news requires both a delicate sense of diverse clues and a profound understanding of the real-world background, which remains challenging for detectors based on small language models (SLMs) due to their knowledge and capability limitations. Recent advances in large language models (LLMs) have shown remarkable performance in various tasks, but whether and how LLMs could help with fake news detection remains under-explored. In this paper, we investigate the potential of LLMs in fake news detection. First, we conduct an empirical study and find that a sophisticated LLM such as GPT 3.5 could generally expose fake news and provide desirable multi-perspective rationales but still underperforms the basic SLM, fine-tuned BERT. Our subsequent analysis attributes such a gap to the LLM's inability to select and integrate rationales properly to conclude. Based on these findings, we propose that current LLMs may not substitute fine-tuned SLMs in fake news detection but can be a good advisor for SLMs by providing multi-perspective instructive rationales. To instantiate this proposal, we design an adaptive rationale guidance network for fake news detection (ARG), in which SLMs selectively acquire insights on news analysis from the LLMs' rationales. We further derive a rationale-free version of ARG by distillation, namely ARG-D, which serves cost-sensitive scenarios without querying LLMs. Experiments on two real-world datasets demonstrate that ARG and ARG-D outperform three types of baseline methods, including SLM-based, LLM-based, and combinations of small and large language models.
## 1 Introduction
The wide and fast spread of fake news online has posed real-world threats in critical domains like politics Fisher et al. (2016), economy (CHEQ, 2019), and public health Naeem and Bhatti (2020). Among the countermeasures to combat this issue, _automatic fake news detection_, which aims at distinguishing inaccurate and intentionally misleading news items from others automatically, has been a promising solution in practice Shu et al. (2017); Roth (2022).
Though much progress has been made Hu et al. (2022), understanding and characterizing fake news is still challenging for current models. This is caused by the complexity of the news-faking process: Fake news creators might manipulate any part of the news, using diverse writing strategies and driven by inscrutable aims. Therefore, to maintain both effectiveness and universality for fake news detection, an ideal method is required to have: 1) a delicate sense of diverse clues (_e.g._, style, facts, commonsense); and 2) a profound understanding of the real-world background.
Recent methods Zhang et al. (2021); Kaliyar et al. (2021); Zhu et al. (2022); Mosallanezhad et al. (2022); Hu et al. (2023) generally exploit pre-trained **small language models (SLMs)1** like BERT
Figure 1: Illustration of the role of large language models (LLMs) in fake news detection. In this case, (a) the LLM fails to output correct judgment of news veracity but (b) helps the small language model (SLM) judge correctly by providing informative rationales.
(Devlin et al., 2019) and RoBERTa Liu et al. (2019) to understand news content and provide fundamental representation, plus optional social contexts Shu et al. (2019); Cui et al. (2022), knowledge resources Popat et al. (2018); Hu et al. (2022), or news environment Sheng et al. (2022) as supplements. SLMs do bring improvements, but their knowledge and capability limitations also compromise further enhancement of fake news detectors. For example, BERT was pre-trained on text corpus including Wikipedia and BookCorpus Devlin et al. (2019) and thus struggled to handle news items that require knowledge not included Sheng et al. (2021).
Footnote 1: [https://github.com/ICTMCG/ARG](https://github.com/ICTMCG/ARG)
As a new alternative to SLMs, **large language models (LLMs)**OpenAI (2022); Anthropic (2023); Touvron et al. (2023), which are usually trained on the larger-scale corpus and aligned with human preferences, have shown impressive emergent abilities on various tasks Wei et al. (2022) and are considered promising as general task solvers Ma et al. (2023). However, the potential of LLMs in fake news detection remains underexplored:
* Can LLMs help detect fake news with their internal knowledge and capability?
* What solution should we adopt to obtain better performance using LLMs?
To answer these two questions, we first conduct a deep investigation of the effective role of LLMs in fake news detection and attempt to provide a practical LLM-involved solution. Unlike contemporary works Pelrine et al. (2023); Caramancion (2023) which only simply prompt LLMs to provide predictions with the task instruction, we conduct a detailed empirical study to mine LLMs' potential. Specifically, we use four typical prompting approaches (zero-shot/few-shot vanilla/chain-of-thought prompting) to ask the LLM to make veracity judgments of given news items (Figure 1(a)) and find that even the best-performing LLM-based method still underperforms task-specific fine-tuned SLMs. We then perform an analysis of the LLM-generated explanatory rationales and find that the LLM could provide reasonable and informative rationales from several perspectives. By subsequently inducing the LLM with perspective-specific prompts and performing rule-based ensembles of judgments, we find that rationales indeed benefit fake news detection, and attribute the unsatisfying performance to the LLM's inability to select and integrate rationales properly to conclude.
Based on these findings, we propose that the current LLM may not be a good substitute for the well-fine-tuned SLM but could serve as a good advisor by providing instructive rationales, as presented in Figure 1(b). To instantiate our proposal, we design the adaptive rationale guidance (ARG) network for fake news detection, which bridges the small and large LMs by selectively injecting new insight about news analysis from the large LM's rationales to the small LM. The ARG further derives the rationale-free ARG-D via knowledge distillation for cost-sensitive scenarios with no need to inquire LLMs. Experiments on two real-world datasets show that both ARG and ARG-D outperform existing SLM-only, LLM-only, and combination methods. Our contributions are as follows:
* **Detailed investigation:** We investigate the effective role of LLMs in fake news detection and find the LLM is bad at veracity judgment but good at analyzing contents;
* **Novel and practical solution:** We design a novel ARG network and its distilled version ARG-D that complements small and large LMs by selectively acquiring insights from LLM-generated rationales for SLMs, which has shown superiority based on extensive experiments.
* **Useful resource:** We construct a rationale collection from GPT-3.5 for fake news detection in two languages (Chinese and English) and will make it publicly available to facilitate further research.2 Footnote 2: [https://github.com/ICTMCG/ARG](https://github.com/ICTMCG/ARG)
## 2 Is the LLM a Good Detector?
In this section, we evaluate the performance of the representative LLM, _i.e._, GPT-3.5 in fake news detection to reveal its judgment capability. We exploit
| # | Chinese (Train) | Chinese (Val) | Chinese (Test) | English (Train) | English (Val) | English (Test) |
| --- | --- | --- | --- | --- | --- | --- |
| Real | 2,331 | 1,172 | 1,137 | 2,878 | 1,030 | 1,024 |
| Fake | 2,873 | 779 | 814 | 1,006 | 244 | 234 |
| Total | 5,204 | 1,951 | 1,951 | 3,884 | 1,274 | 1,258 |

Table 1: Statistics of the fake news detection datasets.
four typical prompting approaches and perform a comparison with the SLM (here, BERT) fine-tuned on this task. Formally, given a news item \(x\), the model aims to predict whether \(x\) is fake or not.
### Experimental Settings
Dataset We employ the Chinese dataset Weibo21 (Nan et al., 2021) and the English dataset GossipCop (Shu et al., 2020) for evaluation. Following existing works (Zhu et al., 2022; Mu et al., 2023), we preprocess the datasets with deduplication and a temporal data split to avoid possible performance overrating caused by data leakage for the SLM. Table 1 presents the dataset statistics.
Large Language Model We evaluate GPT-3.5-turbo, the LLM developed by OpenAI and supporting the popular chatbot ChatGPT (OpenAI, 2022), due to its representativeness and convenient calling. The large scale of parameters makes task-specific fine-tuning almost impossible for LLMs, so we use the prompt learning paradigm, where an LLM learns tasks given prompts containing instructions or few-shot demonstrations (Liu et al., 2023). In detail, we utilize the following four typical prompting approaches to elicit the potential of the LLM in fake news detection (Figure 2); a sketch of such prompt templates is given after the list:
* **Zero-Shot Prompting** constructs prompt only containing the task description and the given news. To make the response more proficient and decrease the refusal ratio, we optionally adopt the role-playing technique when describing our task (Liu et al., 2023; Ramlochan, 2023).
* **Zero-Shot CoT Prompting**(Kojima et al., 2022) is a simple and straightforward chain-of-thought (CoT) prompting approach to encourage the LLM to reason. In addition to the elements in zero-shot prompting, it adds an eliciting sentence such as _"Let's think step by step."_
* **Few-Shot Prompting**(Brown et al., 2020) provides task-specific prompts and several news-label examples as demonstrations. After preliminary tests of {2,4,8}-shot settings, we choose 4-shot prompting which includes two real and two fake samples.
* **Few-Shot CoT Prompting**(Wei et al., 2022) not only provides news-label examples but also demonstrates reasoning steps with previously written rationales. Here, we obtain the provided rationale demonstrations from the correct and reasonable outputs of zero-shot CoT prompting.
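A minimal sketch of how the four prompt styles above could be assembled is shown below. The wording of the instructions, the role-playing phrasing, and the demonstration texts are illustrative assumptions, not the exact templates used by the authors.

```python
ZERO_SHOT = ("You are an experienced fact-checker. Determine whether the following "
             "news item is real or fake. Answer with 'real' or 'fake'.\nNews: {news}")

ZERO_SHOT_COT = ZERO_SHOT + "\nLet's think step by step."

def few_shot_prompt(news, demos, with_cot=False):
    """demos: (news, label) tuples, or (news, rationale, label) tuples when with_cot=True."""
    lines = ["You are an experienced fact-checker. Determine whether each news item is real or fake."]
    for demo in demos:
        if with_cot:
            d_news, d_rationale, d_label = demo
            lines.append(f"News: {d_news}\nReasoning: {d_rationale}\nAnswer: {d_label}")
        else:
            d_news, d_label = demo
            lines.append(f"News: {d_news}\nAnswer: {d_label}")
    lines.append(f"News: {news}\nAnswer:")
    return "\n\n".join(lines)

# A 4-shot prompt with two real and two fake demonstrations (invented examples).
demos = [("Local library extends weekend opening hours.", "real"),
         ("Celebrity X secretly bought the moon.", "fake"),
         ("City council approves a new bus line.", "real"),
         ("Eating chocolate cures all known viruses.", "fake")]
print(ZERO_SHOT_COT.format(news="A new species of orchid was found in the city park."))
print(few_shot_prompt("A new species of orchid was found in the city park.", demos))
```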
Small Language Model We adopt the pre-trained small language model BERT (Devlin et al., 2019) as the representative, given its wide use in this task (Kaliyar et al., 2021; Zhu et al., 2022; Sheng et al., 2022). Specifically, we limit the maximum length of the text to 170 tokens and use _chinese-bert-wwm-ext_ and _bert-base-uncased_ from the Transformers package (Wolf et al., 2020) for the Chinese and English evaluation, respectively. We use Adam (Kingma and Ba, 2014) as the optimizer and do a grid search for the optimal learning rate (2e-5/2e-4 for Chinese/English). We report the testing result on the best-validation checkpoint.
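A minimal sketch of this SLM setup is given below, using the model names and hyperparameters mentioned in the text; the training loop, batching, and evaluation logic are simplified assumptions rather than the authors' code.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"   # "hfl/chinese-bert-wwm-ext" for the Chinese setting
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)   # 2e-5 is used for Chinese

def train_step(texts, labels):
    # Texts are truncated to 170 tokens, as described in the paper.
    batch = tokenizer(texts, truncation=True, max_length=170,
                      padding=True, return_tensors="pt")
    out = model(**batch, labels=torch.tensor(labels))
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()

loss = train_step(["A suspicious claim about a miracle cure."], [1])
print(f"loss: {loss:.4f}")
```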
### Comparison between Small and Large LMs
Table 2 presents the performance of GPT-3.5-turbo with four prompting approaches and the fine-tuned BERT on the two datasets. We observe that: **1)**
| Model | Usage | Chinese | English |
| --- | --- | --- | --- |
| GPT-3.5-turbo | Zero-Shot | 0.676 | 0.568 |
| GPT-3.5-turbo | Zero-Shot CoT | 0.677 | 0.666 |
| GPT-3.5-turbo | Few-Shot | _0.725_ | 0.697 |
| GPT-3.5-turbo | Few-Shot CoT | 0.681 | _0.702_ |
| BERT | Fine-tuning | **0.761** (+5.0%) | **0.774** (+10.3%) |

Table 2: Performance in macro F1 of the large and small LMs. The best two results per column are **bolded** and _italicized_, respectively. The relative increases over the second-best results are shown in brackets.
Figure 2: Illustration of prompting approaches for LLMs.
Though the LLM is generally believed to be powerful, **the LLM underperforms the fine-tuned SLM using all four prompting approaches**. The SLM has a relative increase of 5.0%\(\sim\)12.6% in Chinese and 10.3%\(\sim\)36.3% in English over the LLM, indicating that the LLM lacks task-specific knowledge while the SLM learns during fine-tuning.
**2)** Few-shot versions outperform zero-shot ones, suggesting the importance of task samples. However, introducing several samples only narrows the gap with the SLM but does not lead to surpassing it.
**3)** CoT prompting brings additional performance gain in general, especially under the zero-shot setting on the English dataset (+9.8%). However, we also observe some cases where CoT leads to a decrease. This indicates that effective use of rationales may require more careful design.
Overall, given the LLM's unsatisfying performance and higher inference costs than the SLM, the current LLM has not been a "good enough" detector to substitute task-specific SLMs in fake news detection.
### Analysis on the Rationales from the LLM
Though the LLM is bad at news veracity judgment, we also notice that the rationales generated through zero-shot CoT prompting exhibit a unique multi-perspective analytical capability that is challenging and rare for SLMs. For further exploration, we sample 200 samples from each of the two datasets and manually categorize them according to the perspectives from which the LLM performs the news analysis. Statistical results by perspectives and cases are presented in Table 3.3 We see that: **1) The LLM is capable of generating human-like rationales on news content from various perspectives**, such as textual description, commonsense, and factuality, which meets the requirement of the delicate sense of diverse clues and profound understanding of the real-world background in fake news detection. **2)** The detection performance on the subset using certain perspectives is higher than the zero-shot CoT result on the full testing set, and some even outperform few-shot approaches. This indicates the potential of analysis by perspectives, though the coverage is moderate. **3)** The analysis from the perspective of factuality leads to the performance lower than average, indicating the unreliability of using the LLM for factuality analysis based on its internal memorization. We speculate this is caused by the hallucination issue [11, 12].
Footnote 3: Note that a sample may be analyzed from multiple perspectives and thus the sum of _proportions_ might be bigger than 100%.
We further investigate the LLM's performance when asked to perform analysis from a specific perspective on the full testing set (_i.e._, 100% coverage).4 From the first group in Table 4, we see that the LLM's judgment with single-perspective
| Model | Usage | Chinese | English |
| --- | --- | --- | --- |
| GPT-3.5-turbo | Zero-Shot CoT | 0.677 | 0.666 |
| GPT-3.5-turbo | from Perspective TD | 0.674 | 0.611 |
| GPT-3.5-turbo | from Perspective CS | 0.676 | 0.698 |
| BERT | Fine-tuning | 0.761 | 0.774 |
| Ensemble | Majority Voting | 0.750 | 0.753 |
| Ensemble | Oracle Voting | 0.907 | 0.876 |

Table 4: Performance of the LLM using zero-shot CoT with perspective specified and other compared models. TD: Textual description; CS: Commonsense.
| Perspective | Chinese Prop. | Chinese macF1 | English Prop. | English macF1 |
| --- | --- | --- | --- | --- |
| Textual Description | 68% | 0.746 | 59% | 0.629 |
| Commonsense | 69% | 0.745 | 56% | 0.642 |
| Factuality | 18% | 0.597 | 46% | 0.592 |
| Others | 8% | 0.750 | 17% | 0.694 |

Case (Textual Description). **News:** Everyone! Don’t buy cherries anymore: Cherries of this year are infested with maggots, and nearly 100% are affected. **LLM Rationale:** ...The tone of the news is extremely urgent, seemingly trying to spread panic and anxiety. **Prediction:** Fake. **Ground Truth:** Fake.

Case (Commonsense). **News:** Huang, the chief of Du’an Civil Affairs Bureau, gets subsistence allowances of 509 citizens, owns nine properties, and has six wives... **LLM Rationale:** ...The news content is extremely outrageous...Such a situation is incredibly rare in reality and even could be thought impossible. **Prediction:** Fake. **Ground Truth:** Fake.

Case (Factuality). **News:** The 18th National Congress has approved that individuals who are at least 18 years old are now eligible to marry... **LLM Rationale:** First, the claim that Chinese individuals at least 18 years old can register their marriage is real, as this is stipulated by Chinese law... **Prediction:** Real. **Ground Truth:** Fake.

Table 3: Analysis of different perspectives of LLM’s rationales in the sample set, including the data ratio, LLM’s performance, and cases. Prop.: Proportion.
analysis elicited is still promising. Compared with the comprehensive zero-shot CoT setting, the single-perspective-based LLM performs comparably on the Chinese dataset and is better on the English dataset (for the commonsense perspective case). The results showcase that the internal mechanism of the LLM for integrating the rationales from diverse perspectives is ineffective for fake news detection, limiting the full use of rationales. In this case, combining the small and large LMs to complement each other is a promising solution: the former could benefit from the analytical capability of the latter, while the latter could be enhanced by task-specific knowledge from the former.
To exhibit the advantages of this solution, we apply majority voting and oracle voting (assuming the most ideal situation where we trust the correctly judged model for each sample, if any) among the two single-perspective-based LLMs and the BERT. Results show that we are likely to gain a performance better than any LLM-/SLM-only methods mentioned before if we could adaptively combine their advantages, _i.e._, the flexible task-specific learning of the SLM and the informative rationale generated by the LLM. That is, **the LLM could be possibly a good advisor for the SLM by providing rationales, ultimately improving the performance of fake news detection.**
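For reference, the two ensemble baselines in Table 4 can be sketched as simple label-level votes; here the three prediction lists are assumed to be binary labels already extracted from the two single-perspective LLM responses and the BERT output.

```python
def majority_voting(preds_td, preds_cs, preds_bert):
    """Majority vote over the two single-perspective LLM judgments and BERT."""
    return [int(a + b + c >= 2) for a, b, c in zip(preds_td, preds_cs, preds_bert)]

def oracle_voting(preds_td, preds_cs, preds_bert, labels):
    """Ideal upper bound: a sample counts as correct if any of the models judged it correctly."""
    return [y if y in (a, b, c) else a          # fall back to one model when all are wrong
            for a, b, c, y in zip(preds_td, preds_cs, preds_bert, labels)]

print(majority_voting([1, 0, 1], [1, 1, 0], [0, 1, 1]))  # [1, 1, 1]
```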
## 3 ARG: Adaptive Rationale Guidance Network for Fake News Detection
Based on the above findings and discussion, we propose the adaptive rationale guidance (ARG) network for fake news detection. Figure 3 overviews the ARG and its rationale-free version ARG-D, for cost-sensitive scenarios. The objective of ARG is to empower small fake news detectors with the ability to adaptively select useful rationales as references for final judgments. Given a news item \(x\) and its corresponding LLM-generated rationales \(r_{t}\) (textual description) and \(r_{c}\) (commonsense), the ARG encodes the inputs using the SLM at first (Figure 3(a)). Subsequently, it builds news-rationale collaboration via predicting the LLM's judgment through the rationale, enriching news-rationale feature interaction, and evaluating rationale usefulness (Figure 3(b)). The interactive features are finally aggregated with the news feature \(\mathbf{x}\) for the final judgment of \(x\) being fake or not (Figure 3(c)). ARG-D is derived from the ARG via distillation for scenarios where the LLM is unavailable (Figure 3(d)).
### Representation
We employ two BERT models separately as the news and rationale encoder to obtain semantic representations. For the given news item \(x\) and two corresponding rationales \(r_{t}\) and \(r_{c}\), the representations are \(\mathbf{X}\), \(\mathbf{R_{t}}\), and \(\mathbf{R_{c}}\), respectively.
### News-Rationale Collaboration
The step of news-rationale collaboration aims at providing a rich interaction between news and rationales and learning to adaptively select useful rationales as references, which is at the core of our design. To achieve such an aim, ARG includes three modules, as detailed and exemplified using the textual description rationale branch below:
#### 3.2.1 News-Rationale Interaction
To enable comprehensive information exchange between news and rationales, we introduce a news-rationale interactor with a dual cross-attention mechanism to encourage feature interactions. The cross-attention can be described as:
\[\mathrm{CA}(\mathbf{Q},\mathbf{K},\mathbf{V})=\mathrm{softmax}\left(\mathbf{Q }^{\prime}\cdot\mathbf{K}^{\prime}/\sqrt{d}\right)\mathbf{V}^{\prime}, \tag{1}\]
where \(\mathbf{Q}^{\prime}=\mathbf{W_{Q}Q}\), \(\mathbf{K}^{\prime}=\mathbf{W_{K}K}\), and \(\mathbf{V}^{\prime}=\mathbf{W_{V}V}\). \(d\) is the dimensionality. Given representations of the news \(\mathbf{X}\) and the rationale \(\mathbf{R_{t}}\), the process is:
\[\mathbf{f_{t\rightarrow x}}=\mathrm{AvgPool}\left(\mathrm{CA}(\mathbf{R_{t}}, \mathbf{X},\mathbf{X})\right), \tag{2}\]
\[\mathbf{f_{x\rightarrow t}}=\mathrm{AvgPool}\left(\mathrm{CA}(\mathbf{X}, \mathbf{R_{t}},\mathbf{R_{t}})\right), \tag{3}\]
where \(\mathrm{AvgPool}(\cdot)\) is the average pooling over the token representations outputted by cross-attention to obtain one-vector text representation \(\mathbf{f}\).
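A possible PyTorch rendering of Eqs. (1)-(3) is sketched below; the single attention head, the shared dimensionality, and the class names are simplifying assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossAttention(nn.Module):
    """CA(Q, K, V) = softmax(Q'K'^T / sqrt(d)) V' with learned projections (Eq. 1)."""
    def __init__(self, dim):
        super().__init__()
        self.wq, self.wk, self.wv = (nn.Linear(dim, dim) for _ in range(3))
        self.dim = dim

    def forward(self, q, k, v):
        qp, kp, vp = self.wq(q), self.wk(k), self.wv(v)
        attn = F.softmax(qp @ kp.transpose(-2, -1) / self.dim ** 0.5, dim=-1)
        return attn @ vp

class NewsRationaleInteractor(nn.Module):
    """Dual cross-attention followed by average pooling (Eqs. 2-3)."""
    def __init__(self, dim):
        super().__init__()
        self.ca_r2x = CrossAttention(dim)   # rationale queries attend to the news tokens
        self.ca_x2r = CrossAttention(dim)   # news queries attend to the rationale tokens

    def forward(self, news_tokens, rationale_tokens):
        f_t2x = self.ca_r2x(rationale_tokens, news_tokens, news_tokens).mean(dim=1)
        f_x2t = self.ca_x2r(news_tokens, rationale_tokens, rationale_tokens).mean(dim=1)
        return f_t2x, f_x2t

interactor = NewsRationaleInteractor(dim=768)
x = torch.randn(2, 170, 768)   # news token representations X
r = torch.randn(2, 120, 768)   # rationale token representations R_t
f_t2x, f_x2t = interactor(x, r)
print(f_t2x.shape, f_x2t.shape)
```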
#### 3.2.2 LLM Judgement Prediction
Understanding the judgment hinted by the given rationale is a prerequisite for fully exploiting the information behind the rationale. To this end, we construct the LLM judgment prediction task, whose requirement is to predict the LLM judgment of the news veracity according to the given rationale. We expect this to deepen the understanding of the rationale texts. For the textual description rationale branch, we feed its representation \(\mathbf{R_{t}}\) into the LLM judgment predictor, which is parametrized using a
multi-layer perceptron (MLP)5:
Footnote 5: For brevity, we omit the subscripts of all independently parametrized MLPs.
\[\hat{m}_{t}=\mathrm{sigmoid}(\mathrm{MLP}(\mathbf{R_{t}})), \tag{4}\]
\[L_{pt}=\mathrm{CE}(\hat{m}_{t},m_{t}), \tag{5}\]
where \(m_{t}\) and \(\hat{m}_{t}\) are respectively the LLM's actual judgment (extracted from the response) and its prediction. The loss \(L_{pt}\) is a cross-entropy loss \(\mathrm{CE}(\hat{y},y)=-y\log\hat{y}-(1-y)\log(1-\hat{y})\). The case is similar for commonsense rationale \(\mathbf{R_{c}}\).
#### 3.2.3 Rationale Usefulness Evaluation
The usefulness of rationales from different perspectives varies across different news items and improper integration may lead to performance degradation. To enable the model to adaptively select appropriate rationales, we devise a rationale usefulness evaluation process, in which we assess the contributions of different rationales and adjust their weights for subsequent veracity prediction. The process comprises two phases, _i.e._, evaluation and reweighting. In the evaluation phase, we input the news-aware rationale vector \(\mathbf{f_{x\to t}}\) into the rationale usefulness evaluator (parameterized by an MLP) to predict its usefulness \(u_{t}\). Following the assumption that rationales leading to correct judgments are more useful, we regard whether the judgment made along with the rationale is correct as the usefulness label.
\[\hat{u}_{t}=\mathrm{sigmoid}(\mathrm{MLP}(\mathbf{f_{x\to t}})), \tag{6}\]
\[L_{et}=\mathrm{CE}(\hat{u}_{t},u_{t}). \tag{7}\]
In the reweighting phase, we input vector \(\mathbf{f_{x\to t}}\) into an MLP to obtain a weight number \(w_{t}\), which is then used to reweight the rationale-aware news vector \(\mathbf{f_{t\to x}}\). The procedure is as follows:
\[\mathbf{f^{\prime}_{t\to x}}=w_{t}\cdot\mathbf{f_{t\to x}}. \tag{8}\]
We also use attentive pooling to transform the representation matrix \(\mathbf{X}\) into a vector \(\mathbf{x}\).
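The LLM judgment predictor (Eqs. 4-5) and the rationale usefulness evaluator with its reweighting step (Eqs. 6-8) can be sketched as follows for one rationale branch; the MLP depths, the pooled rationale input, and the sigmoid-bounded weight are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(dim):
    return nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

class RationaleBranch(nn.Module):
    """Judgment prediction (Eqs. 4-5) plus usefulness evaluation and reweighting (Eqs. 6-8)."""
    def __init__(self, dim):
        super().__init__()
        self.judge_head = mlp(dim)      # predicts the LLM's judgment from the rationale
        self.useful_head = mlp(dim)     # predicts whether the rationale is useful
        self.weight_head = mlp(dim)     # produces the reweighting scalar w_t

    def forward(self, rationale_repr, f_x2t, f_t2x, llm_judgment, usefulness_label):
        m_hat = torch.sigmoid(self.judge_head(rationale_repr)).squeeze(-1)
        u_hat = torch.sigmoid(self.useful_head(f_x2t)).squeeze(-1)
        loss_p = F.binary_cross_entropy(m_hat, llm_judgment)        # Eq. 5
        loss_e = F.binary_cross_entropy(u_hat, usefulness_label)    # Eq. 7
        w = torch.sigmoid(self.weight_head(f_x2t))                  # weight number w_t
        f_t2x_reweighted = w * f_t2x                                 # Eq. 8
        return f_t2x_reweighted, loss_p, loss_e

branch = RationaleBranch(dim=768)
out, lp, le = branch(torch.randn(2, 768), torch.randn(2, 768), torch.randn(2, 768),
                     llm_judgment=torch.tensor([1., 0.]),
                     usefulness_label=torch.tensor([1., 1.]))
print(out.shape, lp.item(), le.item())
```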
### Prediction
Based on the outputs from the last step, we now aggregate the news vector \(\mathbf{x}\) and the rationale-aware news vectors \(\mathbf{f^{\prime}_{t\to x}}\) and \(\mathbf{f^{\prime}_{c\to x}}\) for the final judgment. For news item \(x\) with label \(y\in\{0,1\}\), we aggregate these vectors with different weights:
\[\mathbf{f_{cls}}=w_{x}^{cls}\cdot\mathbf{x}+w_{t}^{cls}\cdot\mathbf{f^{\prime }_{t\to x}}+w_{c}^{cls}\cdot\mathbf{f^{\prime}_{c\to x}}, \tag{9}\]
where \(w_{x}^{cls}\), \(w_{t}^{cls}\) and \(w_{c}^{cls}\) are learnable parameters ranging from 0 to 1. \(\mathbf{f_{cls}}\) is the fusion vector, which is then fed into the MLP classifier for final prediction of news veracity:
\[L_{ce}=\mathrm{CE}(\mathrm{MLP}(f_{cls}),y). \tag{10}\]
Figure 3: Overall architecture of our proposed adaptive rationale guidance (ARG) network and its rationale-free version ARG-D. In the ARG, the news item and LLM rationales are (a) respectively encoded into \(\mathbf{X}\) and \(\mathbf{R_{*}}(*\in\{t,c\})\). Then the small and large LMs collaborate with each other via news-rationale feature interaction, LLM judgment prediction, and rationale usefulness evaluation. The obtained interactive features \(\mathbf{f^{\prime}_{s\to x}}\) (\(*\in\{t,c\}\)). These features are finally aggregated with attentively pooled news feature \(\mathbf{x}\) for the final judgment. In the ARG-D, the news encoder and the attention module are preserved and the output of the rationale-aware feature simulator is supervised by the aggregated feature \(\mathbf{f_{cls}}\) for knowledge distillation.
The total loss function is the weighted sum of the loss terms mentioned above:
\[L=L_{ce}+\beta_{1}L_{et}+\beta_{2}L_{pt}+\beta_{3}L_{ec}+\beta_{4}L_{pc}, \tag{11}\]
where \(\beta_{*}(*\in\{1,2,3,4\})\) are hyperparameters.
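Eqs. (9)-(11) amount to a learned weighted sum of the three vectors followed by a standard classification loss. A sketch is given below; bounding the learnable weights with a sigmoid and the chosen classifier shape are assumptions, and the beta values are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ARGHead(nn.Module):
    """Feature aggregation (Eq. 9) and veracity classification (Eq. 10)."""
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(3))   # w_x, w_t, w_c, kept in (0, 1) via sigmoid
        self.classifier = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, x_vec, f_t2x, f_c2x, labels):
        wx, wt, wc = torch.sigmoid(self.w)
        f_cls = wx * x_vec + wt * f_t2x + wc * f_c2x                  # Eq. 9
        logits = self.classifier(f_cls).squeeze(-1)
        loss_ce = F.binary_cross_entropy_with_logits(logits, labels)  # Eq. 10
        return f_cls, loss_ce

def total_loss(loss_ce, loss_et, loss_pt, loss_ec, loss_pc, betas=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of the loss terms (Eq. 11)."""
    b1, b2, b3, b4 = betas
    return loss_ce + b1 * loss_et + b2 * loss_pt + b3 * loss_ec + b4 * loss_pc

head = ARGHead(dim=768)
f_cls, loss = head(torch.randn(2, 768), torch.randn(2, 768), torch.randn(2, 768),
                   labels=torch.tensor([1., 0.]))
print(f_cls.shape, loss.item())
```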
### Distillation for Rationale-Free Model
The ARG requires sending requests to the LLM for every prediction, which might not be affordable for cost-sensitive scenarios. Therefore, we attempt to build a rationale-free model, namely ARG-D, based on the trained ARG model via knowledge distillation Hinton et al. (2015). The basic idea is to simulate and internalize the knowledge from rationales in a parametric module. As shown in Figure 3(d), we initialize the news encoder and classifier with the corresponding modules in the ARG and train a rationale-aware feature simulator (implemented with a multi-head transformer block) and an attention module to internalize knowledge. Besides the cross-entropy loss \(L_{ce}\), we let the feature \(\mathbf{f_{cls}^{d}}\) imitate \(\mathbf{f_{cls}}\) in the ARG, using the mean squared error loss:
\[L_{kd}=\mathrm{MSE}(\mathbf{f_{cls}},\mathbf{f_{cls}^{d}}). \tag{12}\]
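Only the distillation objective is sketched here: a rationale-free student reproduces the teacher's fused feature with an MSE loss (Eq. 12) alongside the usual classification loss. Mean pooling stands in for the attention module of the paper, and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ARGDStudent(nn.Module):
    """Rationale-free student: a transformer block simulates the rationale-aware feature."""
    def __init__(self, dim=768, heads=4):
        super().__init__()
        self.simulator = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.classifier = nn.Linear(dim, 1)

    def forward(self, news_tokens, teacher_f_cls, labels):
        f_cls_d = self.simulator(news_tokens).mean(dim=1)             # simulated fused feature
        loss_kd = F.mse_loss(f_cls_d, teacher_f_cls)                  # Eq. 12
        logits = self.classifier(f_cls_d).squeeze(-1)
        loss_ce = F.binary_cross_entropy_with_logits(logits, labels)
        return loss_ce + loss_kd

student = ARGDStudent()
loss = student(torch.randn(2, 170, 768), torch.randn(2, 768), torch.tensor([1., 0.]))
print(loss.item())
```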
## 4 Evaluation
### Experimental Settings
Baselines We compare three groups of methods: **G1 (LLM-Only)**: We list the performance of the best-performing setting on each dataset in Table 2, _i.e._, few-shot in Chinese and few-shot CoT in English.
**G2 (SLM-Only)6**: **1) Baseline:** The vanilla BERTbase model whose setting remains consistent with that in Section 2. **2) EANN\({}_{\mathrm{T}}\)Wang et al. (2018):** A model that learns effective signals using auxiliary adversarial training, aiming at removing event-related features as much as possible. We used publication year as the label for the auxiliary task. **3) Publisher-Emo Zhang et al. (2021):** A model that fuses a series of emotional features with textual features for fake news detection. **4) ENDEF Zhu et al. (2022):** A model that removes entity bias via causal learning for better generalization on distribution-shifted fake news data. All methods in this group used the same BERT as the text encoder.
Footnote 6: As this paper focuses on text-based news, we use the text-only variant of the original EANN following Sheng et al. (2021) and the publisher-emotion-only variant in Zhang et al. (2021).
**G3 (LLM+SLM)**: **1) Baseline+Rationale:** It concatenates features from the news encoder and rationale encoder and feeds them into an MLP for prediction. **2) SuperICL Xu et al. (2023):** It exploits the SLM as a plug-in for the in-context learning of the LLM by injecting the prediction and the confidence for each testing sample into the prompt.
Implementation Details We use the same datasets as in Section 2 and keep the setting the same in terms of pre-trained model, learning rate, and optimization method. For the ARG-D network, the parameters of the news encoder and classifier are derived from the ARG model. A four-head transformer block is implemented in the rationale-aware feature simulator. The weights of the loss functions \(L_{et},L_{pt},L_{ec},L_{pc}\) in the ARG and \(L_{kd}\) in the ARG-D are grid searched in (0, 5).
### Performance Comparison and Ablation Study
Table 5 presents the performance of our proposed ARG and its variants and the compared methods. From the results, we observe that: **1)** The ARG outperforms all other compared methods in macro F1, demonstrating its effectiveness. **2)** The rationale-free ARG-D still outperforms all compared methods except ARG and its variants, which shows the positive impact of the distilled knowledge from ARG. **3)** The two compared LLM+SLM methods exhibit different performance. The simple combination of features of news and rationale yields a small yet consistent performance improvement, showing the usefulness of our prompted rationales. SuperICL outperforms the LLM-only method but falls behind the baseline SLM. We speculate that this is due to the complexity of our task, where injecting prediction and confidence of an SLM does not bring sufficient information. **4)** We evaluate three ablation experiment groups to evaluate the effectiveness of different modules in ARG network. From the result, we can see that w/o LLM Judgement Predictor or w/o Rationale Usefulness Evaluator both bring a significant decrease in ARG performance, highlighting the significance of these two structures. Besides, we found that even though ARG w/o Predictor & Evaluator is the weakest one among the variants of ARG, it still outperforms all other methods, which shows the importance of the news-rationale interaction structure we designed.
### Result Analysis
To investigate to which part the additional gain of the ARG(-D) should be attributed, we perform statistical analysis on the additional correctly judged samples of ARG(-D) compared with the vanilla BERT. From Figure 4, we observe that: **1)** The proportions of the overlapping samples between ARG(-D) and the LLM are over 80%, indicating that the ARG(-D) can exploit (and absorb) the valuable knowledge for judgments from the LLM, even though its performance is unsatisfying. **2)** The samples correctly judged by the LLM from both perspectives contribute the most and the related knowledge is more likely to be preserved in distillation (56.6%\(\rightarrow\)64.2% at the inner doughnut), suggesting more diverse rationales may enhance the ARG(-D)'s training. **3)** 14.7% and 18.2% of correct judgments should be attributed to the model itself. We speculate that it produces some kind of "new knowledge" based on the wrong judgments of the given knowledge.
For analysis of success and failure cases and additional analysis, please refer to the technical appendix.
### Cost Analysis in Practice
We showcase a possible model-shifting strategy to balance performance and cost in practical systems. Inspired by Ma et al. (2023), we simulate the situation where we use the more economical ARG-D by default but query the more powerful ARG for part of the data. As presented in Figure 5, by sending only 52% of the data (according to the confidence of ARG-D) to the ARG, we could achieve 0.789 in macro F1, which is close to the performance of fully using the ARG (0.790).
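This model-shifting strategy can be sketched as a confidence-threshold router: low-confidence ARG-D samples are forwarded to the ARG. The threshold value and the use of the maximum class probability as the confidence are assumptions for illustration.

```python
def route_predictions(argd_probs, arg_predict, threshold=0.85):
    """argd_probs: list of (prob_fake, prediction) pairs from the cheaper ARG-D model.
    arg_predict: callable running the more expensive ARG model on one sample index."""
    final, forwarded = [], 0
    for i, (p_fake, pred) in enumerate(argd_probs):
        confidence = max(p_fake, 1.0 - p_fake)
        if confidence >= threshold:
            final.append(pred)              # keep the economical ARG-D judgment
        else:
            final.append(arg_predict(i))    # query the stronger ARG model
            forwarded += 1
    return final, forwarded / len(argd_probs)

preds, ratio = route_predictions([(0.95, 1), (0.55, 1), (0.10, 0)],
                                 arg_predict=lambda i: 0)
print(preds, f"forwarded: {ratio:.0%}")
```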
## 5 Related Work
Fake News Detection Fake news detection methods could be roughly categorized into two groups: social-context-based and content-based methods. Methods in the first group aim at differentiating fake and real news during the diffusion procedure by observing the propagation patterns Zhou and Zafarani (2019), user feedback Min et al. (2022), and social networks Nguyen et al. (2020). The second group focuses on finding hints based on the given content, including text Przybyla (2020) and images Qi et al. (2021), and may require extra assistance from knowledge bases
Figure 4: Statistics of additional correctly judged samples of (a) ARG and (b) ARG-D over the BERT baseline. right\((\cdot)\) denotes samples correctly judged by the method \((\cdot)\). TD/CS: Textual description/commonsense perspective.
| Group | Model | Chinese macF1 | Chinese Acc. | Chinese F1 (real) | Chinese F1 (fake) | English macF1 | English Acc. | English F1 (real) | English F1 (fake) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| G1: LLM-Only | GPT-3.5-turbo | 0.725 | 0.734 | 0.774 | 0.676 | 0.702 | 0.813 | 0.884 | 0.519 |
| G2: SLM-Only | Baseline | 0.761 | 0.762 | 0.780 | 0.741 | 0.774 | 0.869 | 0.920 | 0.628 |
| G2: SLM-Only | EANN_T | 0.768 | 0.769 | 0.784 | 0.752 | 0.775 | 0.868 | 0.920 | 0.630 |
| G2: SLM-Only | Publisher-Emo | 0.755 | 0.757 | 0.779 | 0.730 | 0.783 | 0.871 | 0.921 | 0.645 |
| G2: SLM-Only | ENDEF | 0.768 | 0.769 | 0.779 | 0.758 | 0.777 | 0.878 | 0.927 | 0.626 |
| G3: LLM+SLM | Baseline + Rationale | 0.763 | 0.764 | 0.778 | 0.748 | 0.785 | 0.883 | 0.930 | 0.641 |
| G3: LLM+SLM | SuperICL | 0.757 | 0.759 | 0.779 | 0.734 | 0.736 | 0.864 | 0.920 | 0.551 |
| | **ARG** | **0.790** | **0.792** | 0.811 | 0.770 | **0.801** | 0.889 | 0.933 | 0.668 |
| | ARG (Relative Impr. over Baseline) | (+3.8%) | (+3.9%) | (+4.0%) | (+3.9%) | (+3.5%) | (+2.3%) | (+1.4%) | (+6.4%) |
| | w/o LLM Judgment Predictor | 0.784 | 0.787 | 0.809 | 0.759 | 0.797 | **0.890** | 0.935 | 0.658 |
| | w/o Rationale Usefulness Evaluator | 0.786 | 0.790 | 0.816 | 0.757 | 0.798 | 0.887 | 0.932 | 0.664 |
| | w/o Predictor & Evaluator | 0.773 | 0.776 | 0.797 | 0.750 | 0.793 | 0.882 | 0.928 | 0.658 |
| | **ARG-D** | 0.777 | 0.778 | 0.790 | 0.765 | 0.790 | 0.886 | 0.932 | 0.649 |
| | ARG-D (Relative Impr. over Baseline) | (+2.1%) | (+2.1%) | (+1.3%) | (+3.2%) | (+2.1%) | (+2.0%) | (+1.3%) | (+3.3%) |

Table 5: Performance of the ARG and its variants and the LLM-only, SLM-only, LLM+SLM methods. The best results in macro F1 and accuracy are **bolded**. For GPT-3.5-turbo, the best results in Table 2 are reported.
Popat et al. (2018), news environment (Sheng et al., 2022). Both groups of methods obtain textual representation from small pre-trained models like BERT as a convention but rarely explore its potential for fake news detection. In this paper, we make an attempt by combining large and small LMs and obtain good improvement using only textual content.
LLMs for Natural Language Understanding LLMs, though mostly generative models, also have powerful natural language understanding (NLU) capabilities, especially in the few-shot in-context learning scenarios (Brown et al., 2020). Recent works in this line focus on benchmarking the latest LLM in NLU. Results show that LLMs may not have comprehensive superiority compared with a well-trained small model in some types of NLU tasks (Zhong et al., 2023; Kocon et al., 2023). Our results provide empirical findings in fake news detection.
## 6 Conclusion and Discussion
We investigated whether large LMs help in fake news detection and how to properly utilize their advantages for improving performance. Results show that the large LM (GPT-3.5) underperforms the task-specific small LM (BERT), but could provide informative rationales and complement small LMs in news understanding. Based on these findings, we designed the ARG network to flexibly combine the respective advantages of small and large LMs and developed its rationale-free version ARG-D for cost-sensitive scenarios. Experiments demonstrated the superiority of the ARG and ARG-D.
Discussion. Our findings in fake news detection exemplify the current barrier for LLMs to be competent in applications closely related to the sophisticated real-world background. Though having superior analyzing capability, LLMs may struggle to properly make full use of their internal capability. This suggests that "mining" their potential may require novel prompting techniques and a deeper understanding of their internal mechanisms. We then identified the possibility of combining small and large LMs to gain additional improvement and provided a solution especially suitable for situations where the better-performing models have to "select good to learn" from worse ones. We expect our solution to be extended to other tasks and to foster more effective and cost-friendly use of LLMs in the future.
Limitations. We identify the following limitations: 1) We do not examine other well-known LLMs (_e.g._, Claude7 and Ernie Bot8) due to API unavailability for us when conducting this research; 2) We only consider the perspectives summarized from the LLM's response, and there might be other prompting perspectives based on a conceptualization framework of fake news; 3) Our best results still fall behind the oracle voting integration of multi-perspective judgments in Table 4, indicating that room for performance improvement still exists in our line of work.
Footnote 7: [https://claude.ai/](https://claude.ai/)
Footnote 8: [https://yiyan.baidu.com/](https://yiyan.baidu.com/)
|
2310.20396 | Product Line Management with Graphical MBSE Views | Reducing the cost and delay and improving quality are major issues for
product and software development, especially in the automotive domain. Product
line engineering is a well-known approach to engineer systems with the aim to
reduce costs and development time as well as to improve the product quality.
Feature models enable to make logical selection of features and obtain a
filtered set of assets that compose the product. We propose to use a color code
in feature models to make possible decisions visual in the feature tree. The
color code is explained and its use is illustrated. The completeness of the
approach is discussed. | Pascal Krapf, Sébastien Berthier, Nicole Levy | 2023-10-31T12:17:31Z | http://arxiv.org/abs/2310.20396v1 | # Product Line Management with Graphical MBSE Views
###### Abstract
Reducing the cost and delay and improving quality are major issues for product and software development, especially in the automotive domain. Product line engineering is a well-known approach to engineer systems with the aim of reducing costs and development time as well as improving product quality. Feature models make it possible to logically select features and obtain a filtered set of assets that compose the product. We propose to use a color code in feature models to make the possible decisions visible in the feature tree. The color code is explained and its use is illustrated. The completeness of the approach is discussed.
Configuration, variants, product line, model-based system engineering (MBSE)
## 1 Introduction
Reducing the cost and delay and improving quality are major issues for product and software development. To address these challenges, strategies for reuse and standardization of products and software have been developed. In this way, the development and validation of components and software assets are mutualized over several projects, which reduces the global cost of the product. Indeed, the number of individual assets required to build complex products like personal cars, aircraft, trains or industrial facilities can reach several thousands (from ten to several hundred thousand). Moreover, only some of them are present in all products, while the others are associated with particular products. Products can be differentiated by several characteristics:
* Products may differ in the offered functionalities.
* Products may differ in performance values.
* Products may differ in the non-functional properties.
* Products may differ in the chosen execution platform.
In the automotive domain, a widespread practice is to define from the beginning a Product Line (PL) approach [22]. It consists in designing a set of defined products embedding physical and software components developed from a common set of core assets and having a managed set of variable features [11].
Developing a PL from the beginning means focusing on the variability and on the potential differences between products. The method we applied focuses on the creation of a product line right from the initial product development stage. The aim is to propose possible variants from the very start, knowing that others could be added later. This is a very different approach from parameterization. Defining a parameterized product means concentrating on the common functionalities. The overall architecture is generally not variable and, as a result, non-functional properties are less variable and less emphasized.
Variant management languages and associated tools have been developed with high expressiveness to describe product lines [15, 9, 14, 17, 5]. However, up to now, their deployment in the automotive industry is not effective.
In the automotive industry, dozens of development teams are specialized in various domains of engineering. They all contribute to the definition of product variants. They each have specific concerns about variability and favor processes and tools suited to their specific concern. But at the end of development, a small number of people in the project team has to be able to select the project variants without being experts in all the engineering domains. Thus, the way the variability is structured has to be understandable by people outside the specific engineering domain.
This is why powerful specialized tools are of little help when deploying a PL approach on an industrial project in the automotive domain.
Our proposal is to define processes, methods, and an associated tool with simple and visual interfaces that are intuitive for users not accustomed to software-oriented tools. Even if less powerful in constraint expression, consistency analysis, and solving capabilities than existing ones, such a tool may be better suited to meet user acceptance in our specific domain.
The first section is the present introduction.
In the second section of this article, we list a set of qualities targeted when configuring a system.
In the third section, we present useful concepts issued from our experience that are frequently used in industries managing different kinds of product lines.
In the fourth section, we present our approach to build and use a PL, inspired by FODA's feature models [17]. For this purpose, we use our existing system engineering modeling platform, which is under improvement.
In the fifth section, we discuss the possibility to handle any kind of logical constraints in the framework we have defined.
## 2 Overall strategy and targeted properties
### Parameterized software versus Product Line
The most straightforward way to develop a set of related products is to develop a first product and adapt this product using tunable parameters and by adding components.
The activity of making a system tunable can be split into:
* identify the possible adjustable parameters and components associated with variabilities,
* define the values to be selected for the parameters and design specific additional components.
Such strategies have been applied by carmakers and have shown some limitations:
* Trade-offs are driven by the first designed product, which does not mean global optimization for all products.
* Adding variabilities as add-ons considerably increases the complexity of the product and can increase the risk of malfunction.
* It generally requires deep knowledge of the product to be able to tune the variable parameters, while companies often want the configuration to be done by non-specialists.
As a consequence, carmakers prefer the product line approach [25]: a set of defined products that share a common, managed set of features and are developed from a common set of core assets [12].
### Qualities targeted when configuring a system
Following system engineering practices, the first step is to capture needs about the variant configuration management framework. The need is a framework (process and tool) that has the following characteristics [11]:
* **Operability**: the number of actions needed to select a product variant is small and remains amenable to human decision, both for the first setup and for later updates,
* **Evolutivity**: it is possible to add features and parts to the product line and continue to use former versions. When the product line is enriched, new features are added along with new constraints,
* **Reusability**: parts and groups of parts can be reused with confidence without modification in new products or new product lines,
* **Simplicity**: no deep knowledge about the system design is necessary to select a variant. Parameterization can be done in a way that is accessible for non-specialists of the domain,
* **Modularity**: Architects can select coherent subsets of the product line by the selection of sets of variants,
* **Consistency**: Compliance with design rules constraining the choice of variants is ensured. These design rules can come from a norm, regulations, or a chosen method to be followed.
A product line makes it possible to abstract the construction from the configuration of the reusable components and to identify the reasoning and decisions behind the selection of a configuration. Reasoning means building a series of relations between causes and consequences while taking into account a set of logical constraints. A PL management framework is likely to satisfy the list of characteristics above.
## 3 Field data
### Architectural point of views
Cyber-physical systems are more and more developed using Model based system engineering (MBSE). In these models, systems and their relations with their environment are described by views corresponding to different viewpoints. The approach we were using includes the following viewpoints [3]:
* **Operational viewpoint**, focused on the concern of how the system is operated and interacts with surrounding systems.
* **Functional viewpoint** focused on the concern of system functionalities, functional interfaces, functioning modes and behavior.
* **Organic viewpoint** focused on the concern of system components, components allocation of functions and requirements and how internal components interact.
These models are used for product development and are included in the digital twin of the product [20]. The following paragraphs describe some model elements that are of interest for variant management. We emphasize that variant management is about selecting components (organic viewpoint), but with key drivers coming from the other viewpoints.
### Operational variability
A use case is a specific situation in which a product could potentially be used. For example personal cars can be driven on railways, on open roads, in town, on tracks. They can be parked on road, in garden lane or in a garage, etc. Each use case carries specific requirements the car has to comply with in order to satisfy the customer. Since all customers do not have the same expectations and use cases, different variants of cars are commercialized. Level of outfitting, seat comfort, acoustics, dynamism, speed, smoothness, product durability in specific mission profiles are operational characteristics that can be in the scope of variant management.
### Functional variability
Customers can choose among a set of functionalities for their personal car: for example, driver assistance systems, guiding assistance, comfort adaptation, entertainment for passengers, door opening, etc. They are an important source of variability. Some of them induce the presence of specific components like sensors and actuators, but others can be activated or deactivated by software. End-to-end functionalities are split into sub-functions forming functional chains. Each sub-function uses inputs to produce some outputs. The combination of these functions ensures the overall product functionalities.
### Component variability
#### 3.4.1 Bill of material
In industry, the list of all parts or assets that can be purchased to build a product in a product line must be managed. This list is called the "Bill Of Material", or BOM. A 150% BOM is a list containing the parts to be used to produce the whole set of products of the product line. An individual product will not include all the parts, but only a subset of them: the BOM containing only the parts for an individual product is called 100% BOM. The BOM is managed as a list in which each standardized part appears only once (an eventual multiplicity will be managed later as we will see).
A BOM may contain up to several hundreds of individual parts, and a large proportion of these parts (frequently 10% to 50%) are linked to a variant, as they are not present in all individual products. The first target of variability management is to select efficiently parts corresponding to a specific product. If one wants to select individually each part, one would have to know all the components that are required for each functionality. This choice tends to be impossible for functionalities requiring several hundred thousand parts. Even for engineers in the appropriate field, this is just impossible.
#### 3.4.2 Asset library
Complex systems are often software intensive, meaning that functions are realized by software. Software intensive systems are made of a combination of many interacting software components. Thus, the issue of variant management for physical parts is mirrored in the software domain. Software components are listed in a software library. Assets also include models, specifications, assembly instructions, procedures, tools, validation facilities, safety assessments etc. Actually, any asset contributing to the product definition can be in the scope of variant management. Thus, physical or purchasable parts listed in a BOM are not enough to define all possibilities to build a product line. It is more flexible and more accurate to consider an asset library that contains any kind of artifacts.
#### 3.4.3 Product breakdown structure
System engineering is the general framework used to develop complex products [2, 4, 3, 5, 21]. A product is broken down into systems. Each system is broken down into subsystems, and so on until reaching individual parts that can be subcontracted and purchased from suppliers. Assets are organized in a tree structure called Product Breakdown Structure (PBS). The PBS contains components that have an active role in the system functioning (sensors, control unit...) as well as components associated to liabilities (tight box, firewall...). Software engineers generally build their software with software components. The software components library is a part of the PBS. Thus, the PBS gives a structured view of the assets that constitute the product.
Engineering teams organize the PBS according to the system breakdown. This breakdown often reflects the organization of the engineering teams and corresponding engineering domains. The manufacturing team has interest to organize the PBS according to how the system is assembled. This may not fit with the engineering team's organization. The purchasing team may want to structure the PBS according to possible suppliers. The maintenance team may want to organize the PBS in accordance with maintenance schedule and process. If several teams like purchasing, manufacturing, maintenance, use the same PBS then, the structure has to be a compromise between their needs. Companies that try to manage variants by merging all parts in a single PBS that is managed in an Enterprise Resource Planning (ERP) tool often create dissatisfaction in every domain team. This may contribute to the high failure rate of ERP deployment projects [13].
### Feature model
The word "feature" refers to a characteristic or a set of characteristics of the product line. As already discussed, PBS is not the only model element that is impacted by variant management. Thus, there is no reason to have a one-to-one correspondence of nodes of the PBS and features. The PBS structure does not necessarily reflect a selection logic of the components. Requirement satisfaction may involve the contribution of several different parts located in different branches of the PBS.
General software qualities like cybersecurity, energy consumption efficiency, human machine interfaces, etc. often must be managed with variability. The random selection of software components does not ensure the quality of the final product.
Companies describe the features that are likely to vary in a feature model, which lists the variable features that can be selected for an individual product within the product line.
Feature Diagrams (FD) are a family of modeling languages used to address the description of products in product lines [24]. FD were first introduced by Kang as part of the FODA (Feature Oriented Domain Analysis) method back in 1990 [17].
Feature models are generally represented as a tree structure. Each node is a feature that can be selected. A natural rule is to select a child node only if its parent node is already selected. In any product line, the feature choice is subject to rules allowing or forbidding some associations. Rules can result from physics (lack of space), regulation (no such combination of functions allowed), marketing, etc. For example, cars can have diesel, gasoline, hybrid or electrical engines, but only one from this list. These features are not independent, and the dependence is only partially represented by the position in the feature tree. When complex constraints are involved, a solver is often required to determine whether a set of selected features complies with these rules. The problem of deciding whether a set of logical sentences has a solution is known to be NP-complete [24].
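To make the need for a solver concrete, the following minimal Python sketch (feature names and rules are hypothetical, loosely based on the engine example above) brute-forces every configuration of a tiny feature set and checks it against the rules; with hundreds of features this enumeration becomes infeasible, which is exactly why a solver is needed.

```python
from itertools import combinations

# Hypothetical features illustrating the engine example above.
FEATURES = ["diesel", "gasoline", "hybrid", "electric", "tow_hitch", "roof_rack"]

def satisfies_rules(selected: set) -> bool:
    """Check a feature selection against the product-line rules."""
    engines = {"diesel", "gasoline", "hybrid", "electric"}
    # Exactly one engine type must be selected (exclusive choice).
    if len(selected & engines) != 1:
        return False
    # Example physics rule: the tow hitch is not offered with the electric engine.
    if "tow_hitch" in selected and "electric" in selected:
        return False
    return True

# Brute-force enumeration: feasible only for tiny models, hence the need
# for a solver when hundreds of features and nested constraints are involved.
valid = [set(c) for r in range(len(FEATURES) + 1)
         for c in combinations(FEATURES, r) if satisfies_rules(set(c))]
print(len(valid), "valid configurations out of", 2 ** len(FEATURES))
```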
Variable characteristics usually number from several hundred to several thousand for cars or aircraft. Thus, it is still difficult to make choices because of the need for coherence. Furthermore, the number of possible configurations is enormous: if 100 nodes can be selected in a feature tree, then the number of possible different products is 2 to the power of 100. Such a number of combinations cannot be managed exhaustively. A structured methodology with associated tools is needed. The domain engineer designs the product line in a way that minimizes circular or interwoven constraints, with the aim of making the feature model easily understandable to application engineers. The PL engineer has to define a smart structure for the product line, and the application engineer needs deep knowledge of the product line to select features without losing time in trial and error.
### Variation criteria
The PL is described in a model that contains the product assets. Some assets are present in all individual products; these assets form the invariant backbone of the PL. The other assets are present or absent depending on the features that are chosen. A variation criterion is a logical formula that defines the asset's variability. This logical formula is expressed using the features of the feature model. The asset is present in the product if the formula evaluates to TRUE. In this way, assets can be filtered according to the feature selection. The completeness and coherency of this association between assets and features rely entirely on the PL design engineer.
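As a minimal illustration (asset and feature names are hypothetical), a variation criterion can be represented as a Boolean predicate over the set of selected features, and the asset list of a specific product is obtained by filtering the asset library with these predicates:

```python
# Each asset carries a variation criterion: a predicate over the selected features.
# Assets whose criterion evaluates to True are present in the configured product.
asset_library = {
    "chassis":        lambda f: True,                      # invariant backbone
    "diesel_engine":  lambda f: "diesel" in f,
    "battery_pack":   lambda f: "electric" in f or "hybrid" in f,
    "tow_hitch_kit":  lambda f: "tow_hitch" in f and "electric" not in f,
}

def configure(selected_features: set) -> list:
    """Filter the asset library according to the feature selection."""
    return [a for a, criterion in asset_library.items() if criterion(selected_features)]

print(configure({"hybrid", "tow_hitch"}))
# ['chassis', 'battery_pack', 'tow_hitch_kit']
```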
## 4 Framework for variant management
In order to be efficient, companies need a framework of combined and coherent processes, methods and tools. In this section, we describe the framework we have developed to manage a PL. Our proposal aims at a very broad acceptance of the notion of PL among the many actors involved. We have drawn inspiration from a number of existing proposals [7, 14].
### Processes
The product line strategy relies on the following major processes: Build the PL, configure a product in the PL and maintain & enrich the PL.
#### 4.1.1 Build the PL
Companies want competitive advantages and seek to answer ever more customer needs with individual adaptations. Before designing a system, system engineers have to analyze needs. They examine the system's environment and identify interactions, constraints and available resources. The capture of stakeholders' needs is the key to system engineering. It is also the first source of variants. Thus, in a PL process, the outcome is not only a set of elicited needs, but also a variability assessment of the PL. Needs capture and analysis is combined with the analysis of the PL variability.
Building the PL includes:
* defining a set of assets (components, software, models...) that are designed with the target of addressing a wide range of user needs.
* Building a feature model that describes product features and constraints between them.
* Associating assets to features.
Assets are associated with features with the aim of satisfying the modularity and evolutivity criteria mentioned in section 2.
#### 4.1.2 Configure a PL
Products to be sold to customers are built as configurations of the product line. The PL contains all assets describing possible products. When a product engineer selects a product for a customer, he defines the features of this specific product. Features are selected in the feature model of the product line. Assets of the specific product are obtained as a consequence of the features. PL assets are filtered according to chosen features to obtain the asset lists of the specific product. Thus, the specific product is a configuration of the product line. This process accounts for the reusability criterion mentioned in section 2.
#### 4.1.3 Maintain & enrich the PL
When engineers have to design a solution for new customer needs or new project requests, they first try to integrate existing assets from the product line into their design. This is done on the system as a whole, considering the needs to be satisfied, and at the component level for component functionalities and tested qualities. Thus, the design method consists of searching within the existing assets for those that could be reused as they are, those that could be reused with only small modifications or additional tests, and those that could be integrated in the product via the adaptation of some interfaces or the use of adaptation parts (brackets, connectors, embedding...).
New assets are developed only if existing assets cannot satisfy the newly elicited needs. If so, they are designed in a way that enables their reuse for future products. Each time a new component is developed, it can be included in the asset library. The assets' characteristics are standardized and recorded with reuse in mind.
The product line also has to be maintained, meaning that the obsolescence of PL assets is monitored and new assets are developed on the right schedule to replace the obsolete ones without shortage. This process accounts for the evolutivity criterion mentioned in section 2.
### Method
The method we present is intended to describe product line management and configuration for people who are not necessarily familiar with software development tools. To do so, the steps of product line use are made visual with graphical diagrams and simple color codes.
#### 4.2.1 Association between assets and features
Assets have to be associated to features in order to model the transition from the asset library to the selected assets constituting a specific product. For each asset, a logical statement defines its presence. This statement uses logical connectors and features. In that way it is possible to use a feature configuration to filter the asset library and obtain the assets of a specific product. The introduction of the feature conditions in the description of the behavior of components allows for a configurable behavioral model of the product line. The granularity of the features and the association to groups of assets in the library strongly influences the number of operations to be done to configure a product. It has a major impact on the operability criterion introduced in section 2. To define a product, it will not be necessary to select individual parts but product features. If features are well structured, then their choice is operable by humans. As an illustration, a limited set of feature choice is proposed to a customer purchasing a car, impacting the presence of dozens of components in the product.
#### 4.2.2 Constraints expressions in a feature diagram
Each node in the feature tree represents a decision. At each decision step, constraints limit the number of possible choices. In many cases, the constraints concern neighboring nodes. Thus, being able to display these constraints in an intuitive way is of interest. The color of a node can be used to display these constraints. In our method we propose to consider three types of choices that are displayed by a color code. The color code can be replaced by any other graphical characteristic of the boxes, especially if accessibility for color-blind users is required.
Optional features are features that can be selected or not without further constraint. Those features are represented in white boxes as shown in Figure 1. A white box can be selected or discarded independently of neighbor boxes. If the parent node is selected, then the children selection can be a single child, both of them or none.
Blue is used as shown in Figure 1 to indicate a mandatory choice: the parent (blue) node has to be kept and at least one of the boxes below has to be selected. When using this color code, it was found more intuitive for non-specialists to have the blue color on the upper node, where the decision is taken, rather than on the lower node.
Red is used as shown in Figure 1 to indicate exclusive options: only one of the neighboring red boxes can be selected. If one is selected, then the neighboring red boxes have to be discarded. The red color is applied to the lower nodes. This differs from the blue color and was found more intuitive for non-specialist users who have to define a product within the PL. It also allows red and blue nodes to be combined easily.
#### 4.2.3 Product configuration
The most natural way to fill a variant tree is top down. One begins with the upper node and goes along the branches down to the leaves. At each step, the possible choices are defined by the color of the surrounding boxes. The variant selection process is made visual as displayed in Figure 2 and intuitive to users that have to do it.
A product configuration is obtained by the selection of a set of features that drive the selection of the associated assets. The method to obtain such a configuration requires a set of decisions, whether to keep or not each feature. A feature model is a set of possible decisions, containing also mandatory features. It is important to have them as they can imply sub-decisions. Our feature model defines variable features and constraints between them. In a product configuration, some features are selected, and others are discarded. We use the green color to indicate that a feature is selected, and the gray color to specify that a feature is discarded from a specific product. Thus, a feature model fully colored with green and gray
Figure 1: Representation of the different kinds of choice in the feature model
like the one displayed at the bottom of Figure 2 is a description of an individual product of the product line.
Defining a product configuration and only verifying at the end whether the compatibility rules are satisfied is likely to produce configurations incompatible with the rules. The selection has to be organized as a succession of decisions. To carry out this selection, the feature model is scanned down from the root to the leaves. When a node is discarded, the whole branch below the node is discarded as well. Thus, there cannot be an alternation of green and gray colors down a branch. Coherency rules are checked at each decision step. If a decision would lead to an inconsistent feature selection, then the corresponding choice is not possible. Figure 2 illustrates successive decision steps that lead to a configuration.
Feature selection gives a progressive coloring of the feature model with green and gray. The selection process can be interrupted at any moment. Partially selected feature models can be produced, in which some parts are selected, others are discarded and others are still open options. The color of the boxes provides a comprehensive way to describe a partial configuration and to define rules to continue the selection process.
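A minimal sketch of this progressive coloring (node names are hypothetical; only the subtree-discard and red-exclusivity rules of the color code are modeled, not the blue "at least one child" check) could look as follows:

```python
# Feature nodes: color is "open", "green" (selected) or "gray" (discarded).
tree = {
    "vehicle":   {"parent": None,      "kind": "white"},
    "engine":    {"parent": "vehicle", "kind": "blue"},   # mandatory choice below
    "diesel":    {"parent": "engine",  "kind": "red"},    # exclusive siblings
    "electric":  {"parent": "engine",  "kind": "red"},
    "roof_rack": {"parent": "vehicle", "kind": "white"},  # optional
}
color = {n: "open" for n in tree}

def children(node):
    return [n for n, d in tree.items() if d["parent"] == node]

def discard(node):
    """Discarding a node grays out the whole branch below it."""
    color[node] = "gray"
    for c in children(node):
        discard(c)

def select(node):
    """Selecting a node; red siblings are discarded automatically."""
    color[node] = "green"
    if tree[node]["kind"] == "red":
        for sib in children(tree[node]["parent"]):
            if sib != node and tree[sib]["kind"] == "red":
                discard(sib)

for step in ("vehicle", "engine", "diesel"):
    select(step)
discard("roof_rack")
print(color)   # a fully green/gray coloring describes one individual product
```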
### Tool
As the processes to build a new product rely on system engineering, it is natural to use system engineering tools for PL engineering. Thus, variant management shall be embedded in the system engineering tool. One key success factor is to make people working in different domains understand each other and communicate efficiently. Therefore, it is crucial to provide graphical views intended to be intuitive for non-specialists and thus to fulfill the simplicity criterion of section 2. We have developed a private model-based system engineering tool that is already in use [19, 8, 9, 22]. In a true digital twin perspective, behavioral models can be amended by the feature selection. A model of the feature selection is embedded in the tool. In this perspective, we obtain a model of the product line that describes all the products in the product line with their individual characteristics and behavior. While the graphical views presented below are available in the tool, the interpretation of constraints stated as logical formulas is still under development.
## 5 Discussion
In the previous section, we have proposed a description for a feature model. Its semantics is similar to the one proposed by [24]. As different boxes may carry the same label, it describes a Directed Acyclic Graph (DAG). In addition, the colors are used to express "require" and "exclude" relations. Let us take a closer look at how these constraints are used.
It is clear that the "exclude" relation between neighbor nodes can be directly expressed with the color code. Let A and B be two features in different branches of a variant model. The sentence "Feature A excludes Feature B" means that if feature A is selected, then feature B cannot be selected (let us note that A excludes B is equal to B excludes A). Figure 3 shows how this constraint can be expressed in our colored box language. Beside the main feature tree, we introduce a new branch labeled "constraints" with node A and B within red boxes, meaning the user has to make an exclusive choice. If the user selects the first A node, then through a decision propagation, the other node labeled A is automatically selected because it has the same label. Node B is automatically discarded because of mutual exclusion with A.
The sentence "Feature A requires Feature B" means that if feature A is selected, then feature B
Figure 2: Representation of successive decisions in the feature model
is mandatory. Figure 3 shows how this constraint can be expressed in our colored box language. We introduce a new branch labeled "constraints" with a node labeled B. Below this node, a single node labeled A is placed. When the user selects A somewhere in the tree, then decision propagation selects automatically all nodes labeled A. B is automatically selected as a parent of A.
Thus, our graphical language is able to describe the "require" and the "exclude" relations.
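Reduced to their propagation rules, these two relations can be sketched as follows (the labels and rule tables are hypothetical; in the actual method the rules are carried by the red boxes and the parent links of the "constraints" branch):

```python
# "A excludes B": A and B are red neighbors in the constraints branch.
# "A requires C": an A box is placed under a C box in the constraints branch.
excludes = {"A": {"B"}, "B": {"A"}}     # red-neighbor (mutual exclusion) pairs
requires = {"A": {"C"}}                 # parent labels that must follow a selection
status = {}

def select_label(label):
    """Select a label and propagate the exclude/require consequences."""
    status[label] = "green"
    for excluded in excludes.get(label, ()):      # exclude: discard red neighbors
        status[excluded] = "gray"
    for parent in requires.get(label, ()):        # require: select parent labels
        if status.get(parent) != "green":
            select_label(parent)

select_label("A")
print(status)    # {'A': 'green', 'B': 'gray', 'C': 'green'}
```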
Let us express basic logical constraints with this language. Fig. 9 defines a variable B that corresponds to NOT(A), a variable C that corresponds to (D OR E), and a variable F that corresponds to (G AND H). Since (AND, OR, NOT) is a complete set of connectors in Boolean logic, any Boolean formula can be expressed by this language. For example, a nested constraint ((A AND B) \(\Rightarrow\) C) first has to be written in disjunctive normal form. After that, it is possible to express it with the color code. Figure 4 gives a general pattern that makes it possible to transform a Boolean expression into a graph with our color definition. The expressiveness of the defined language is universal as soon as we consider feature models that are directed graphs rather than trees.
However, the large number of actors involved in the definition of product variants and product configuration limits the use of complex tools. We have therefore defined a framework for product line management in order to address this issue. The proposed method is based on a color code that makes the possible decisions visual and intuitive for users unfamiliar with variant management. The method has been illustrated with an example. Our model-based system engineering tool was used to draw the diagrams. The completeness of the method was discussed.
The current development of our tool allows a configuration only if the logical constraints are satisfied at each decision step. In that way, a correct product line feature model does not require a solver to check coherence while configuring a product, as constraints are taken into account at each decision step. Scaling to larger product lines relies on a well-structured feature model, broken down into as many sub-trees as necessary to keep each graphical view understandable.
We plan to apply our approach by introducing a product line when defining the initial requirements for a system in an industrial domain, using the platform under development.
## Acknowledgment
The authors are grateful to Caroline CABY for her insights and the very fruitful discussions she was involved in.
|
2309.14150 | Fast LiDAR Informed Visual Search in Unseen Indoor Environments | This paper details a system for fast visual exploration and search without
prior map information. We leverage frontier based planning with both LiDAR and
visual sensing and augment it with a perception module that contextually labels
points in the surroundings from wide Field of View 2D LiDAR scans. The goal of
the perception module is to recognize surrounding points more likely to be the
search target in order to provide an informed prior on which to plan next best
viewpoints. The robust map-free scan classifier used to label pixels in the
robot's surroundings is trained from expert data collected using a simple cart
platform equipped with a map-based classifier. We propose a novel utility
function that accounts for the contextual data found from the classifier. The
resulting viewpoints encourage the robot to explore points unlikely to be
permanent in the environment, leading the robot to locate objects of interest
faster than several existing baseline algorithms. Our proposed system is
further validated in real-world search experiments for single and multiple
search objects with a Spot robot in two unseen environments. Videos of
experiments, implementation details and open source code can be found at
https://sites.google.com/view/lives-2024/home. | Ryan Gupta, Kyle Morgenstein, Steven Ortega, Luis Sentis | 2023-09-25T14:04:31Z | http://arxiv.org/abs/2309.14150v11 | # Learned Contextual LiDAR Informed Visual Search in Unseen Environments
###### Abstract
This paper presents LIVES: LiDAR Informed Visual Search, an autonomous planner for unknown environments. We consider the pixel-wise environment perception problem where one is given 2D range data from LiDAR scans and must label points contextually as map or non-map in the surroundings for visual planning. LIVES classifies incoming 2D scans from the wide Field of View (FoV) LiDAR in unseen environments without prior map information. The map-generalizable classifier is trained from expert data collected using a simple cart platform equipped with a map-based classifier in real environments. A visual planner takes contextual data from scans and uses this information to plan viewpoints more likely to yield detection of the search target. While conventional frontier based methods for LiDAR and multi sensor exploration effectively map environments, they are not tailored to search for people indoors, which we investigate in this paper. LIVES is baselined against several existing exploration methods in simulation to verify its performance. Finally, it is validated in real-world experiments with a Spot robot in a 20x30m indoor apartment setting. Videos of experimental validation can be found on our project website at [https://sites.google.com/view/lives-icra-2024/home](https://sites.google.com/view/lives-icra-2024/home).
## I Introduction
Autonomous planning and real-world execution for active search and exploration are receiving significant attention from the robotics community due to their relevance in several scenarios including inspection, surveillance, and Search and Rescue (SAR) [27]. Autonomous ground robots cannot assume reliable map information a priori during active search missions, for example in a disaster response scenario. This demands environment perception tailored for the task of locating any persons in need quickly. Today's robots are frequently equipped with LiDAR sensors that cast a full view of the surroundings. However, use of these scans is often limited to the task of localization and mapping. This paper addresses the exploitation of wide Field of View (FoV) LiDAR scans for visual planning. We propose to classify LiDAR scan points as map or non-map in the robot's surroundings for improved visual search planning.
Several methods exist for pixel-wise classification of 3D LiDAR scans [22, 23, 34]. However, [15] notes that these methods are limited by the high computational requirements of handling the large input size of 3D cloud data. In contrast, real-time projection-based methods aim to detect moving obstacles [13] or objects to be avoided in 3D scans, often for autonomous driving applications [18, 5, 20]. Despite parallel improvements in the efficiency of planning algorithms for both LiDAR and vision sensors [6, 14, 28, 32], it remains an open problem to enhance performance in autonomous planning and execution for the task of visual search. There is a gap in real-time pixel-wise classification of LiDAR scans for active visual search tasks. This work addresses this limitation with an online 2D pixel-wise LiDAR scan classifier and a context-informed visual planner. In particular, classified LiDAR scans are used to generate highly informative plans quickly with a multisensor Frontier Exploration [33] based planner.
Active planners in unknown spaces are required to make decisions based on incomplete and noisy information about partial environments. State of the art visual task planners like [6, 32] account for LiDAR scans without considering contextual information. We provide a visual planner with map and non-map data in the surroundings found in wide FoV LiDAR scans. This is accomplished by learning to distinguish points unlikely to be permanent features in unknown environments. The visual planner gives priority to features of interest under the assumption that the search target belongs to the set of non-map points. This information obviates the need to visually inspect parts of the environment that are likely to be permanent. An overview of the proposed method can be found in Fig. 1.
The contributions of this work can be summarized as follows:
* A learning strategy for pixel-wise scan classification as map and non-map in unseen, real-world environments
* A formulation that incorporates classified scan information into a planner for visual search
* Demonstrate reduced mission completion time in the visual search task over existing planners
* Deploy and verify the proposed method on a real robotic system in an unseen indoor environment
Sections II, III and IV describe related works, methods and results, including baseline comparisons.
## II Related Works
### _Lidar Classification_
Indoor pointcloud classification is tackled in 3D [30] using the KPConv architecture [29]. This method requires offline annotation and map building at instantiation, and is used for localization and navigation in later work [31]. Neither work validates on a real robot platform. Ref. [21] presents an in-depth comparison and framework for Iterative Closest Point (ICP) methods, similar to that of [30], focused on map building. Finally, [26] present a map-free framework for segmenting 3D pointclouds into ground, edge and planar points, which are used in a feature extraction process for mapping environments. In contrast, LIVES recognizes non-map features.
### _Next-Best View and Informative Path Planning_
Exploration is often solved using next-best viewpoint planners like Frontier Exploration [33, 14]. A common approach uses an active perception formulation to maximize entropy reduction [17, 12]. Ref. [11] learns a policy to generate non-myopic plans online by estimating potential gains of exploring different areas. Ref. [19] similarly learns an information-aware policy offline that proposes waypoints for the robot. A local planner then generates dynamically feasible trajectories based on these suggestions. In contrast to the proposed approach, [11, 19] only provide simulation results. Similar to our method, [36] focuses on the problem of visual search. They propose an end-to-end policy that learns to generate discrete actions that lead a robot to a prescribed viewpoint in the given environment; however, the action space is limited to simple discrete actions and they do not leverage LiDAR information. Ref. [35] proposes a POMDP planner for generalized local object search in cluttered 3D environments and extends it with a 2D planner for larger areas. In contrast to LIVES, this work is aimed at small areas and small objects, rather than quickly searching buildings for people.
### _World Prediction Based Planning_
World prediction methods often involve learning a model of relevant environments and leveraging their structure to explore high-value regions first. Ref. [25] uses topological features from a database of worlds to inform a frontier-based exploration policy and increase the rate at which area is explored. Similarly, [28] uses a prediction module that detects semantic objects, classifies frontiers, and predicts information gain beyond those frontiers. A planner uses this to explore rooms fully before continuing global exploration, unlike LIVES, which avoids complete exploration to reduce search completion time. Ref. [24] instead considers visual navigation with occupancy anticipation from egocentric RGB-D based visual context. Ref. [10] instead learns to predict the locations of exits in buildings with a convolutional neural network (CNN) using a database of building blueprints. Predictions guide the robot towards likely exit locations.
## III Methods
The overall scan classifier and planning module are described in Alg. 1. The key components are detailed in this section.
Fig. 1: Overview of LIVES. In (a) raw 2D LiDAR scans, pose information and image data are received from the Spot robot. They are sent through a neural network classifier III-C (b) to label each point as map or non-map (c). Separately, LiDAR scans and pose data are used to maintain the global search maps (d). Global search maps are used to generate frontier points (e), which are used for finding candidate viewpoints. Finally, each frontier point is sampled at four viewpoints (f) and scored III-E2. In this toy example the left viewpoint in (f) will be scored highest and selected as the next waypoint due to the high number of unknown cells and non-map points observed. Finally, the selected viewpoint is sent back to the robot for execution.
### _Environment_
The environment is described as a discrete set of points \(\mathcal{E}\). This can be split into two subsets \(\mathcal{E}^{\text{free}}\subset\mathcal{E}\) and \(\mathcal{E}^{\text{occ}}\subset\mathcal{E}\), which represent the free and occupied cells in the environment. Both of these sets are initially unknown to the robot, but are assumed to follow a structure inherent in indoor environments (e.g. hallways, planar walls, open rooms and loops).
### _Ground Truth LiDAR Scan Classification_
The search target of interest belongs to \(\mathcal{E}^{\text{occ}}\subset\mathcal{E}\). In particular, scan classification attempts to divide \(\mathcal{E}^{\text{occ}}\) into two subsets, namely \(\mathcal{E}^{\text{non-map}}\) and \(\mathcal{E}^{\text{map}}\). In this work, \(\mathcal{E}^{\text{map}}\) are points in the environment that are deemed to be Long-Term Features (LTFs) during classification, while \(\mathcal{E}^{\text{non-map}}\) are points that are either Short-Term Features (STFs) or Dynamic Features (DFs) [9]. LTFs represent the static map obstacles, STFs represent static un-mapped obstacles and DFs represent moving points. Given map information, scan points can be classified into these three categories as follows.
Let \(x_{i}\) denote the pose of the robot, and \(s_{i}\) denote observation at time step \(t_{i}\). Each observation \(s_{i}\) consists of \(n_{i}\) 2D points, \(s_{i}=\{p_{i}^{j}\}_{j=1:n_{i}}\). Observations are transformed from robot local frame into the global frame using an affine transformation \(T_{i}\in SE(3)\). Finally, let map \(\mathcal{M}\) be represented as a set of lines \(\{l_{i}\}_{1:n}\).
#### III-B1 LTF
First, an analytic ray cast is performed [8] to determine the expected laser scan based on map \(\mathcal{M}\) and the current robot position \(x_{i}\). Given observations, the probability that points correspond to one of the lines of the static map can be written
\[P(p_{i}^{j}|x_{i},M)=\text{exp}\left(-\frac{\text{dist}(T_{i}p_{i}^{j},l_{j})^ {2}}{\Sigma_{s}}\right) \tag{1}\]
where \(\Sigma_{s}\) is the scalar variance of observations, which comes from sensor accuracy. If Eq. 1 is greater than a threshold, point \(p_{i}^{j}\) is classified as an LTF.
#### III-B2 STF
The remaining points will be classified as STF or DF. Observations at the current time \(i\), \(p_{i}^{j}\), are compared with prior observations at time \(k\), \(p_{k}^{l}\), to determine correspondence between points in subsequent observations. The likelihood of a remaining point corresponding to the same point as in a previous laser scan is computed as
\[P(p_{i}^{j},p_{k}^{l}|x_{i},x_{k})=\text{exp}\left(-\frac{||T_{i}p_{i}^{j}-T_ {k}p_{k}^{l}||^{2}}{\Sigma_{s}}\right) \tag{2}\]
where \(p_{k}^{l}\) is the nearest point to \(p_{i}^{j}\) among the points that do not belong to LTFs at other timesteps, defined as
\[p_{k}^{l}=\text{arg}\min||T_{i}p_{i}^{j}-T_{k}p_{k}^{l}|| \tag{3}\]
When Eq. 2 is greater than some threshold, point \(p_{i}^{j}\) is classified as an STF. The remaining points are classified as DFs. The result is the classified scan \(s_{i}^{\text{class}}\). In practice, there are a large number of STFs and DFs due to the size of scans; therefore it is important to filter them. First, a pooling operator is used to reduce duplicates within a radius. Second, pooled STFs and DFs within some distance of LTFs are removed to eliminate false positives caused by localization drift. Finally, points inside visually observed regions of the map are removed.
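A minimal NumPy sketch of this map-based labeling, simplified to 2D points and line segments with placeholder threshold and variance values, could look like:

```python
import numpy as np

SIGMA_S = 0.05 ** 2          # observation variance (placeholder value)
TAU = 0.5                    # probability threshold for Eq. (1) and Eq. (2)

def point_to_segment_dist(p, a, b):
    """Euclidean distance from point p to map line segment (a, b)."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def classify(points_world, map_segments, prev_non_ltf):
    """Label each world-frame scan point as LTF, STF or DF (Eqs. 1-2)."""
    labels = []
    for p in points_world:
        d_map = min(point_to_segment_dist(p, a, b) for a, b in map_segments)
        if np.exp(-d_map ** 2 / SIGMA_S) > TAU:              # Eq. (1): map line match
            labels.append("LTF")
            continue
        if len(prev_non_ltf):
            d_prev = np.min(np.linalg.norm(prev_non_ltf - p, axis=1))  # Eq. (3)
            if np.exp(-d_prev ** 2 / SIGMA_S) > TAU:         # Eq. (2): seen before
                labels.append("STF")
                continue
        labels.append("DF")
    return labels

# Example: one wall segment, a point on the wall, a previously seen point, a new point.
segs = [(np.array([0.0, 0.0]), np.array([5.0, 0.0]))]
prev = np.array([[2.0, 1.0]])
pts = np.array([[1.0, 0.01], [2.0, 1.01], [4.0, 3.0]])
print(classify(pts, segs, prev))   # ['LTF', 'STF', 'DF']
```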
### _Map-free LiDAR Scan Classification_
The classification method defined in III-B is limited by the necessity of static map information \(\mathcal{M}\). Therefore, this work uses supervised learning to train a model to reproduce classifications for \(\{p_{i}^{j}\}_{j=1:n_{i}}\) in scan \(s_{i}\). Similarly to the ground-truth, non-map points are filtered by pooling and then by removing those in previously inspected regions.
#### III-C1 Dataset and Data Acquisition
The collected dataset \(\mathcal{D}\) consists of tuples of data from the ground truth classification model. The tuples are given as \(\mathcal{D}=\{(x_{i},s_{i},s_{i}^{\text{class}})\}_{i=1}^{N}\), where \(x_{i}\) represents robot pose, \(s_{i}\) is the raw LiDAR scan data, and \(s_{i}^{\text{class}}\) is the classified scan. \(N\) is the total number of data points collected. Collecting a dataset using a real robot requires a human expert to control the robot over long periods of time. This approach is difficult to scale due to mechanical limits of the robot and difficulty of control over large, potentially out-of-sight places. To ease data acquisition we design a steerable cart platform (see fig. 3) to mimic the robot. Human operators easily maneuver this cart to generate the dataset without strain
Fig. 2: Pixel-wise LiDAR scan classification model architecture. \(k\) is the length of the history buffer. (a) \([3,k]\) pose history matrix containing \([x,y,\theta]\). (b) \([n_{i},k]\) LiDAR range history matrix. (c) \([n_{i},k-1]\cup[n_{i},1]\) estimated label history matrix concatenated with its pixel-wise exponential weighted average. (d) The model consists of three temporal-convolutional encoders (TCN), one for each input. The encoded poses, scans, and labels are combined to produce a pixel-wise classification of the LiDAR scan. In (e) a threshold is applied to the raw logits such that each pixel is classified as either a map point or a non-map point.
to the robot. We mount a Velodyne Puck LiDAR for recording scan information and a front-mounted Intel RealSense T265 stereo camera to obtain the cart's odometry estimate, required for running the ground-truth classification. Finally, a laptop is used to maintain the pose estimate [9], collecting the data and generating online ground-truth labels for the dataset. This laptop records 1) robot pose at each timestep, 2) raw scan data, and 3) classified scan from the ground-truth module for dataset \(\mathcal{D}\). The Puck has \(n_{i}=897\) and classified scans are recorded at 5Hz. A dataset of size \(N\simeq 145,000\) is collected in this study, representing roughly 8 hours of time. In practice, the model only receives raw scan data, pose estimate and previously predicted labels from the robot during operation and must classify pixel-wise the scan online.
#### III-C2 Architecture and Training
We formulate pixel-wise LiDAR classification as a supervised learning problem during training and an auto-regressive problem during inference. Given dataset \(\mathcal{D}\), we shuffle and batch the dataset between time steps and locations to minimize location-based and temporal bias during training. Because no map information is provided explicitly, we provide the model with a history buffer of \(k\) time steps for scan ranges and robot poses, and \(k-1\) previously estimated ground truth labels. Because we do not have access to ground truth labels at inference time, we corrupt the ground truth labels during training by randomly bit-flipping the classification of 10% of the labels, chosen by sampling indices from a uniform random distribution. We then take the exponential weighted average of the last \(k-1\) corrupted pixel-wise classifications to estimate the current pixel-wise classification. This estimate is concatenated with the \(k-1\) corrupted ground truth labels to produce a \(k\)-length matrix of estimated scan labels. During inference, the label history buffer is populated by previous estimates from the policy, cropped by a length \(k\) sliding-window. The history buffer is then concatenated with the updated pixel-wise exponential weighted average. At the initialization of inference, the policy is bootstrapped with zero-value poses, scans, and labels, and run for \(k\) steps of inference until the history buffer is initialized.
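The label-history construction can be sketched as follows (NumPy; the 10% bit-flip corruption follows the text, while the exponential weighting constant and the dummy labels are placeholders):

```python
import numpy as np

def corrupt_labels(labels, flip_frac=0.10, rng=np.random.default_rng(0)):
    """Randomly bit-flip a fraction of the {-1, +1} pixel labels."""
    corrupted = labels.copy()
    idx = rng.choice(labels.size, size=int(flip_frac * labels.size), replace=False)
    corrupted.flat[idx] *= -1
    return corrupted

def label_history_matrix(history, alpha=0.5):
    """Stack k-1 (corrupted) label vectors with their pixel-wise weighted average."""
    ewa = history[:, 0].astype(float)
    for t in range(1, history.shape[1]):          # newer columns weigh more
        ewa = alpha * history[:, t] + (1 - alpha) * ewa
    return np.concatenate([history, ewa[:, None]], axis=1)   # shape [n_i, k]

n_i, k = 897, 9
hist = corrupt_labels(np.ones((n_i, k - 1), dtype=int))      # dummy "all map" labels
print(label_history_matrix(hist).shape)                      # (897, 9)
```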
The policy consists of a temporal-convolutional encoder (TCN) for each input. Each encoder contains a single convolutional layer with scan-wise circular padding and a single linear layer. The scan and label encoders have kernel size \([k,k]\), and the pose encoder has kernel size \([1,3]\). Hyperbolic tangent activations are used for all layers. The output of each encoder is the same size as the LiDAR scan to be classified. The output of the pose encoder is summed with the output of the scan encoder as a pose correction, and then normalized by applying a hyperbolic tangent function. The resulting LiDAR scan encoding is multiplied element-wise with the label encoding. A threshold is applied to the raw logits such that each pixel is classified as either a map point or a non-map point. The policy is trained with mean squared error loss against ground truth classifications. Training for 20 epochs on a workstation Nvidia 3080 12Gb takes 3 minutes. The full model architecture is shown in Fig. 2.
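A simplified PyTorch sketch of such a three-encoder model is given below; treating the \(k\) history steps as input channels of a 1-D signal over the \(n_{i}\) scan pixels is our own simplification, so the exact layer shapes differ from the architecture described above:

```python
import torch
import torch.nn as nn

class ScanClassifier(nn.Module):
    """Three encoders (pose, scan, label) combined into pixel-wise scan logits."""
    def __init__(self, n_points=897, k=9):
        super().__init__()
        self.scan_enc = nn.Sequential(
            nn.Conv1d(k, 1, kernel_size=k, padding=k // 2, padding_mode="circular"),
            nn.Flatten(), nn.Linear(n_points, n_points), nn.Tanh())
        self.label_enc = nn.Sequential(
            nn.Conv1d(k, 1, kernel_size=k, padding=k // 2, padding_mode="circular"),
            nn.Flatten(), nn.Linear(n_points, n_points), nn.Tanh())
        self.pose_enc = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * k, n_points), nn.Tanh())

    def forward(self, poses, scans, labels):
        # poses: [B, 3, k], scans/labels: [B, k, n_i]
        s = self.scan_enc(scans)
        p = self.pose_enc(poses)
        l = self.label_enc(labels)
        return torch.tanh(s + p) * l      # pose correction, then gating by label encoding

model = ScanClassifier()
logits = model(torch.zeros(1, 3, 9), torch.zeros(1, 9, 897), torch.zeros(1, 9, 897))
pred = torch.where(logits >= 0, 1, -1)    # threshold: map (+1) vs non-map (-1)
print(pred.shape)                         # torch.Size([1, 897])
```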
### _Map Updates_
Two global occupancy maps (search maps) are maintained. The first is a LiDAR global search map and the second is a visual global search map. Global search maps are implemented in this work using ROS Costmap2D [2]. All cells in both global maps are initialized to be unknown, \(e=-1\)\(\forall e\in\mathcal{E}\). The robot is equipped with a visual sensor with a cone shaped FoV, implemented as a triangular costmap. The LiDAR sensor has a 360 degree FoV, represented as a square-type costmap. Observations for both sensors are a set of range estimates to the nearest occupied point \(e\in\mathcal{E}^{\text{occ}}\). Points between the robot and the nearest occupied point along each ray of the LiDAR scan are free points \(e\in\mathcal{E}^{\text{free}}\). Given incoming observations from the robot, each global costmap is updated as free space \(e=0\) or occupied \(e=1\). This process is shown in Fig. 1(d).
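A minimal sketch of such an update on a grid (plain NumPy instead of Costmap2D, with a simple integer ray-stepping scheme) could be:

```python
import numpy as np

UNKNOWN, FREE, OCC = -1, 0, 1

def update_search_map(grid, robot_rc, hit_rc):
    """Mark cells between the robot and each range return as free and the
    return cell as occupied (simple integer ray stepping on a grid)."""
    r0, c0 = robot_rc
    for r1, c1 in hit_rc:
        n = max(abs(r1 - r0), abs(c1 - c0))
        for s in range(n):                           # cells before the hit are free
            r = r0 + round(s * (r1 - r0) / n)
            c = c0 + round(s * (c1 - c0) / n)
            grid[r, c] = FREE
        grid[r1, c1] = OCC                           # nearest occupied point
    return grid

grid = np.full((20, 20), UNKNOWN)
grid = update_search_map(grid, (10, 10), [(10, 15), (5, 10)])
print((grid == FREE).sum(), (grid == OCC).sum())     # 9 2
```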
### _Multisensor Frontier Exploration_
The planning module seeks to generate a viewpoint at each planning step that results in the highest likelihood to find the target of interest. This is achieved in several steps: candidate viewpoints are generated, viewpoints are scored, and finally the point with the highest score is fed to the robot for navigation to that point.
#### III-E1 Candidate Viewpoint Generation
Frontier points in each of the two global search maps are generated using a process similar to [33], shown in Fig. 1(e). Any free cell (\(e=0\)) adjacent to an unknown cell (\(e=-1\)) is a frontier edge; frontier edges are grouped together into frontiers. Candidate viewpoints at frontiers are similar to those of [32]. First, a set of vision-based viewpoints is generated by clustering frontiers on the visual global search map. Next, a set of LiDAR-based viewpoints is generated by clustering frontiers on the LiDAR global search map. Each frontier cluster becomes four candidate viewpoints, one facing each direction, to account for the limited FoV of the vision sensor.
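A compact NumPy/SciPy sketch of this frontier extraction and viewpoint generation (the grid values and connected-component clustering are simplifications) is shown below:

```python
import numpy as np
from scipy import ndimage

UNKNOWN, FREE = -1, 0

def frontier_viewpoints(grid):
    """Find frontier cells (free cells touching unknown), group them into
    frontiers, and emit four oriented viewpoints per frontier centroid."""
    free = (grid == FREE)
    unknown = (grid == UNKNOWN)
    # A free cell is a frontier edge if any 4-neighbor is unknown.
    neigh_unknown = np.zeros_like(unknown)
    neigh_unknown[1:, :] |= unknown[:-1, :]
    neigh_unknown[:-1, :] |= unknown[1:, :]
    neigh_unknown[:, 1:] |= unknown[:, :-1]
    neigh_unknown[:, :-1] |= unknown[:, 1:]
    frontier = free & neigh_unknown
    labels, n = ndimage.label(frontier)              # group edges into frontiers
    viewpoints = []
    for i in range(1, n + 1):
        r, c = np.argwhere(labels == i).mean(axis=0)
        for yaw in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2):   # 4 orientations
            viewpoints.append((r, c, yaw))
    return viewpoints

grid = np.full((10, 10), UNKNOWN)
grid[4:7, 4:7] = FREE
print(len(frontier_viewpoints(grid)))   # one frontier ring -> 4 viewpoints
```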
#### III-E2 Viewpoint Selection
Candidate viewpoints are selected based on a computationally efficient heuristic utility function. High-value viewpoints are those that are 1) near the robot, 2) allow inspection of many unknown map points and 3) result in inspection of non-map points. This process is shown in Fig. 1(f). The utility function is a weighted sum of the following terms:
Fig. 3: A figure depicting the cart used for labeled data acquisition and the Spot robot used for deployment of the proposed method. The RealSense provides odometry estimate to the cart platform, required for localization and ground-truth estimation. The spot is equipped with an RGB-D Azure Kinect for detection during the search task.
* Subtract the path distance from the robot position to the viewpoint
* Add the number of unknown cells observed from the viewpoint
* Add the number of non-map points observed from the viewpoint

## IV Results

The map-free scan classifier is first evaluated against the ground truth classifier. The output of the policy is a vector of size \(n_{i}\) normalized between -1 and 1 for each scan point. If this value is non-negative, the point is classified as a member of \(\mathcal{E}^{\text{map}}\). Otherwise the point is classified as a member of \(\mathcal{E}^{\text{non-map}}\). We express performance in terms of per-scan accuracy of the classified LiDAR scans from the neural network estimator versus ground truth scan classification from map information. This metric for a classified scan \(s_{i}^{\text{class}}\) at time \(i\) compared to ground truth \(s_{i}^{\text{class, true}}\) is given by \[\text{accuracy}=\frac{1}{n_{i}}\sum_{j=1}^{n_{i}}\{+1\text{ if }p_{i}^{j}==p_{i}^{j,\text{true}}\text{ else }0\} \tag{4}\]
The final test accuracy of the policy compared to ground truth is \(86.19\%\pm 0.03\%\) with no a priori map requirement. We perform two ablation studies, modulating the model architecture and injected noise during training, with results in Fig. 4. The inclusion of the label encoder is responsible for a 11.63% increase in test accuracy. Because the robot moves relatively slowly compared to the policy update frequency, recently predicted labels strongly bias the current estimate toward the correct classification. We fix the history buffer to 9 time steps, corresponding to 1.8 seconds. The history buffer enables the policy to reason about its environment even though no reference map is provided. During training, uniform random noise is injected into the label history buffer to simulate inference-time auto-regressive accumulated errors. The policy is robust up to 30% of injected noise before the policy begins to over-fit and performance degrades. We select a policy trained with 10% injected noise to deploy on the robot. All ablation studies were performed with a fixed seed (0) and initialized with identical weights. We do not tune the random seed.
### _Simulation and Baseline Comparison_
Simulations are performed using Gazebo-ROS on the Turtlebot [4]. ROS move base is used for navigation [1]. Baselines are: 1) a lightly modified version of [32] (Multi-sensor Frontier Exploration), 2) the Next-Best View Planner (NBVP) [7], 3) a Rapidly-Exploring Random Tree (RRT) [3], and 4) LIVES given ground-truth scan classifications. The Multisensor baseline is equivalent to LIVES and the Ground-Truth variant without contextual LiDAR information. Two Gazebo world environments are tested with easy and hard setups in each. The first map is the apartment (20x30m) and examines the impact of non-target objects on LIVES (see Fig. 5(a-b)). Here the easy setting incorporates fewer objects in the environment and the search target being closer to the robot, while hard is defined by more objects in the environment and a larger distance to the target. The second map is the office (25x45m) and tests the method in a large environment under
Fig. 4: Ablation studies over policy architecture (top) and injected noise during training (bottom). The inclusion of the label history buffer yields 11.63% higher test accuracy. The policy is robust up to 30% to bit-flipping errors in the label history buffer. The mean accuracy depicts the 5-step moving average.
Fig. 5: The Apartment simulation environment (top row) in the Easy configuration (a) and Hard Configuration (b). The Office (c) shows the robot starting position as a red arrow. Easy setup involves the search target to the left of the cyan line and Hard involves the search target on the right of the cyan line.
significantly varied target position (see Fig. 5(c)). The easy and hard settings are defined by the search target being in the same or different half of the environment as the robot starting position. Both difficulty levels in each map are repeated 10 times. Average detection times and success rates are reported in Table I. The time limit given to planners is 2 and 3 minutes in the apartment easy and hard settings then 3 and 5 minutes in the office easy and hard settings. Trials where the planner fails or the time limit is reached are reported as a failure and failed trials are reported as the maximum allotted time.
Results indicate that LIVES, the Ground-Truth variant and the Multisensor Frontier Exploration method significantly outperform the RRT and NBVP baselines in all settings except for the hard office setting, where the RRT method performs similarly to LIVES and Ground-Truth methods. In the easy apartment setting we observe similar performance among the three planners. However in the hard setting, where LIVES and Ground-Truth perform nearly identically, there is roughly a 15% improvement with the addition of the knowledge of non-map points compared with the Multisensor method. We see a large gap, however, in the easy office setting with a near 30% improvement with the addition of contextual LiDAR information over the Multisensor variant. In the hard office setting, LIVES and Ground-Truth variant outperform the RRT method by 5-12%. Overall, the results indicate that the proposed method to inform the frontier planner with non-map information outperforms the baselines across the full spectrum of environments. We hypothesize this is due to the requirement for the exploration planners to exhaustively explore the environment, whereas the proposed method focuses on non-map points, allowing the agent to move on when the target is unlikely to be nearby. These results support hypotheses (1) and (2) that contextual LiDAR information significantly improves performance in the search task.
### _Hardware Experiments_
Experiments are conducted in a 20m x 30m indoor apartment setting. The environment is out of distribution as it is not included in the dataset \(\mathcal{D}\) used to train the pixel-wise classification policy. The platform used is a Boston Dynamics Spot robot with a Velodyne Puck LiDAR and an Azure Kinect RGB-D camera. The search target of interest is a small rolling suitcase and YOLOv5 [16] is used for detection of the object. On detection, the exploration node is shut down and the task is complete. We perform several trials over varying initial robot positions and target locations. A time lapse of one experiment is shown in Fig. 6, where the search target is situated behind the robot starting position. This example demonstrates the benefit of non-map information. In (b), if selecting simply based on the number of unknown cells discovered, the agent may select another frontier, for example the one on the left hand side. However, equipped with this contextual information, the agent selects the frontier that enables inspection of many non-map features nearby. The success of LIVES onboard a real robot verifies hypothesis (3). Videos of experiments can be found on the project website [https://sites.google.com/view/lives-icra-2024/home](https://sites.google.com/view/lives-icra-2024/home).
## V Conclusions
We present a method for visual search in unseen environments using contextual LiDAR information to inform planning. Results show LIVES to outperform the closest of several baseline methods in varied environments by 10-30%. Ablative studies on the map-free scan classifier are performed to validate the choice in architecture and training scheme. Furthermore, the proposed approach is deployed on a real robot in an unseen environment. Experiments certify both the visual search planner and the learning method for map-free scan classification in real-world scenarios. Ongoing work aims to extend the LIVES planner to multiple robots.
Fig. 6: A time lapse of one experiment scenario. In (a) the robot pose is a purple arrow with the visual costmap, global search map, frontier points and non-map points shown. The raw scan data is shown as small white squares. The target object is behind the robot starting position. Figure (b) occurs immediately before a planning step, wherein the marked frontier is selected due to the high occurrence of non-map points nearby. (c) occurs after the planning step and results in detection. |
2302.14222 | Spatio-temporal pulse cleaning in multi-pass cells | We study both numerically and experimentally the use of two third-order
nonlinear temporal filtering techniques, namely nonlinear ellipse rotation
(NER) and cross-polarized wave (XPW) generation, for spatio-temporal cleaning
of mJ energy 30 fs Titanium:Sapphire laser pulses in a multi-pass cell. In both
cases, a contrast enhancement greater than 3 orders of magnitude is observed,
together with excellent output pulse quality and record high conversion
efficiencies. Careful balancing of nonlinearity and dispersion inside the
multi-pass cell helps tune the spectral broadening process and control the
post-compressed pulse duration for specific applications. | Jaismeen Kaur, Louis Daniault, Zhao Cheng, Oscar Tourneur, Olivier Tcherbakoff, Fabrice Réau, Jean-François Hergott, Rodrigo Lopez-Martens | 2023-02-28T01:00:15Z | http://arxiv.org/abs/2302.14222v1 | # Spatio-temporal pulse cleaning in multi-pass cells
###### Abstract
We study both numerically and experimentally the use of two third-order nonlinear temporal filtering techniques, namely nonlinear ellipse rotation (NER) and cross-polarized wave (XPW) generation, for spatio-temporal cleaning of mJ energy 30 fs Titanium:Sapphire laser pulses in a multi-pass cell. In both cases, a contrast enhancement greater than 3 orders of magnitude is observed, together with excellent output pulse quality and record high conversion efficiencies. Careful balancing of nonlinearity and dispersion inside the multi-pass cell helps tune the spectral broadening process and control the post-compressed pulse duration for specific applications.
## 1 Introduction
In the past few years, multi-pass cells (MPC) have emerged as efficient tools for nonlinear spectral broadening and temporal manipulation of ultrashort laser pulses [1, 2], mainly because of their high throughput [3, 4, 5, 6, 7], power scalability [8, 9, 10], high compression factors [6], spatio-spectral beam homogeneity [11], and strong potential for nonlinear spatio-temporal pulse shaping applications [3, 12]. The attractiveness of MPCs lies in the fact that extended nonlinear propagation occurs over a relatively small footprint and that large B-integrals can be accumulated in small increments for every pass with minimal spatio-temporal couplings, provided the input beam size is carefully matched to the cell eigenmode and the beam size on the end-mirrors remains fairly constant for every pass [13]. MPCs also provide a large number of degrees of freedom in terms of choice of nonlinear medium as they are compatible with bulk material [2, 4, 5, 14] or even hybrid geometries [15].
Recently, MPC-based post-compression was extended from Ytterbium(Yb)-based laser systems to Titanium:Sapphire (Ti:Sa) lasers, with compressibility down to the few-cycle regime in a single stage [16, 17]. When used to reach ultra-high focused intensities in the frame of light-matter interaction experiments, most Ti:Sa laser systems rely on some form of nonlinear temporal filtering to suppress amplified spontaneous emission (ASE) and parasitic pulses surrounding the pulse peak. Such contrast enhancement techniques include saturable absorbers [18, 19], optical parametric amplifiers [20, 21, 22], second harmonic generation [23], plasma mirrors [24, 25], spectrally filtered self-phase modulation (SPM) [26], nonlinear ellipse rotation (NER) [27, 28, 29, 30, 31, 32] and the widely used cross-polarized wave generation (XPW) [33].
In this article, we benchmark both NER and XPW techniques in an MPC architecture for simultaneous post-compression and spatio-temporal cleaning of mJ-energy 30 fs pulses from a Ti:Sa laser, with compressibility down to few-cycle duration and record efficiency. We also rely on comprehensive (3+1)D numerical simulation of the nonlinear process, which accurately reproduces the measured data, to pinpoint the role played by the different experimental parameters and to find optimized, application-specific configurations for both techniques.
## 2 Nonlinear ellipse rotation in multipass cells
In NER, temporal cleaning is achieved thanks to the intensity-dependent rotation of the polarization ellipse of an ultrashort pulse undergoing nonlinear propagation [34]. Polarizing optics can be used to discriminate the high-intensity pulse peak that experiences nonlinear rotation against the unwanted, unrotated low-intensity ASE pedestal and the parasitic side pulses. NER was tested early on for contrast enhancement and spatial mode cleaning in air [30, 31] and gas-filled hollow fibers [29], the latter having been later shown to enable post-compression down to few-cycle duration with internal conversion efficiencies in the range of 40-55% [35, 36]. Very recently, NER in a gas-filled MPC was first explored through simulations [37] and then experimentally with Yb lasers both in a gas [38] and in a multi-plate arrangement [39]. In the following section, we describe how NER implemented in a gas-filled MPC can be used as a direct method for generating few-cycle pulses with high temporal fidelity using a Ti:Sa laser.
### Experimental setup for NER
A schematic layout of the general experimental setup is shown in fig. 1. A Ti:Sa chirped pulse amplifier system (Femtopower Pro-HE) delivers mJ-energy 30 fs pulses at 1 kHz repetition rate and features an acousto-optic programmable dispersion filter (Dazzler, Fastlite) that enables precise control over the output pulse spectral phase. A pair of focusing/diverging optics is used to fulfill the mode-matching conditions for stable MPC operation. A pair of folding mirrors mounted on a translation stage after the telescope is used to tune the position of the beam waist inside the cell. The quasi-concentric MPC consists of two 50.8 mm diameter enhanced-silver coated mirrors with a radius of curvature of 1.5 m, separated by \(\sim 3\) m. A small off-axis plane injection mirror couples the beam into the cavity with each beam pass forming a circular pattern on the mirror. After the required number of round-trips, the beam is picked off by another small plane off-axis mirror located on the same side as the injection. The 50.8 mm cavity mirror size ensures sufficient distance between the adjacent beam spots on the mirrors and minimizes beam steering losses. The number of passes in the cavity can be varied in even numbers by simply moving the pick-off mirror and can be set to a maximum of 18. The MPC can therefore support up to 54 m of propagation folded within an effective 3 m footprint.
The MPC is filled with Argon at a controlled pressure. Nonlinear refraction in Argon is taken into account in the simulations to find the appropriate mode-matching conditions and maintain an overall constant beam size on the cavity end-mirrors and at the center of the cavity, which also avoids damaging the optics and minimizes ionization [40]. The beam waist is set to 450 \(\mu\)m, corresponding to a 2.9 mm beam diameter on the end-mirrors. The number of passes is set to 12, corresponding to a total propagation length of 36 m.
Elliptical polarization is achieved by inserting a broadband quarter-wave plate (QWP) into the collimated beam before entering the mode-matching telescope. A half-wave plate (HWP) is placed before it to tune the input polarization direction and hence the ellipticity \(\epsilon_{i}\) inside the MPC from 0 (linear) to 1 (circular). A second crossed broadband QWP is used to retrieve linear polarization at the output of the MPC. A low-dispersion broadband thin film polarizer (TFP) (Femtolasers GmbH) with an extinction ratio of \(5\times 10^{-3}\) filters out the pulse peak rotated by NER at 90\({}^{\circ}\) with respect to the input polarization, while rejecting the low-intensity unrotated background. The temporally filtered pulses are then post-compressed using a set of chirped mirrors (up to \(-\) 450 fs\({}^{2}\), PC42, Ultrafast Innovations GmbH) and a pair of adjustable thin fused silica wedges. A range of diagnostic tools is used to characterize the NER-filtered pulses: an SHG-FROG (FemtoEasy) characterizes the pulses spectrally and temporally, a high dynamic range third-order cross-correlator (TUNDRA, Ultrafast Innovations GmbH) is used to detect the ASE pedestal and parasitic pulses over a time window of 200 ps around the pulse peak, and an imaging spectrometer (MISS, FemtoEasy) is used to assess the output spatio-spectral beam quality.
### Maximizing NER efficiency
Internal NER efficiency is defined as the ratio between the power measured after and before the output polarizer. Maximum NER efficiency is therefore achieved when the polarization ellipse rotates by exactly 90\({}^{\circ}\), such that transmission through the TFP is maximized. However, the NER angle is time-dependent along the time-varying pulse profile, which leads to a time-dependent transmission through the TFP. Moreover, as the pulses simultaneously experience spectral broadening and chirping through self-phase modulation (SPM), the NER angle also becomes wavelength-dependent, implying that the broadened spectrum is not uniformly transmitted through the polarizer. All these effects combined limit both the energy and the spectral throughput and thus drastically affect post-compression performance.
Fig. 2 (left panel) shows the evolution of internal NER efficiency versus B-integral for different ellipticities, obtained from numerical simulations for a plane-wave 1 mJ, 30 fs pulse propagating in Argon, including SPM, cross-phase modulation (XPM), self-steepening and gas dispersion. One clearly notices that lower ellipticities lead to a reduced spread of NER angles and, when combined with high B-integrals, yield higher throughput. However, our previous work on direct post-compression of 1 mJ 30 fs TiSa pulses in the same MPC configuration showed that the B-integral cannot be pushed much beyond 6 rad, corresponding to an Ar gas pressure around 400 mbar, where excessive self-steepening effects lead to poor pulse compressibility [16]. In the case of NER, the ellipticity should therefore not be lower than 0.15 and the maximum NER efficiency should not exceed 75%. Fig. 2(right panel) shows the simulated time-dependent NER angle during the rise and fall of the pulse along with the
Figure 1: Experimental setup layout for both NER- and XPW-MPC. Components marked in dark blue and green are used for performing NER and XPW, respectively. For clarity, only a few passes in the cell are represented. HWP: Half-Wave Plate, QWP: Quarter-Wave Plate
Figure 2: Left: Simulated NER efficiency vs. B-integral for various pulse ellipticities. Right: Temporal profile of NER pulse (grayed), transmitted pulse (solid blue) and rejected (dashed blue) pulse by the TFP; time-dependent NER angle (solid red), all simulated for \(\epsilon_{i}=0.25\) at maximum efficiency (\(B\simeq 6\) rad).
transmitted and rejected pulse profiles for \(\epsilon_{i}=0.25\) at maximum efficiency (\(B\simeq 6\) rad). The maximum NER angle is above 90\({}^{\circ}\), such that the transmission averaged over the whole pulse profile is maximized.
We performed the experiment using \(\epsilon_{i}=0.25\) and measured the internal NER efficiency along with the output pulse spectrum for increasing Ar pressures up to 420 mbar, where pulse break-up starts. Fig. 3 compares the measured data with the results obtained from (3+1)D simulations now including temporal, radial, and polarization dimensions and using measured device losses and the experimental laser pulse spectrum as input. Simulations show excellent agreement with measurements. Experimentally, the spectral bandwidth increases fast at first, then starts flattening out before increasing again around 420 mbar. The effects of pulse break-up can be seen in simulations at higher pressures, with the sudden appearance of wings and deep modulations in the broadened spectrum. The agreement between experiment and simulations is particularly good for the internal NER efficiency versus pressure, which reaches a maximum of 58% at an optimum Ar pressure of \(\simeq 350\) mbar and then rolls off because of the peak NER angle exceeding 90\({}^{\circ}\). We now can tune the ellipticity to a lower value, such that the maximum NER efficiency occurs just before pulse break-up around 420 mbar. This should lead to both a higher throughput and a broader spectrum, and therefore yield shorter compressed output pulses.
### Experimental results at optimum ellipticity
By setting the Ar pressure to 420 mbar and the pulse ellipticity to \(\epsilon_{i}=0.18\), the internal NER efficiency increases from 58% to 69%. The output polarizer is rotated accordingly to preserve the extinction ratio and the output energy drops to 0.49 mJ, yielding a global NER efficiency of 49%, including losses in the MPC. In this configuration, we obtain nearly-FTL 5.8 fs pulses (2.2 optical cycles at \(\lambda=800\) nm) as shown in fig. 4. The reconstructed 5.8 fs pulse profile (solid blue curve) is very close to the Fourier-transform-limited profile (FTL, dotted black curve), exhibiting near-perfect compression with a very low-intensity pedestal structure, limited only by residual phase and spectral modulations introduced by the paired double-angle chirped mirror compressor. The compressor efficiency is measured to be 87%, leading to an overall post-compression efficiency of 43%.
Fig. 5 compares the long-range temporal profiles of input and output pulses measured by the TUNDRA device. The contrast enhancement obtained via NER is at least 3 orders of magnitude, with ASE levels dropping down to \(1:10^{-11}\) a few ps prior to the pulse peak, and is limited by the extinction ratio of the TFP. The pre-pulse seen at -7.5 ps in the traces for both the input and NER pulses is an inherent artifact of the TUNDRA. The train of post-pulses visible in the NER trace originates from internal reflections within the TFP itself and is not observed when a Glan polarizer is used for extinction. However, the high thickness of a Glan polarizer leads to excessive dispersion and nonlinear effects distorting the output pulse and beam. Finally, the spectrally-resolved beam profile in the horizontal and vertical dimensions and the corresponding V-parameter measured for \(\epsilon_{i}=0.18\) are shown in fig. 6. In both dimensions, the output beam exhibits good spectral homogeneity, \(>98\%\) at FWHM and \(>85\%\) at \(1/e^{2}\).
These results must be compared to direct post-compression
Figure 4: Top: measured (left) and retrieved (right) SHG-FROG traces. Bottom: input and NER spectra (left) and the corresponding temporal profile (right) for \(\epsilon_{i}=0.18\).
Figure 5: Temporal contrast enhancement between input and NER (\(\epsilon_{i}=0.18\)) pulses measured using a high dynamic range third-order autocorrelator.
Figure 3: Top: evolution of the measured (left) and the simulated (right) transmitted NER spectrum with Ar gas pressure, for \(\epsilon_{i}=0.25\). Bottom: variation of the measured spectral width and Fourier-transform-limited (FTL) duration (left) and corresponding experimental NER conversion efficiency with Ar pressure compared to simulated values (right).
experiments [16]. Under the exact same experimental conditions, albeit with only 16 passes in the MPC, we measured 5.3 fs post-compressed pulses with 66% overall efficiency, chirped mirror compressor included. The 43% overall transmission measured in the case of NER amply justifies its implementation in an MPC post-compressor as it enables the direct generation of high-contrast few-cycle pulses with moderately higher losses, little compromise on the output pulse duration and very low added complexity.
## 3 Cross-polarized wave generation in multi-pass cells
XPW generation is a well-established and perhaps the most widely used technique for temporal contrast enhancement in high-energy Ti:Sa lasers. It is a degenerate four-wave mixing process governed by the anisotropy of the real part of the third-order nonlinearity tensor \(\chi^{(3)}\). Inside the nonlinear medium, a new orthogonally polarized wave is generated at the same central frequency as the incident wave. Conventionally, its implementation is quite straightforward: a linearly polarized laser pulse is focused into a crystal placed between crossed polarizers. Due to the cubic intensity dependence of the process in space and time, efficient conversion occurs only at high intensities, making it easy to filter out the cleaned XPW pulse from the lower-intensity ASE pedestal and parasitic pulses. The XPW conversion efficiency depends on the intensity of the pulse incident on the crystal, the thickness of the crystal, the input spatio-temporal pulse quality and its spectral phase [41, 42]. The incident intensity on the XPW crystal is limited by the threshold of white light generation in the crystal (e.g. \(\sim 10^{12}\) W/cm\({}^{2}\) for BaF\({}_{2}\) crystals). Using thicker crystals to achieve higher conversion leads to unwanted nonlinear third-order processes that tend to compete with XPW generation, making the XPW beam properties more sensitive to spatial-temporal couplings. The input intensity is also limited by damage due to self-focusing inside the crystal, which tends to reduce its lifetime. So far, the highest demonstrated global conversion efficiency has been limited to 10-20% using a double thin-crystal configuration [43] and, for mJ-energy pulses, some form of spatial filtering or shaping is needed to ensure a smoother or more homogeneous incident spatial beam profile on the crystals [44, 45]. In this work, we tested the implementation of XPW in the MPC, since the nonlinearity inside an MPC is acquired in small increments and spatially redistributed across the beam for every pass.
### Experimental setup for XPW
The setup for testing XPW in the MPC is depicted in fig. 1. Here, no QWPs are needed since the linear polarization direction of the XPW signal can simply be set by the HWP at the MPC input. The chamber is operated under vacuum and two anti-reflection-coated, 600 \(\mu\)m thick, holographic cut BaF\({}_{2}\) crystals aligned with the same orientation are placed symmetrically with respect to the center of the MPC. This configuration helps to mitigate spatial nonlinear effects and ensures spatio-spectral beam homogeneity. The distance of the crystals from the waist enables the tuning of the nonlinearity for every pass and is set to about 1 m, placing the crystals approximately 50 cm away from the end-mirrors. The chirped mirrors of the output compressor are changed to accommodate the narrower spectral bandwidth typically produced by the XPW process, and to introduce higher dispersion in order to compensate for the total amount of material traversed by the pulses in the BaF\({}_{2}\) plates. The input pulse parameters, output polarizer and pulse diagnostics all remain the same as for NER.
In the case of XPW, mode matching into the MPC becomes much more complicated. First, for a Gaussian input pulse in space and time, both spatial and temporal pulse profiles of the XPW wave are shortened by a factor of \(\sqrt{3}\) because of the cubic nonlinearity. Therefore, both input and XPW beams do not share the same beam matching conditions and their respective caustics cannot be stabilized together. Moreover, the input pulse peak power is about \(5\times 10^{3}\) times higher than the critical power for self-focusing in BaF\({}_{2}\), under our experimental conditions. Although the BaF\({}_{2}\) plates are thin enough to prevent catastrophic beam collapse, they have to be considered as thin nonlinear lenses that can disturb mode matching and overall propagation in the MPC. Moreover, the repeated energy transfer from the input pulse to the XPW wave leads to changes in the pulse peak power, such that the Kerr lensing experienced through the BaF\({}_{2}\) plates is different for both pulses and for every pass, especially for such high nonlinearities. Material dispersion and reflection losses from the plate surfaces further exacerbate this behavior.
Figure 6: (Top) spectrally-resolved beam profile in the horizontal (left) and vertical (right) dimensions and (bottom) output beam profile in arbitrary units in the horizontal (left) and vertical dimensions (right) along with their spectral homogeneity, measured for \(\epsilon_{i}=0.18\).
Figure 7: Left: Beam caustic of the fundamental wave alone including Kerr Lensing in the BaF\({}_{2}\) plates (solid blue) compared to linear mode matching in the MPC (dashed red). Right: XPW and fundamental beam caustics.
Numerical simulations were performed to determine the best beam matching when including Kerr lensing from the BaF\({}_{2}\) plates. First, the simulation was run for the input fundamental wave alone, while inhibiting XPW generation. Fig. 7 shows the beam caustic of the fundamental wave in the MPC, which is stable for the first few passes and then becomes strongly modulated, illustrating the impact of dispersion on the caustic stability. However, the beam size on the MPC end-mirrors is always larger than that of the linearly matched beam, which excludes any risk of optical damage by design. Excessive ionization at the beam waist, which can occur if the caustic is unstable, is not a concern here as the MPC is operated under vacuum. Second, XPW generation was enabled in the simulation by choosing the proper polarization that maximises efficiency, as detailed in the next section. The XPW caustic is plotted in fig. 7. It can be seen that the newly created XPW beam in the first BaF\({}_{2}\) plate pass is smaller than the fundamental. As expected, its caustic is highly modulated throughout the MPC, but here again the beam sizes on the mirrors are always larger than in the linear regime. The fundamental beam is more modulated in the presence of XPW generation due to the energy transfer, but the minimum beam size on the mirrors still remains close to that for linear beam matching, thus avoiding optical damage. These simulations show that Kerr lensing in the BaF\({}_{2}\) plates drastically disturbs beam propagation and caustic stability throughout the MPC, but also leads to larger beam sizes on the mirrors, such that the beam fluence systematically remains below the damage threshold of the end-mirrors.
### Experimental results of XPW generation
Fig. 8 shows the XPW efficiency simulated as a function of the number of passes in the MPC, including reflection losses introduced by the enhanced-silver-coated mirrors (\(\sim 1\%\) per bounce) and the BaF\({}_{2}\) crystals (\(\sim 3\%\) per MPC pass). The input polarization is set to 64.5\({}^{\circ}\) with respect to the optical axes of the BaF\({}_{2}\) crystals, which maximizes XPW conversion efficiency per pass [46]. For 16 passes, the total internal XPW efficiency reaches up to 65%, an unprecedented value which cannot be obtained for XPW in free space. First, the beam size on the crystals is well controlled thanks to the MPC geometry and enables good conversion efficiency per pass, as opposed to a free-space two-plate arrangement, where the beam constantly diverges. Second, the MPC geometry mitigates nonlinear spatial effects, enabling higher efficiencies without distorting the beam profile. However, such a high number of passes through the BaF\({}_{2}\) plates implies higher reflection losses, such that the overall efficiency, calculated as the output XPW power over the input power, rapidly drops. In practice, the number of passes should be limited to 10 passes to maximize the overall efficiency to about 35%, which is still higher than the current state-of-the-art. The beam size is set to 330 \(\mu\)m at the waist and 4 mm on the end-mirrors in order to fulfill the stability conditions of the MPC. The input pulses are p-polarized and the output TFP is oriented to select the XPW pulses along the orthogonal polarization direction. The XPW pulses are then compressed outside the MPC using a set of chirped mirrors (total dispersion \(\simeq-750\)\(fs^{2}\), HD58 from Ultrafast Innovations) and a pair of adjustable thin fused silica wedges. By compensating dispersion (entrance window, propagation in air) with the Dazzler in the laser chain, we can inject nearly-FTL 30 fs pulses into the MPC, and obtain 295 \(\mu\)J total XPW pulse energy, corresponding to 57% internal and 28% global conversion efficiency, respectively, while taking into account all the losses in the MPC.
The broadened XPW spectrum can be compressed down to 15 fs as shown in fig. 9. Fig. 10 shows the spectrally-resolved beam profile measured in both the vertical and horizontal dimensions with the imaging spectrometer, which exhibits a nearly-Gaussian profile in both cases. The homogeneity factor, as defined in [3], is also shown for both the dimensions in fig. 10. The beam exhibits excellent spectral homogeneity above 99% at the FWHM and above 95% at 1/e\({}^{2}\) in both dimensions. This is a direct advantage of implementing XPW in an MPC, where incremental accumulation of B-integral mitigates spatio-temporal couplings and yields excellent output beam quality. Fig. 11 shows the long-range temporal intensity profiles measured with the TUNDRA for both input and XPW pulses. The contrast enhancement is at least 3 orders of magnitude and limited, as for NER, by polarization extinction capability of the TFP. The pre-pulse at +7.5 ps and the train of post-pulses are similar to those observed in the NER measurements.
### Maximizing XPW efficiency
XPW generation has been shown to be accompanied by significant spatio-temporal reshaping due to interplay between XPM and SPM involving both fundamental and XPW pulses [44, 47].
Figure 8: Left: Internal (red) and overall (blue) XPW generation efficiency as a function of number of passes in the MPC.
Figure 9: Top: measured (left) and retrieved (right) SHG-FROG traces. Bottom: input and XPW spectra (left) and the corresponding compressed XPW temporal profile (right), for FTL pulses injected into the MPC.
When an initial \(-500\) fs\({}^{2}\) spectral phase is applied to the input pulses with the Dazzler to globally compensate for the effects of dispersion inside the MPC, the XPW energy increases to 360 \(\mu\)J, corresponding to 65% internal and 34% global conversion efficiencies, respectively. To the best of our knowledge, this is the highest conversion efficiency reported so far for XPW generation. However, this increase in conversion efficiency comes at the cost of lower spectral broadening and therefore slightly longer recompressed pulses of 19 fs, as shown in fig. 12. This result is in good agreement with previous studies on the effect of residual chirp on the output spectral behaviour [48], where narrower albeit smoother output XPW spectra were observed for negatively chirped input pulses. Finally, spectral homogeneity and contrast enhancement factors similar to the FTL case were measured for negatively chirped input pulses. Overall, the smooth XPW spectrum together with the increased available XPW pulse energy could be particularly useful for efficient seeding of further laser amplification stages in a double chirped pulse amplifier architecture.
## 4 Summary
In conclusion, we demonstrate efficient spatio-temporal cleaning of mJ-energy 30 fs pulses in an MPC using two different third-order nonlinear filtering techniques: XPW and NER. Comprehensive (3+1)D numerical simulations show excellent agreement with the measured data in both cases and enable us to carefully design the MPC architectures so as to obtain the highest output pulse fidelity. In both cases, a contrast enhancement \(>10^{3}\) could be observed together with near-FTL post-compressed pulse durations.
To the best of our knowledge, this is the first time that XPW has been implemented inside an MPC, exhibiting several advantages over a more conventional free-space setup: (1) record high efficiencies (up to 65% internal and 34% global), (2) no need for spatial filtering, (3) excellent output beam quality and spectral homogeneity, and (4) relatively higher tolerance to input beam pointing fluctuations. Better-adapted surface coatings on the nonlinear crystals and cavity end-mirrors should help significantly increase the overall energy throughput, and polarization optics with higher extinction ratios could easily increase the contrast enhancement factor (\(>10^{3}\)) by 2-3 orders of magnitude. This approach could therefore aid in designing efficient and compact devices for spatio-temporal pulse cleaning in high-peak-power laser systems.
To the best of our knowledge, this is also the highest total internal efficiency (up to 69%) reported for NER for 30 fs input pulses. Implementing NER in an MPC architecture with such pulses enables the direct generation of high-contrast few-cycle pulses (\(<6\) fs) with up to 43% global efficiency, in a single post-compression stage. The inherent power scalability of MPCs makes this an attractive end-of-chain solution for producing high peak-power few-cycle pulses with high temporal contrast suited to ultra-high intensity laser-matter experiments. Shorter post-compressed pulse duration, down to the single-cycle regime, could in principle be reached using dispersion-engineered coatings targeting net-zero linear chirp to suppress the saturation of Kerr nonlinearities, as observed in [16], which should enable octave-spanning broadening with high throughput. For this, however, the limitations on pulse compressibility
Figure 11: Temporal contrast enhancement between fundamental and XPW pulses.
Figure 12: Top: measured (left) and retrieved (right) SHG-FROG traces; bottom: input and XPW spectral properties (left) and corresponding output XPW temporal profile (right).
Figure 10: Top: measured spatio-spectra traces in horizontal (left) and vertical (right) dimensions; bottom: output beam profile in arbitrary units in horizontal (left) and vertical dimensions (right) along with their spectral homogeneity.
imposed by the residual phase profile of these coatings and ionization remains to be investigated.
**Funding.** This work was supported by the Agence National de la Recherche (ANR-10-LABX-0039-PALM), the Horizon 2020 Framework program (Advanced Grant ExCoMet 694596), LASERLAB-EUROPE (871124) and the Region Ile-de-France (SESAME 2012-ATTOLITE).
**Disclosures.** The authors declare no conflicts of interest regarding publication of this article.
**Data Availability Statement.** Data underlying the results presented in this article are not publicly available at this time but may be obtained from the authors upon reasonable request.
|
2301.13009 | DeFi: data-driven characterisation of Uniswap v3 ecosystem & an ideal
crypto law for liquidity pools | Uniswap is a Constant Product Market Maker built around liquidity pools,
where pairs of tokens are exchanged subject to a fee that is proportional to
the size of transactions. At the time of writing, there exist more than 6,000
pools associated with Uniswap v3, implying that empirical investigations on the
full ecosystem can easily become computationally expensive. Thus, we propose a
systematic workflow to extract and analyse a meaningful but computationally
tractable sub-universe of liquidity pools. Leveraging on the 34 pools found
relevant for the six-months time window January-June 2022, we then investigate
the related liquidity consumption behaviour of market participants. We propose
to represent each liquidity taker by a suitably constructed transaction graph,
which is a fully connected network where nodes are the liquidity taker's
executed transactions, and edges contain weights encoding the time elapsed
between any two transactions. We extend the NLP-inspired graph2vec algorithm to
the weighted undirected setting, and employ it to obtain an embedding of the
set of graphs. This embedding allows us to extract seven clusters of liquidity
takers, with equivalent behavioural patterns and interpretable trading
preferences. We conclude our work by testing for relationships between the
characteristic mechanisms of each pool, i.e. liquidity provision, consumption,
and price variation. We introduce a related ideal crypto law, inspired from the
ideal gas law of thermodynamics, and demonstrate that pools adhering to this
law are healthier trading venues in terms of sensitivity of liquidity and
agents' activity. Regulators and practitioners could benefit from our model by
developing related pool health monitoring tools. | Deborah Miori, Mihai Cucuringu | 2022-12-20T19:15:34Z | http://arxiv.org/abs/2301.13009v2 | DeFi: data-driven characterisation of Uniswap v3 ecosystem & an ideal crypto law for liquidity pools
###### Abstract
Uniswap is a Constant Product Market Maker built around liquidity pools, where pairs of tokens are exchanged subject to a fee that is proportional to the size of transactions. At the time of writing, there exist more than 6,000 pools associated with Uniswap v3, implying that empirical investigations on the full ecosystem can easily become computationally expensive. Thus, we propose a systematic workflow to extract and analyse a meaningful but computationally tractable sub-universe of liquidity pools. Leveraging on the \(34\) pools found relevant for the six-months time window January-June 2022, we then investigate the related liquidity consumption behaviour of market participants. We propose to represent each liquidity taker by a suitably constructed _transaction graph_, which is a fully connected network where nodes are the liquidity taker's executed transactions, and edges contain weights encoding the time elapsed between any two transactions. We extend the NLP-inspired _graph2vec_ algorithm to the weighted undirected setting, and employ it to obtain an embedding of the set of graphs. This embedding allows us to extract seven clusters of liquidity takers, with equivalent behavioural patterns and interpretable trading preferences. We conclude our work by testing for relationships between the characteristic mechanisms of each pool, i.e. liquidity provision, consumption, and price variation. We introduce a related _ideal crypto law_, inspired from the ideal gas law of thermodynamics, and demonstrate that pools adhering to this law are healthier trading venues in terms of sensitivity of liquidity and agents' activity. Regulators and practitioners could benefit from our model by developing related pool health monitoring tools.
Clustering Decentralised Finance Ideal gas law Network Analysis NLP Uniswap v3
## 1 Introduction
A _blockchain_ is a type of Distributed Ledger Technology (DLT) that stores users' transactions on an increasingly long sequence of blocks of data. The ledger is replicated across a network of servers to allow the validation of new transactions by the peer-to-peer (P2P) computer network and consequent addition of blocks, thus increasing trust, security, transparency, and the traceability of data. Bitcoin was the first blockchain to acquire worldwide notoriety. It was designed by the person(s) known under the pseudonym Satoshi Nakamoto during 2007 and 2008, and subsequently described in the whitepaper [1] in 2009. The project was indeed released as open source software in 2009 and, from that moment, Bitcoin started slowly acquiring increasing value and seeing higher trading volumes. However, Bitcoin lacks the capacity for logical loops and conditionals, and this limitation fueled the rise of the Ethereum blockchain, which was first described in the 2013 whitepaper [2].
Ethereum supports smart contract functionality and, due to this, it is able to offer financial services that do not rely on intermediaries such as brokerages, exchanges or banks. Thus, Ethereum is commonly considered as the protocol that
first allowed the formulation of foundations for Decentralised Finance (DeFi). Within DeFi, individuals can lend, trade, and borrow using software that automatically broadcasts their intentions for P2P verification. Then, valid financial actions are recorded on the blockchain. Decentralised Exchanges (DEXs) are a direct result of this setup, and started being designed and implemented mainly from 2017. They differ from the usual centralised exchanges, since they are non-custodial and leverage the self-execution of smart contracts for P2P trading, allowing users to retain control of their private keys and funds. One of the first and most established DEXs at the time of writing is Uniswap, built on Ethereum and launched in November 2018. There exist three versions of Uniswap (namely v1, v2, v3, see the whitepapers [3], [4], [5] respectively) that update its design and evolve its functionalities.
We focus on Uniswap v3 data in this research, and investigate the related ecosystem. Thus, we now highlight some of Uniswap core aspects and begin with the simpler features that characterise its first two versions v1 and v2. Uniswap is an Automated Market Maker (AMM), and in particular, a Constant Function Market Maker (CFMM). This implies that digital assets are traded without centralised permission and the pricing occurs following a mathematical formula, rather than relying on an order book as in traditional exchanges. Uniswap smart contracts hold liquidity reserves of various token pairs and trades initiated by liquidity takers (LTs) are executed directly against these reserves. Reserves are pooled from a network of liquidity providers (LPs) who supply the system with tokens in exchange for a proportional share of transaction fees. A standard Uniswap liquidity pool allows the exchange, or _swap_, of two assets via the constant product market maker mechanism
\[(x-\Delta x)\cdot\Big{[}y+\big{(}1-\frac{\gamma}{10^{6}}\big{)}\Delta y\Big{]} =x\cdot y=k, \tag{1}\]
where \(x,y\in\mathbb{Q}^{+}\) are the current pool reserves of tokens \(X,Y\) respectively. Then, \(k\in\mathbb{Q}^{+}\) tracks the evolution of liquidity of the pool, and \(\gamma\in\{100,500,3000,10000\}\) (i.e. \(\{1,5,30,100\}\) basis points) denotes the _feeTier_ characteristic of the pool. Here, a LT sells an amount \(\Delta y\) of token \(Y\) to the pool and receives \(\Delta x\) of token \(X\) back. The instantaneous exchange rate \(Z\) between the two digital assets is given by the proportion of respective reserves in the pool, i.e.
\[Z=\frac{x}{y}, \tag{2}\]
and changes following the trades of LTs. A proportion \(\frac{\gamma}{10^{6}}\) of each swap is kept by the protocol to reward LPs for their service. Swaps do not change \(k\), while this invariant does vary if new liquidity is _minted_ (added) or _burned_ (withdrawn) in the pool by LPs. Of course, higher liquidity assures less price slippage for LTs and is thus preferred. LPs profit an amount proportional to their involvement into the whole liquidity of the pool for each trade occurred. However, they also incur _impermanent loss_ due to the need to stake both tokens to provide liquidity, while bearing the risk of varying exchange rates.
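As a concrete illustration of Eqs. (1)-(2), the sketch below applies a single swap to a constant product pool. The function name, the example reserves and the return convention are ours and are not part of the paper's tooling; the fee fraction \(\gamma/10^{6}\) is simply set aside for LPs, so the invariant \(k\) is left unchanged by the swap, as stated above.

```python
def swap_y_for_x(x, y, dy, fee_tier):
    """Sell dy of token Y into a constant-product pool holding reserves (x, y).

    fee_tier follows Eq. (1): a fraction fee_tier / 1e6 of the input
    (e.g. 3000 -> 0.30%) is set aside for liquidity providers, and only the
    remainder moves the reserves along x * y = k.
    Returns (dx_received, new_x, new_y, lp_fee).
    """
    k = x * y
    lp_fee = (fee_tier / 1e6) * dy
    dy_eff = dy - lp_fee
    new_y = y + dy_eff
    new_x = k / new_y              # from (x - dx) * (y + dy_eff) = k
    dx = x - new_x
    return dx, new_x, new_y, lp_fee

# Example: reserves x = 1,000, y = 2,000 (rate Z = x / y = 0.5), 30 bps fee tier.
dx, x2, y2, fee = swap_y_for_x(1_000.0, 2_000.0, dy=100.0, fee_tier=3000)
print(f"received {dx:.4f} X, new rate Z = {x2 / y2:.6f}, LP fee = {fee:.4f} Y")
```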
In Uniswap v3, the main difference from the previous versions is that _concentrated liquidity_ is implemented. The continuous space of exchange rates of pools is discretised into intervals whose boundaries are called _ticks_, and every minting action of LPs specifies two ticks between which the liquidity will specifically be provided. This means that LPs can choose the range of prices and proportions over which they lock their tokens. However, concentrated liquidity also implies that LPs collect LTs' fees only while the exchange rate of executed trades lies between two ticks over which they are indeed providing liquidity. For a full characterisation of the wealth of LTs and LPs in Uniswap v3, we point to the mathematical analyses pursued in [6, 7].
For completeness, it is also worth mentioning that every action (i.e. creation of a pool, swap, mint or burn operation...) that occurs on Uniswap, or in the general DeFi universe, must be validated and registered on the blockchain to be considered executed. This introduces a further cost for the initiator of the action, who needs to pay non-negligible _gas fees_[8] to miners to compensate them for the computational power they devote. This is especially significant for blockchains that use a Proof-of-Work consensus protocol, such as Ethereum until September 2022. Indeed, Ethereum transitioned to Proof-of-Stake via the upgrade named "The Merge", allowing validation of transactions to not only rely on computational power, and opening the opportunity to have lower gas fees and enhanced users participation in DeFi (despite not yet reality).
Many further interesting new and old finance concepts live within DeFi and beyond DEXs. One such concept worth mentioning first is that of _stablecoins_. Stablecoins are digital assets that are pegged to the value of a fiat currency, and can be useful to exit risky positions while remaining inside the crypto ecosystem. Some stablecoins are fiat-backed (e.g. USDC, Tether), while others are backed by an over-collateralised pool of cryptocurrencies (e.g. DAI). There also exist algorithmic coins (e.g. UST), which closely resemble traditional pegged exchange rates and are indeed also vulnerable to speculative attacks, as happened with the Terra-Luna crash in May 2022. Apart from stablecoins, DeFi provides several lending protocols (e.g. Aave, Compound, Instadapp, Maker), protocols for derivatives trading (e.g. dYdX, Futureswap, Nexus), and DEX aggregators (e.g. 1inch) that optimise routing to take advantage of the best exchange rates across multiple other exchanges. In [9], there is an interesting study of the interactions between different blockchain protocols.
While DeFi is fascinating, it is also the stage of many scams, speculative high-risk investments, direct blockchain attacks, and money laundering events. On top of that, its complexity and atomicity might disadvantage small users, whose transactions can e.g. be re-ordered before execution by the validators for their own profit, known here as _miner extractable value_ (MEV). Despite the current effort of regulators to penetrate the crypto world and establish some equilibrium between centralisation and decentralisation of power, the current situation and possible upcoming developments are still highly confusing, especially for outsiders or newcomers. Interesting overviews and critical thoughts are presented in [10] and [11], where the latter work especially discusses enforcing tax compliance, anti-money laundering laws and how to possibly prevent financial malfeasance. The different layers of DeFi are studied in [12], where the related specific risks, i.e. at the blockchain, protocol, pool, and token level, are also analysed and a risk parity approach for portfolio construction in Uniswap is proposed.
The current academic research is also at its early stages in terms of understanding the inner dynamics of DEXs and external relationships with the well-known traditional stock market, especially from an empirical and data-driven point of view. Among interesting recent studies there is [13], where the authors investigate how promoting a greater diversity of price-space partitions in Uniswap v3 can simultaneously benefit both liquidity providers and takers. Then, [14] studies whether AMM protocols, such as Uniswap, can sustainably retain a portion of their trading fees for the protocol and the expected outflow of traders to competitor venues. Inefficiencies between Uniswap and SushiSwap are investigated in [15], where sub-optimal trade routing is addressed. However, [16] shows that constant product markets should closely track the reference market price. Flows of liquidity between Uniswap v2 pools are studied in [17], while [18] shows the difficulty of earning significant returns by providing liquidity in Uniswap v3. Interestingly, [6] fully characterises the wealth of LTs and LPs in Uniswap v3, and shows that LPs suffer a "Predictable Loss".
**Main Contributions.** We divert from the available literature in many ways. To the best of our knowledge, this is the first study to systematically identify a set of pools that need to be considered for a full view of Uniswap v3 dynamics. We assess both the pools' inner features and their interconnectedness, and present a workflow for extracting significant sub-universes of pools in time, which can be completely reproduced by the reader. Then, we leverage on this first point to cluster LTs and characterise their broad behaviour. Our final contribution is to propose an _ideal crypto law_ for liquidity pools, inspired by the ideal gas law from thermodynamics. We provide motivation for it and show that pools with high _cryptoness_, i.e. strongly adhering to the proposed law, are healthier crypto environments on which to trade, in terms of their levels of liquidity and activity of agents. The strength of _cryptoness_ of a pool can evolve in time and hence it is important to track it, along with various metrics that quantify the risks associated to the respective pool.
**Structure of the Paper.** In Section 2, we identify the most important and interconnected liquidity pools for different time windows within 2022. Next, we cluster LTs according to their behaviour on the relevant sub-universes in Section 3. Due to the complexity of this ecosystem, we draw on intuition from Natural Language Processing (NLP) and graph embedding techniques to assess structural equivalence of trading behaviour in a novel way. Section 4 expands our investigations by proposing an _ideal crypto law_ to simultaneously model LTs, LPs and price dynamics for each pool under consideration. Finally, we summarise our thoughts and discuss future research directions in Section 5.
## 2 Systematic selection of Uniswap v3 pools of interest
### Empirical introduction to the ecosystem
At the time of writing, Uniswap v3 is the latest implementation of this DEX. It launched in May 2021, introduced the concept of concentrated liquidity and allowed multiple feeTiers. For each Uniswap version \(N=1,2,3\), the addresses of related liquidity pool smart contracts are stored in the respective "UniswapV\(N\)Factory" contracts. We access them on Etherscan1 by querying for transactions related to UniswapV\(N\)Factory addresses23 and filtering for methods "Create Exchange", "Create Pair" or "Create Pool", for the three versions respectively. We find totals of \(3,857\), \(992\) and \(40\) associated calls, respectively, until 15 November 2022, which we agglomerate at the daily level and plot in Fig. 1. The dates of transition from Uniswap v1 to v2, and v2 to v3, are also depicted. It is interesting to notice that the previous protocols remain active after the transitions, but their liquidity can be easily moved to the new Uniswap versions via "Migrator" contracts. In terms of pools, the main differences between v1, v2 and v3 are that the first protocol allows only pools where one token is ETH and feeTier \(\gamma=3000\), while the second one introduces the ability to create a pool between any two tokens. Then, v3 expands pools to feeTiers \(100\), \(500\) or \(10000\). Although there is
a total of \(4,889\) pools directly created with UniswapV\(N\)Factory contracts, the majority of them is the result of the 2020-21 cryptomania and inflated creation of new tokens. This translates to many pools not containing any relevant amount of liquidity locked, but which do not disappear due to the immutability of the blockchain. On the other hand, we also expect wrapped calls to the Factory contracts and thus refer to the above as a lower bound to the number of pools created.
As a last note, we show the evolution of Uniswap liquidity on its main Ethereum chain in Fig. 2. The data are downloaded via the Defi Llama5 API and we proxy liquidity by the total amount of USD locked on the protocol, i.e. Total Value Locked (TVL).
Footnote 5: [https://defillama.com/](https://defillama.com/)
### Data download and coarse refinement of pools
Each version of Uniswap has its own dedicated subgraph, which has a precise endpoint for querying data and a database schema to expose the available fields. Our terminology follows the Uniswap v3 schema6 and we download pools data via the related subgraph. Our first aim is to identify the pools most representative of the Uniswap ecosystem, which we interpret as having significant liquidity consumption and provision events, but also showing high interconnectedness. To this end, we download the latest summary data of all possible pools, the full historical record of liquidity consumption operations, and the full record of liquidity provision actions. Then, we develop a systematic approach that discards pools with the weakest features first (this subsection) and then those with the weakest dynamics (the next one). The final data set will be our starting point for the subsequent analyses, but it also aims at being useful to a wider group of researchers who wish to empirically investigate Uniswap v3.
Footnote 6: [https://github.com/Uniswap/v3-subgraph/blob/main/schema.graphql#L1](https://github.com/Uniswap/v3-subgraph/blob/main/schema.graphql#L1)
**Download summary data of pools.** As of 15 November 2022, we download the latest "Pool" data as described in the Uniswap v3 subgraph schema. We download pools in descending transaction count (txnCount) order, since this variable is strictly increasing in time, while e.g. TVL is not. This allows us to select a universe of pools which have had a
Figure 1: Daily count of new pools created via UniswapV1Factory, UniswapV2Factory and UniswapV3Factory smart contracts. The two vertical orange lines depict the dates of official transition from Uniswap v1 to v2, and from v2 to v3.
Figure 2: Evolution in time of the TVL in USD on Uniswap main Ethereum chain. The higher the TVL, the more liquid the ecosystem is considered to be. In orange, we show the dates of transition from Uniswap v1 to v2, and from v2 to v3.
minimum number of transactions thus far. We apply this weak initial filtering on the first \(6,000\) pools by txnCount, and find that only \(1,344\) pools report at least \(1,000\) transactions by 15 November 2022. Then, we also restrict ourselves to pools where both exchanged tokens are traded in at least \(3\) pools (e.g. token \(T\) is traded against a stablecoin, against ETH and against ETH with different feeTier), in order to focus on interesting dynamics of the full ecosystem. The result is that we subset to a universe of \(696\) pools to consider in our study.
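A minimal pandas sketch of this coarse filter is shown below; the dataframe layout and the column names (`id`, `token0`, `token1`, `txnCount`) are our assumptions loosely following the subgraph schema, and counting token occurrences over the already-active pools is an illustrative choice.

```python
import pandas as pd

def coarse_filter(pools: pd.DataFrame,
                  min_txn: int = 1_000,
                  min_pools_per_token: int = 3) -> pd.DataFrame:
    """Keep pools with enough transactions whose tokens appear in several pools.

    `pools` is assumed to hold one row per pool with columns
    ["id", "token0", "token1", "txnCount"] obtained from a subgraph query.
    """
    active = pools[pools["txnCount"] >= min_txn]
    # count in how many active pools each token is traded
    token_counts = pd.concat([active["token0"], active["token1"]]).value_counts()
    liquid_tokens = set(token_counts[token_counts >= min_pools_per_token].index)
    mask = active["token0"].isin(liquid_tokens) & active["token1"].isin(liquid_tokens)
    return active[mask]
```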
**Download LP data.** We download liquidity provision data for these \(696\) pools and find non-empty entries for \(629\) of them. Liquidity provision data are downloaded to have a historical record of all liquidity mint and burn operations on each pool, with the related USD value. By computing the total cumulative sum of LPs activity, we proxy the TVL in USD that each pool contains at every moment in time and denote it as "proxyTVL". Unfortunately, we cannot simply rely on the "PoolDayData" values provided by the subgraph due to inconsistencies found when cross-checking with Ethereum blockchain data on Etherscan.
Uniswap v3 data start on 6 May 2021, when the transition from the previous version of the protocol successfully completed. While for the first months only the \(500\), \(3000\) and \(10000\) feeTiers were implemented, in November 2021 a fourth feeTier \(\gamma=100\) was activated. This generated structural flows, noise and adjustments that we prefer to exclude from our analyses. On top of that, we recognise that the transition of Uniswap's foundation blockchain (i.e. Ethereum) from Proof-of-Work to Proof-of-Stake in September 2022 could have triggered turbulences on the ecosystem too. Thus, we decide to focus our analyses on the six-months period from January 2022 to the end of June 2022, which we consider as the most representative of the actual DEX dynamics. We check for every pool if our proxyTVL passes the threshold of \(1,000,000\) USD (one million dollars) at any point before the end of June 2022. This is motivated by the aim to find pools that were liquid enough at some point in our time window to capture interesting behaviour of LTs and LPs. We find \(282\) pools that satisfy this further requirement. Of these, \(210\) had that much TVL already at some point before January 2022, and \(261\) at some point before April 2022. While some pools acquire relevance as time passes, other ones can also lose liquidity, as in the extreme case of pools related to the Terra-Luna crash of May 2022. As a final detail, we highlight that we additionally check whether a pool has at least one million USD in TVL for two consecutive points in time in order to avoid pools where a substantial amount of liquidity is minted and immediately burned by an agent, to likely take advantage of specific external information.
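The proxyTVL construction and the two-consecutive-points check can be sketched as follows; the event dataframe columns (`timestamp`, `amountUSD`, `type`) are our assumptions rather than the exact subgraph field names.

```python
import pandas as pd

def proxy_tvl(lp_events: pd.DataFrame) -> pd.Series:
    """Cumulative USD value locked, built from a pool's mint/burn history.

    `lp_events` is assumed to have columns ["timestamp", "amountUSD", "type"],
    with type in {"mint", "burn"}: mints add liquidity, burns remove it.
    """
    signed = lp_events["amountUSD"].where(lp_events["type"] == "mint",
                                          -lp_events["amountUSD"])
    return signed.groupby(lp_events["timestamp"]).sum().sort_index().cumsum()

def passes_tvl_threshold(tvl: pd.Series, threshold: float = 1_000_000.0) -> bool:
    """True if proxyTVL stays above `threshold` for two consecutive points,
    which filters out liquidity that is minted and immediately burned."""
    above = tvl >= threshold
    return bool((above & above.shift(fill_value=False)).any())
```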
**Download LT data.** For the above \(282\) pools, we download related liquidity consumption data and find all non-empty data sets. Thus, we are left with a final set of \(282\) liquidity pools, for which we have a summary file, a LP database, and a LT database each. This completes our coarse filtering of pools, which implements the least invasive possible initial requirements, while still filtering down the universe of Uniswap v3 liquidity pools to a tractable number of instances. The diagram in Fig. 3 summarises the steps completed.
### Final refinements
**Stronger TVL and txnCount constraints.** From our first coarse filtration, we recover \(282\) pools to consider further in Uniswap v3 analyses. However, we shall be more strict about the minimum number of transactions taking place on a pool and its TVL in time, in order to lower the noise-to-signal ratio in the data. As already motivated, we wish to focus on the six-months window \([January,July)\) 2022, which we denote as our case \(A\). We also consider five sub-ranges, namely the two three-months windows \([January,April)\), \([April,July)\) that we denote as cases \(B1/B2\), and the three two-months windows \([January,March)\), \([March,May)\), \([May,July)\), that we call cases \(C1/C2/C3\). For each case and related time window \([start,end)\), we extract the pools with at least \(1,000\) transactions before \(start\) (where the number of transactions in time is calculated via the cumulative sum of both swap events and mint or burn operations) and
Figure 3: Summary diagram of the filtration steps pursued during our coarse refinement of pools. Stronger constraints on the TVL and txnCount of pools will follow in the next subsection, along with attention to maximising the interconnectedness of the final sub-universe of pools.
that also had at least \(1,000,000\) USD in proxyTVL both at the \(start\) and \(end\) of the interval. Considering sub-ranges allows us to further account for the appearance of new pools that became significantly liquid or active after January 2022, or pools that lost the majority of their liquidity before July 2022, thus lowering survivorship/loser bias. For cases \(A/B1/B2/C1/C2/C3\) in order, we find respectively \(113/126/148/131/146/155\) pools that satisfy the above requirements, for which we save the related addresses and information. Taking the union of these sets of pools, we notice that we are considering \(177\) different pools overall. Of these, five pools belong to the \(100\) feeTier, \(28\) pools to the \(500\) feeTier, \(84\) pools to the \(3000\) feeTier and \(60\) pools to the \(10000\) feeTier.
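A compact sketch of the per-window selection is given below; the input structure (a per-pool pair of time series holding the cumulative transaction count and the proxyTVL) is an assumption made for illustration.

```python
import pandas as pd

def select_pools_for_window(pools: dict, start, end,
                            min_txn: int = 1_000,
                            min_tvl: float = 1_000_000.0) -> list:
    """Pick pools that were active and liquid over [start, end).

    `pools` is assumed to map a pool id to {"txn_cum": Series, "tvl": Series},
    both indexed by timestamp. A pool qualifies if it has at least `min_txn`
    transactions before `start` and at least `min_tvl` proxyTVL at both ends.
    """
    def last_before(series: pd.Series, when) -> float:
        sub = series.loc[:when]
        return float(sub.iloc[-1]) if not sub.empty else 0.0

    selected = []
    for pool_id, s in pools.items():
        if (last_before(s["txn_cum"], start) >= min_txn
                and last_before(s["tvl"], start) >= min_tvl
                and last_before(s["tvl"], end) >= min_tvl):
            selected.append(pool_id)
    return selected
```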
To gain a brief insight into the most liquid and active venues, we consider the pools extracted for case \(A\) and plot in Fig. 4(a) the \(10\) pools with highest proxyTVL at the end of June 2022, and in Fig. 4(b) the \(10\) pools with highest total number of transactions over the six months of relevance. For the first pool in the ranking of both measures, we plot the related evolution of liquidity and daily number of transactions in Fig. 5. As a convention, we refer to pools with the format "SYMBOL1-SYMBOL2/feeTier", where we use the trading symbols of the two tokens exchanged by the pool. Stablecoins, wrapped Ether (WETH) and wrapped Bitcoin (WBTC) dominate the landscape of tokens swapped in the most liquid and active venues, which is expected since they are the oldest, most established, or safest cryptocurrencies that agents can trade and develop strategies on. Then, it is interesting to observe how the DAI-USDC/100 pool is much younger than the USDC-WETH/500 one, but quickly gained strong liquidity levels due to its tokens being both stablecoins.
Figure 4: The \(10\) most liquid and active pools for case \(A\), i.e. over the time window between January and June 2022.
Figure 5: Evolution of liquidity, i.e. proxyTVL, and daily number of transactions for the pool with highest TVL at the end of June 2022 (DAI-USDC/100), and for the one with largest total number of transactions during the full six-months window of case \(A\) (USDC-WETH/500).
**Filtering of pools by interconnectedness.** Considering the full list of data sets of liquidity provision and consumption actions to pursue investigations of more than \(100\) pools quickly becomes computationally expensive. Thus, we further subset our pools of interest by requiring minimum levels of interconnection between them. This is also beneficial in assuring a focus on the joint dynamics that characterise the Uniswap ecosystem as a whole.
For each one of our cases \(A/B1/B2/C1/C2/C3\), we build a weighted graph \(G=(P,E)\). The set of nodes \(P\) denotes relevant pools, and edges \((p,q)\in E\) with \(p,q\in P\) have weights \(w_{pq}\) that encode a measure of similarity defined below. We start by considering two possible different measures of connection between pools:
1. Number of common LTs (_or_ LPs) active on both pools, which are identified by the entry "origin" in the Uniswap data.
2. Number of common smart contracts, i.e. "senders" in the Uniswap data, called by origins to execute swap transactions (_or_ to execute liquidity provision operations).
We separate between a focus on liquidity consumption or provision in the above measures, since the two dynamics differ substantially, and one might prefer to enhance the sub-universe under consideration to be more representative of one or the other. Of course, the intersection or union of the results can be then used to pursue broader analyses. To clarify some Uniswap terminology, every "swap action" is initiated by an _origin_\(O\), then it calls a smart contract referred to as _sender_\(S\), and ends to the _recipient_\(R\). In liquidity provision, only the origin and sender of operations are relevant. Figure 6 shows the distribution of number of origins, senders and recipients in each pool for both LT and LP data over the time window of case \(A\). We also show the distribution of the intersection between origins, senders and recipients' addresses.
We now proceed to study the relationship between the size of each graph's giant component and a minimum threshold on the value of the measure used to create the link between each pair of pools. After fixing a threshold, we consider the pools in the related giant component as our relevant interconnected sub-universe. Figure 7 shows the variation in size of the giant component for case \(A\), when modifying the minimum number of common origins or senders for LT and LP data. We aim at considering the tails of the distributions for each case (i.e. time interval), which amounts to \(\sim 20-30\) pools in each instance, to retain the most significant connections and possible dynamics of the ecosystem. For case \(A\), this results in the choice of thresholds \(2,000\) and \(100\) for minimum common origins and common senders respectively, on the LT data. Similarly, we choose thresholds \(30\) and \(3\) for minimum common origins and common senders respectively, on the LP data. Finally, we consider the intersection of surviving pools for the two graphs generated by LT data, and find \(27\) common pools (out of the \(34\) and \(36\) pools, respectively in each graph). For LP data, we find a number of \(19\) final relevant pools (from the intersection of \(25\) and \(30\) pools). The full pipeline is repeated for cases \(B1/B2/C1/C2/C3\).
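The thresholding step can be sketched with networkx as follows; `common_counts` (pairwise counts of shared origins, or of shared sender contracts) is an assumed pre-computed input, and scanning over thresholds reproduces curves like those in Fig. 7.

```python
import networkx as nx

def giant_component_pools(common_counts: dict, threshold: int) -> set:
    """Pools in the largest connected component of the thresholded similarity graph.

    `common_counts` maps frozenset({pool_p, pool_q}) -> number of shared origins
    (or shared senders); an edge is created whenever the count reaches `threshold`.
    """
    G = nx.Graph()
    for pair, count in common_counts.items():
        if count >= threshold:
            p, q = tuple(pair)
            G.add_edge(p, q, weight=count)
    if G.number_of_nodes() == 0:
        return set()
    return max(nx.connected_components(G), key=len)

# e.g. scan thresholds and keep the one giving a ~20-30 pool giant component:
# sizes = {th: len(giant_component_pools(common_lt_origins, th))
#          for th in range(0, 5001, 100)}
```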
For the interested reader, Fig. 8 depicts the distribution of origins, also called Externally Owned Accounts (EOAs), that are both LTs and LPs on each pool for case \(A\) before filtering by interconnectedness. This quantity is shown as a fraction of both the total number of LTs and the total number of LPs on each pool. We witness an extreme case for pool "WETH-sETH2/3000", for which more than \(20\%\) of the total amount of LTs are also LPs. However, the number of LTs that also act as liquidity providers is a negligible minority. If we further count the total number of LTs, LPs and LTs acting also as LPs regardless of the pool, we find \(479,161\), \(23,952\) and \(13,640\) such market participants, respectively. Approximately half of LPs also act as LTs, but LTs acting also as LPs are still a small minority.
Figure 6: The intersection of origins and senders is always zero since the former are wallets of users and the latter smart contracts. Recipients can instead be both, hinting at more complex patterns in the execution of transactions. To be precise, here we are specifically considering the sub-universe of pools relevant for case \(A\) at the end of all our refinements.
**Final enhancement on pools for liquidity consumption analysis.** Ideally, we should also consider the flow of funds across pools and find the related most interconnected graph. However, this is intractable if using only Uniswap data and not the full list of Ethereum blockchain transactions. Indeed, LTs are active across different DeFi protocols and can easily move liquidity from one venue to another and back. We propose an approximation to the problem taking advantage of the fact that each trader's transaction can include multiple actions, which happen "instantaneously but in order" when the full transaction is validated. Thus, if a LT executes two swaps of the form \(X\to Y,Y\to Z\) for tokens \(X,Y,Z\) within the same transaction, then we interpret \(Y\) as a _bridge_ between the action of selling \(X\) to buy \(Z\). We view this as an indication of the flow of (smart) money between pools and of possible arbitrage opportunities, relevant to the LT sub-universe. In summary, we consider the following steps:
1. Merge all LT data before the interconnectedness analyses, e.g. data for the \(113\) pools of case \(A\).
2. Keep all the transactions for which there are at least two inner actions, i.e. same "transaction id" but different "logIndex" in Uniswap terminology.
3. For each resulting transaction:
   1. For each token that appears in the transaction actions, keep a _flow list_ of related buying (\(-1\)) or selling (\(+1\)) trades in all the related pools by looking at the sign of the amount swapped by the pool.
   2. For each token, consider its flow list and find all the occasions when a \(-1\) is immediately followed by a \(+1\) (i.e. the token was first bought in a pool and then sold in another pool, acting as one of our bridges).
   3. Save this occurrence of a flow between pools as a _bridge transaction_,
where we are approximating only jumps of length one. As an example, for a flow list of the form \([-1,+1,+1]\), we only consider the flow as one from the first pool to the second one. For more specific analyses, one could consider the specific amounts traded and check the relative proportions exchanged from the first pool to the second and third ones, but this is outside the scope of our current investigation.
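A minimal sketch of this flow-list scan for a single multi-action transaction is given below; the input format and the sign encoding (positive when a token is sold to a pool, negative when it is bought from it) are our assumptions for illustration.

```python
from collections import defaultdict

def extract_bridges(transaction_actions):
    """Find length-one flows ("bridges") inside one multi-action transaction.

    `transaction_actions` is an ordered list of (pool_id, token, amount) tuples,
    one per swap leg, where amount > 0 means the token was sold to the pool and
    amount < 0 means it was bought from it.
    Returns a list of (pool_bought_in, pool_sold_in, token) bridge transactions.
    """
    flows = defaultdict(list)                     # token -> ordered list of (sign, pool)
    for pool_id, token, amount in transaction_actions:
        flows[token].append((-1 if amount < 0 else +1, pool_id))

    bridges = []
    for token, flow in flows.items():
        for (s1, p1), (s2, p2) in zip(flow, flow[1:]):
            if s1 == -1 and s2 == +1:             # bought in p1, then immediately sold in p2
                bridges.append((p1, p2, token))
    return bridges

# Example: swap X -> Y in pool_A, then Y -> Z in pool_B, within one transaction.
actions = [("pool_A", "X", +10.0), ("pool_A", "Y", -5.0),
           ("pool_B", "Y", +5.0), ("pool_B", "Z", -3.0)]
print(extract_bridges(actions))                   # [('pool_A', 'pool_B', 'Y')]
```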
We extract all bridge transactions between pools and create a directed graph for each one of our temporal cases. Nodes are pools as usual, and edges are built for each pair of pools that have at least some number of bridge transactions
Figure 8: Distribution of origins that act both as LTs and LPs on the same pool.
Figure 7: Evolution of the size of the giant component for graphs of pools in case \(A\), when varying the threshold of common origins and senders for (a) swap transactions, (b) liquidity provision operations.
between them. Of course, each pair of pools can have up to two edges between them according to the direction of related bridge transactions. Then, we keep the largest connected component from the undirected version of the graph and add the resultant set of nodes to the LTs pools saved from the previous interconnectedness analyses. For case \(A\), we require at least \(800\) bridge transactions between two pools to create the related edges. The resultant giant component (see Fig. 9(a) for a visualisation) has \(22\) nodes, seven of which were not included in our LT set of pools from the previous analyses and are thus added. Figure 9(b) further highlights the nodes with highest eigenvector centrality in the graph, where we can especially notice how several pools of WETH against a stablecoin are proposed. This is intuitively sensible, since LTs can take advantage of routing to complete specific re-balancing of tokens via more liquid and favourable pools, which tend to have stablecoins, WETH and WBTC as their tokens, as shown in the earlier analyses.
The final list of \(34\) pools that we propose to consider for LTs analyses is: DAI-WETH/3000, CEL-WETH/3000, USDC-UOS/1000, DAI-USDC/100, SPELL-WETH/3000, WETH-CRV/10000, USDC-USDT/500, DAI-FRAX/500, WETH-BTRFLY/10000, GALA-WETH/3000, WETH-USDT/3000, WBTC-USDC/3000, DAI-USDT/500, UNI-WETH/3000, WETH-ENS/3000, DAI-USDT/500, WBTC-WETH/500, MATIC-WETH/3000, DAI-WETH/500, WETH-USDT/500, UNI-WETH/500, WETH-USDT/500, WETH-WETH/3000, PASC-WETH/3000, FASK-WETH/10000, FRAX-USDC/500, USDC-WETH/3000, USDC-WETH/10000, LUSD-USDC/500, HEX-USDC/3000, USDC-NCR/500, SHIB-WETH/3000, DYDX-WETH/3000, USDC-USDT/100, HEX-WETH/3000.
Regarding LP pools, we have instead the following \(19\) pools: WETH-CRV/10000, MKR-WETH/3000, WETH-USDT/3000, WETH-USDT/3000, UNI-WETH/3000, WETH-ENS/3000, WBTC-WETH/500, MATIC-WETH/3000, DAI-WETH/500, WETH-USDT/500, USDC-WETH/500, LINK-WETH/3000, WBTC-WETH/3000, USDC-WETH/3000, SHIB-WETH/3000, WBTC-USDT/3000, USDC-USDT/100, USDC-USDT/500, SHIB-WETH/10000.
The final results for cases \(B1/B2/C1/C2/C3\) are then listed in Appendix A, for the benefit of the reader that can use these sub-universes of pools as starting point for their own investigations on Uniswap v3. In our next steps, we specifically focus on the pools extracted for longest cases \(A/B1/B2\).
Figure 9: Results from our bridges investigation for case \(A\), which covers the six-months window from January to June 2022. If a LT executes two swaps \(X\to Y,Y\to Z\) one after the other (for tokens \(X,Y,Z\)), then we interpret \(Y\) as a bridge between the action of selling \(X\) to buy \(Z\). We save all pairs of pools for which there is a common token that acts as a bridge, with the related number of occurrences of bridge transactions. Then, we create a directed graph where nodes are pools and edges are built for each pair of pools that have at least \(800\) bridge transactions between them.
## 3 Structural Investigation of the Uniswap v3 Ecosystem
### Clustering of Liquidity Takers
#### 3.1.1 Overview and pre-processing
The DeFi ecosystem has grown increasingly complex in the recent years. The first step to shed more light on its intrinsic features and dynamics is to better understand its own components, which is what motivates the following empirical investigation of LTs trading behaviour on Uniswap v3. This is a non-trivial task, for a number of reasons. First of all, agents can easily generate numerous crypto wallets, and hence in some sense, "multiply" their identities to hide or obfuscate their full behaviour. Their actions are then generally spread over a broad set of possible pools, vary significantly in size both within and across different types of pools, and also happen with evolving frequencies over time. Applying the usual initial clustering methodologies would indeed be difficult (i.e. defining a set of features that characterise pools to then perform dimensionality reduction, and finally compute similarity measures), due to the complexity of the ecosystem. Thus, we propose a novel method to express and cluster structural trading equivalence of agents on multiple environments by leveraging both network analysis and NLP techniques. A sample of possibly external features is then used to judge and characterise the groups unravelled and extract insights on the main types of agents present in the ecosystem.
We focus on the LT data for our three longest periods \(A/B1/B2\), which we defined and described in the previous section. For each case, we first look at the distribution of the total number of transactions performed by the different LTs over each full time window. As an example, the distribution for case \(A\) is represented by the blue bars in Fig. 10. We then require a minimum number of transactions completed by each LT, since considering only a very small sample of trades per agent does not provide meaningful structural information on their behaviour. Thus, we impose a lower bound of at least an average of \(10\) transactions per month that each LT must have completed. On the other hand, we manually define maximum thresholds to remove only extreme singular outliers from each distribution, for computational purposes. The initial total distribution for case \(A\) is shown in Fig. 10, where we also highlight how it changes when requiring a minimum number of transactions equal to \(25\) (orange bars), and when we require our final minimum and maximum thresholds of \(60\) and \(15,000\) total transactions, respectively (green bars). For cases \(B1/B2\), we require the range \(30\) to \(5,000\) transactions for the former, and \(30\) to \(11,000\) transactions for the latter. Overall, we find a number of LTs approximately between \(3,500\) and \(5,000\) for all our periods \(A/B1/B2\). This altogether defines the final sets of LTs along with their transactions. Next, we proceed to computing their embeddings, which are subsequently used for the final clustering stage.
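As a minimal illustration of this pre-processing step, the sketch below filters LTs by their total transaction count with pandas; the toy data, column names and variable names are placeholders and do not reflect the actual schema of our dataset, and the thresholds shown are those of case \(A\).

```python
import pandas as pd

# Illustrative only: `swaps` is assumed to hold one row per swap, with the LT address in "origin".
swaps = pd.DataFrame({
    "origin": ["0xaaa", "0xaaa", "0xbbb", "0xccc"],
    "pool":   ["USDC-WETH/500"] * 4,
})

MIN_TXNS, MAX_TXNS = 60, 15_000                      # case A thresholds
counts = swaps.groupby("origin").size()              # total swaps per LT over the window
kept = counts[(counts >= MIN_TXNS) & (counts <= MAX_TXNS)].index
lt_swaps = swaps[swaps["origin"].isin(kept)]         # transactions of the retained LTs
```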
#### 3.1.2 Methodology
NLP background and _graph2vec._The field of Natural Language Processing (NLP) studies the development of algorithms for processing, analysing, and extracting meaningful insights from large amounts of natural language data. Examples of its myriad applications include sentiment analysis of news articles, text summary generation, topic extraction and speech recognition. One of the turning points in NLP was the development of the _word2vec_ word embedding technique [19], which considers sentences as directed subgraphs with nodes as words, and uses a shallow two-layer neural network to map each word to a unique vector. The learned word representations capture meaningful syntactic and semantic regularities, and if pairs of words share a particular relation then they are related by the same constant offset in the embedding space. As an example, the authors observe that the singular/plural relation is captured, e.g. \(x_{apple}-x_{apples}\approx x_{car}-x_{cars}\), where we denote the vector for word \(i\) as \(x_{i}\). Words sharing a common context in the corpus of sentences also lie closer to each other, and therefore, relationships such as \(x_{king}-x_{man}+x_{woman}\approx x_{queen}\) are satisfied with the analogies indeed predicted by the model.

Figure 10: Distribution of total number of transactions (txns) performed by LTs during case \(A\). We show the full distribution, the result after requiring a minimum of at least \(25\) transactions, and the distribution after applying thresholds of minimum \(60\) and maximum \(15,000\) transactions. The latter scenario results in our final set for case \(A\), which comprises \(3,415\) LTs. A small cluster of LTs much more active than others is already discernible.
Taking inspiration from this idea of preserving knowledge of the context window of a word in its embedding, the _node2vec_ algorithm [20] learns a mapping of nodes in a graph to a low-dimensional space of features by maximising the likelihood of preserving network neighbourhoods of nodes. The optimisation problem is given by
\[\max_{f}\sum_{s\in S}\log Pr(N_{L}(s)|f(s)), \tag{3}\]
where \(G=(S,T)\) is a graph with nodes \(S\) and edges \(T\), \(f\) is the mapping function for nodes to \(n\)-dimensional vectors that we aim to learn, and \(N_{L}(s)\subset S\) is the network neighbourhood of node \(s\) generated with sampling strategy \(L\). The latter is designed by the authors of _node2vec_ as a biased random walk procedure, which can be tuned to either focus on sampling a broader set of immediate neighbours, or a sequence of deeper nodes at increasing distances. Then, Problem (3) is solved for \(f\) by simulating several random walks from each node and using stochastic gradient descent (SGD) and backpropagation.
By taking a further step towards general language representations, [21] proposes the unsupervised algorithm _Paragraph Vector_ (also known as _doc2vec_), which learns continuous fixed-length vector embeddings from variable-length pieces of text, i.e. sentences, paragraphs and documents. The vector representation is trained to predict the next word of a paragraph from a sample of the previous couple of sentences. Both word vectors and paragraph vectors need to be trained, which is again performed via SGD and backpropagation.
As _doc2vec_ extends _word2vec_, _graph2vec_[22] is a neural embedding framework that aims to learn data-driven distributed representations of an ensemble of arbitrarily sized graphs. The authors propose to view an entire graph as a document, and to consider the rooted subgraphs around every node in the graph as words that compose the document, in order to finally apply _doc2vec_. This approach is able to consider non-linear substructures and thus has the advantage of preserving and capturing structural equivalences. One necessary requirement to pursue this analogy is for nodes to have labels, since differently labelled nodes can then be considered as different words. These labels can be decided by the user, or can simply be initialised with the degree of each node. Thus, _graph2vec_ considers a set of graphs \(\mathcal{G}=\{G_{1},G_{2}...\}\), where the nodes \(S\) of each graph \(G=(S,T,\lambda)\) can be labelled via the mapping function \(\lambda:S\rightarrow\mathcal{L}\) to the alphabet \(\mathcal{L}\). The algorithm begins by randomly initialising the embeddings for all graphs in the set \(\mathcal{G}\), then proceeds with extracting rooted subgraphs around every node in each one of the graphs, and finally iteratively refines the corresponding graph embedding in several epochs via SGD and backpropagation, in the spirit of _doc2vec_. The rooted subgraphs act as the context words, which are used to train the paragraph (i.e. graph) vector representations. Subgraphs are extracted following the Weisfeiler-Lehman (WL) relabeling process [23]. The intuition is that, for each node in a graph, all its (breadth-first) neighbours are extracted up to some depth \(d\). Labels are then propagated from the furthest nodes to the root one, and concatenated at each step. In this way, a unique identifier for each node is derived from its "context" and the full set can be used to train an embedding for the graph. The optimisation problem thus becomes
\[\max_{f^{\prime}}\sum_{G\in\mathcal{G}}\sum_{s\in S}\log Pr(g^{d}_{WL}(s)|f^{ \prime}(G)), \tag{4}\]
where the aim is to maximise the probability of the WL subgraphs given the current vector representation of the graph. Here, \(f^{\prime}\) is a mapping function of graphs to \(n\)-dimensional representations, and \(g^{d}_{WL}\) are WL subgraphs with depth \(d\).
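The sketch below illustrates one round of the WL relabeling idea described above; it is a simplified illustration rather than the exact routine of any particular _graph2vec_ implementation, and the hashing scheme and helper names are our own.

```python
import hashlib
import networkx as nx

def wl_relabel(graph: nx.Graph, labels: dict, depth: int = 1) -> dict:
    """Simplified Weisfeiler-Lehman relabeling.

    `labels` maps each node to its initial label (for us, the pool identifier of the
    swap). At every round, a node's new label hashes its own label together with the
    sorted labels of its neighbours, so two nodes end up with the same label only if
    their local "contexts" agree up to the current depth.
    """
    current = dict(labels)
    for _ in range(depth):
        updated = {}
        for node in graph.nodes():
            neigh = sorted(current[n] for n in graph.neighbors(node))
            signature = str(current[node]) + "|" + "|".join(map(str, neigh))
            updated[node] = hashlib.md5(signature.encode()).hexdigest()[:8]
        current = updated
    return current
```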
A modification of _graph2vec_ for LTs embedding.For each one of our cases \(A/B1/B2\), we consider all the related LTs and their full set of transactions on the sub-universe of LTs' pools of relevance. We then introduce the concept of a _transaction graph_\(G_{txn}\), which we use to represent the behaviour of each active agent.
**Definition 3.1** (Transaction graph).: A _transaction graph_\(G_{txn}=(S,T,W)\) is the complete weighted graph where nodes \(S\) are the swap actions that the LT under consideration has executed, and edges \((s,r)\in T\) with \(s,r\in S\) are built between every pair of nodes. Each edge has a weight \(w_{sr}\in W\), which encodes the amount of time \(\Delta t\) (in seconds) elapsed between the two transactions \(s,r\). Each node \(s\in S\) has a label \(l_{s}\) from the alphabet \(\mathcal{L}\), which uniquely identifies the pool that the swap was executed into. Importantly, \(\mathcal{L}\) is shared among the full set of LTs and related transaction graphs.
Labels in the alphabet \(\mathcal{L}\) differentiate between swaps executed on different pools, i.e. pools with unique combination of tokens exchanged and feeTier implemented. This implies that the algorithm receives as input only general identifiers
of pools. Thus, we can consider intuitive differences (e.g. expected volatility of the exchange rate on pools of stablecoins versus on pools of more exotic tokens) only afterwards, when assessing and investigating the meaningfulness and interpretability of the extracted clusters.
We now have a set of graphs representing LTs, and our aim is to find a \(n\)-dimensional vector representation of each one of its elements. We cannot plainly apply the _graph2vec_ algorithm, since the concept of neighbours of a node is irrelevant in a complete graph. Thus, we modify its mechanism to take advantage of the weight that the different links between nodes have, while maintaining the overall intuition. For each node \(s\in S\) of a graph \(G_{txn}\), we sample a set of neighbours \(N_{txn}(s)\) by generating random numbers from a uniform distribution between \([0,1]\) and comparing them to the _cut-value_ of the edges between the node and possible neighbours. If the value is below the cut-value, then the link is kept and the associated node added to \(N_{txn}(s)\). In this way, the probability of an edge to be chosen is inversely proportional to its weight \(\Delta t\), and the sub-structures kept represent clustered activity in time.
**Definition 3.2** (Cut-value).: The _cut-value_\(C(w_{sr})\) of an edge \((s,r)\in T\) with weight \(w_{sr}\in W\) in graph \(G_{txn}=(S,T,W)\) is computed as
\[C(w_{sr}) =\frac{H(f^{scal}(w_{sr}))}{H(f^{scal}(\min W))}, \tag{5}\] \[\text{with }\quad H(w_{sr}) =\sqrt{\frac{2}{\pi}}\exp\frac{-w_{sr}^{2}}{2},w_{sr}\geq 0,\] \[f^{scal}(w_{sr}) =\frac{w_{sr}-\min W}{(\max W)/|S|},\]
where we are using a half-norm that is shifted and scaled to adapt to each LT's extreme features, i.e. \(\min W\) and \(\max W\). The final cut-value is also normalised to impose a value of \(C(\min W)=1\), meaning that the shortest link(s) in the graph is chosen with probability \(1\) (of course, only if it is involved in the current node under consideration).
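A possible implementation of the cut-value of Eq. (5), together with the associated neighbour sampling, is sketched below; the data structure `graph_edges` (a mapping from each node to its incident edges with their \(\Delta t\) weights) is an assumption made only for illustration.

```python
import numpy as np

def half_norm(x):
    # Shape of the half-normal density: H(x) = sqrt(2/pi) * exp(-x^2 / 2), for x >= 0.
    return np.sqrt(2.0 / np.pi) * np.exp(-0.5 * x ** 2)

def cut_value(w, w_min, w_max, n_nodes):
    """Cut-value C(w) of Eq. (5): keep-probability of an edge with weight w (seconds)."""
    scaled = (w - w_min) / (w_max / n_nodes)        # f_scal(w)
    return half_norm(scaled) / half_norm(0.0)       # normalised so that C(w_min) = 1

def sample_neighbours(node, graph_edges, w_min, w_max, n_nodes, rng=None):
    """Keep each edge incident to `node` with probability equal to its cut-value,
    so that temporally clustered transactions are likely to stay connected."""
    rng = rng or np.random.default_rng()
    kept = []
    for other, delta_t in graph_edges[node]:        # graph_edges: node -> [(other_node, Δt), ...]
        if rng.uniform() < cut_value(delta_t, w_min, w_max, n_nodes):
            kept.append(other)
    return kept
```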
After having generated the set of \(N_{txn}(s)\), \(\forall s\in S\), we perform WL relabeling and proceed as in the vanilla version of the _graph2vec_ algorithm. We set all the hyperparameters to their default values, i.e. number of workers = 4, number of epochs = 10, minimal structural feature count = 5, initial learning rate = 0.025, and down sampling rate of features = 0.0001. The only exception is the number of WL iterations, which in our case must be set to \(1\) instead of \(2\). The result is an embedding for each graph in our set of transaction graphs, which becomes a set that we can subsequently cluster via the popular k-means++ methodology. Importantly, we want to underline that our embeddings and clusters do not depend on the real magnitudes of weights \(\Delta t\), since the sampling is adjusted on that. In addition, they also have no notion of the amount of USD traded, thus being agnostic to the transaction value. As a final note, we refer the reader to [24] for a version of _graph2vec_ that uses edge labels. However, the algorithm creates the dual version of the graph and would not be effective in our case, thus providing ground for our proposed extension.
An illustrative example.To clarify our approach, we describe a simple example. Consider an agent that executes \(20\) transactions. She executes the first \(10\) transactions shortly clustered in time, waiting only \(60\) seconds one after the other. Then, she waits \(42\) minutes to action on the final \(10\) transactions with, again, a frequency of one minute. Her behaviour is plotted in the transaction graph \(G_{txn}\) of Fig. 11a, where we assume for simplicity that each transaction is performed on the same pool and thus colour-code nodes all the same. We also number nodes to show the order in which the related transactions are executed. We do not draw all the edges of this complete graph for clarity and ease of visualisation, but hint with the green dashed lines that indeed there are more connections to be remembered. In this example, the minimum time between transactions is \(60\) seconds (light blue edges) and the maximum one is one hour, i.e. \(3,600\) seconds (light grey edge). Some intermediate times are depicted as edges with the same colour for the same weight. The resultant cut-value function \(C(w)\) that defines our sampling probabilities to choose edges is shown in Fig. 11b. As intuitively desired, we aim at always keeping the shortest edges and indeed these have probability \(1\). Then, we also aim to keep the most clustered "communities", and indeed, we observe from the plot that transactions five minutes away are still chosen with \(40\%\) probability, but longer times are very easily dropped.
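Plugging the numbers of this example into Eq. (5) reproduces the probabilities quoted above (note that the \(\sqrt{2/\pi}\) factor cancels in the ratio):

```python
import numpy as np

w_min, w_max, n_nodes = 60.0, 3600.0, 20            # illustrative example above

def cut_value(w):
    scaled = (w - w_min) / (w_max / n_nodes)        # f_scal(w)
    return np.exp(-0.5 * scaled ** 2)               # ratio of Eq. (5), sqrt(2/pi) cancels

for dt in (60, 300, 3600):
    print(f"C({dt:>4}s) = {cut_value(dt):.2f}")
# C(  60s) = 1.00 -> the shortest edges are always kept
# C( 300s) = 0.41 -> five-minute gaps survive with roughly 40% probability, as stated above
# C(3600s) = 0.00 -> hour-long gaps are essentially always dropped
```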
#### 3.1.3 Analysis of results
For each case \(A/B1/B2\), we study the structural equivalence of LTs' trading activity by clustering the representations generated via our modified _graph2vec_ algorithm. Focusing first on case \(A\), we compute embeddings for dimensions \(n\in\{8,16,32,64\}\), and confirm with Principal Component Analysis (PCA) that the proportions of data's variance captured by different dimensions are well-distributed. For each \(n\)-dimensional set of vectors, we then group LTs by performing a series of k-means++ clusterings with different number of desired groups. We compute the inertia of each partition found, where inertia is defined as the sum of squared distances of samples to their closest cluster center. Then,
we choose the optimal clustering via the _elbow method_, i.e. by picking the point where the marginal decrease in the loss function (inertia) is not worth the additional cost of creating another cluster. The similarity between optimal clusterings for different dimensions is then computed, in order to investigate the stability of results across representations of increasing dimensionality. We achieve this by computing the Adjusted Rand Index (ARI) [25], which is a measure of similarity between two data clusterings in the range \([-1,1]\), adjusted for the chance of grouping of elements. We find ARIs for clusterings on \(8\)-vs-\(\{16,32,64\}\) dimensional data around \(0.75\), while clusterings on \(16\)-vs-\(32\), \(16\)-vs-\(64\) and \(32\)-vs-\(64\) dimensional data reach approximately the value of \(0.90\). Therefore, we conclude that there is a high stability of results when our data are embedded in at least \(16\) dimensions, and use the related \(16\)-dimensional vector representations for our final analyses. The related optimal number of clusters of LTs for case \(A\) is seven. Similar results arise for cases \(B1/B2\) too, and the related optimal numbers of clusters of LTs are six and seven, respectively.
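A minimal sketch of this clustering and stability check, using scikit-learn's k-means++ and Adjusted Rand Index, is given below; the embedding matrix is a random placeholder standing in for the real \(16\)-dimensional LT representations.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
emb_16d = rng.normal(size=(3415, 16))     # placeholder for the real 16-dimensional LT embeddings

inertia, labels_by_k = {}, {}
for k in range(2, 16):
    km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0).fit(emb_16d)
    inertia[k], labels_by_k[k] = km.inertia_, km.labels_

# Choose k at the elbow of the inertia curve; the stability across embedding dimensions is
# then assessed with the Adjusted Rand Index between the two optimal partitions, e.g.
# adjusted_rand_score(labels_16d, labels_32d).
```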
Each extracted clustering is based on the structural similarity of LTs' trading behaviour. To judge the goodness of our modified algorithm and assess the results, we investigate whether there are specific features or trends that are highly representative of only some of the groups. Thus, we proceed to computing a set of summary statistics for each LT, and calculate the average of these results over the LTs belonging to each different group. The features that we consider are:
* average and median USD traded,
* average and median time \(\Delta t\) in seconds between transactions,
* proportion of transactions done in "SS", "EXOTIC" or "ECOSYS" pools, and related entropy,
* proportion of transactions done in pools with a specific feeTier, and related entropy,
* proportions of trades on days when the S&P LargeCap Crypto Index7 increased or decreased in value, or when the market was closed due to weekends and bank holidays. Footnote 7: [https://www.spglobal.com/spdji/en/indices/digital-assets/sp-cryptocurrency-largecap-index/#overview](https://www.spglobal.com/spdji/en/indices/digital-assets/sp-cryptocurrency-largecap-index/#overview)
The distinction between "SS", "EXOTIC" or "ECOSYS" pools is inspired by the classification in [17], where the authors introduce a notion of normal pools, stable pools and exotic pools. For them, stable pools exchange tokens that are both stablecoins. Normal pools trade instead tokens that are both recognised in the crypto ecosystem, while exotic pools deal with at least one token that is extremely volatile in price (e.g. YAM, MOON and KIMCHI). We slightly divert from this classification and define "SS" pools as pools whose tokens are both stablecoins, "ECOSYS" pools as pools that exchange only tokens that are either stablecoins or pegged to the most established BTC and ETH coins, and "EXOTIC" pools as the remaining ones. ECOSYS pools can be seen as the venues carrying the "safest" opportunity for profit for a novice crypto investor, since they trade volatile tokens though directly related to the most established blockchains that are the true foundations of the whole DeFi environment.
The average magnitude of features computed over the LTs belonging to each different cluster for case \(A\), i.e. over Jan-June 2022, is reported in Fig. 12a. We focus on the groups found specifically for this period because it is the longest one and thus, it provides us with the most general results and insights. Cases \(B1/B2\) will be later described too, in order to assess the overall stability of recovered _species_ of LTs and highlight any specific variations due to different sub-periods in time and related pools of relevance considered. The seven clusters of LTs found have sizes of \(304/142/512/978/379/186/914\) agents respectively, which means that we are able to find a well-balanced distribution of cluster sizes without any dominant clusters in terms of size. Thanks to the heatmap in Fig. 12a, we also easily confirm that our methodology is able to extract different groups of LTs that have significant variation of behaviour with respect to the outer features defined. However, a few columns had to be dropped due to non-significance of their results. Importantly, we also recall that inner biases on ratios are present (e.g. when considering that our sub-universe does not have a uniform distribution of numbers of pools with specific feeTier), and thus we can expect more/less transactions of some type on average. For visualisation purposes, we also embed the \(16\)-dimensional representations of LTs into a \(2\)-dimensional view via t-SNE, and plot them with perplexity \(=15\) in Fig. 12b. LTs are colour-coded according to the cluster they belong to, and we indeed observe that different groups lie on different parts of the plane.

Figure 11: For our illustrative example, we show in (a) a simplified representation of the LT’s transaction graph, and in (b) the cut-value that defines probabilities of keeping edges as neighbours.
Focusing on Fig. 12a, one can draw the following high-level remarks.
Figure 12: Clustering of LTs for case \(A\), i.e. over the six-months time window between January and June 2022. In (a), each row represents one of the recovered clusters and columns are the different features computed to characterise species of LTs. The color-code employed applies to each column separately to be able to quickly identify the related smallest and biggest values in magnitude, and judge the general distribution. It is essential to always check the magnitudes of cells per se too, due to highly variable variance between columns. In (b), the t-SNE plot of embeddings of LTs is reported with perplexity \(=15\) and points are color-coded according to their cluster of membership.

* **Groups 0 and 1** have a strong focus on trading exotic cryptocurrencies. The former set of LTs mainly uses feeTier \(3000\) for the purpose, and shows a slightly higher than average tendency to trade when the market is closed. The latter group makes significant use of both the \(3000\) and \(10000\) feeTiers, meaning that the related LTs are willing to accept even extremely high transaction costs. This behaviour could indicate that they have high confidence in their intentions and possibly urgency.
* On the other hand, **groups 2 and 3** trade stablecoins more than usual. The former cluster could point to an enhanced use of SS pools to take advantage of optimised routing, while the latter has a non-negligible proportion of trades in exotic pools with feeTier \(10000\). Likely, group 3 isolates a set of LTs that are interested in niche exotic tokens, which are only proposed in pools against stablecoins that do not overlap. Diverting funds between two of these exotic tokens requires an exchange between the two related stablecoins too, which motivates the recovered statistics. We also witness strong usage of the feeTier \(100\), which hints to traders trying to compensate the high costs suffered in pools with feeTier \(10000\) by paying the lowest possible fees on the SS pools.
* **Groups 4 and 6** are more active than average on ECOSYS pools. The two groups differ noticeably from their opposite relative strength of USD traded and time between operations. Overall, group 6 trades less money and waits longer, mainly using pools with low feeTier \(500\). These features can be interpreted as characteristics of cautious retail traders that invest in less risky and highly well-known crypto possibilities. And indeed, we also find that this group is one of the largest in size. Then, group 4 also relates to ECOSYS pools. However, these users tend to trade more USD with higher frequency, and this is also the cluster with much higher than average proportion of LTs that also act as LPs (\(\sim 16\%\)). Therefore, we identify here a group of more professional investors.
* Finally, **group 5** shows a significant usage of all the three types of liquidity pools, but trades are concentrated in pools with cheap feeTier \(500\). These agents trade often, and indeed show the smallest median time between transactions. These eclectic, active and thrifty LTs are probably our group of smartest investors.
Our results confirm that the proposed algorithm is able to recognise variance in the data, and allow us to extract interesting insights into the behaviour of different types of species of LTs. In particular, we observe how the type of pools on which LTs are active plays a primary role in the definition of their trading behaviours. This is especially interesting since no full notion of tokens and feeTier is used in the generation of the embeddings. Indeed, only a unique label per pool is provided as input to our algorithm, e.g. USDC-WETH/500 could be pool "P1", USDC-WETH/3000 pool "P2" and FXS-WETH/10000 pool "P3". Thus, these pools would be considered equally different if no structural discernible pattern was recognised by the methodology, providing some further evidence of the strength of our proposed extension to _graph2vec_.
Stability analyses.As already motivated, we now pursue the same analyses described above but for cases \(B1/B2\). We cluster the \(n\)-dimensional embeddings for \(n\in\{8,16,32,64\}\) and compute the ARIs between each pair of resultant sets of LT groups. We confirm that at least a \(16\)-dimensional embedding is required in order to have a stability of clusters in case \(B1\), while only eight dimensions suffice for the case \(B2\). For simplicity, we use the \(16\)-dimensional representations consistently in all cases. We recover six groups of LTs in case \(B1\), and seven in case \(B2\). In both cases, we find two clusters with same characteristics as groups \(4\) and \(6\) of case \(A\), i.e. traders mainly active on ECOSYS pools. We also recover the eclectic traders of group \(5\). Therefore, we observe several stable and persistent types of LTs. Small perturbations happen instead on the groups trading on SS or EXOTIC pools, as one could expect from the mere evolution of time and external market conditions, and consequently generation of different behaviours. In particular, all case \(A\) species, except group \(1\), are also found in case \(B1\). On the other hand, case \(B2\) shows less intensity on group \(3\), probably due to investors diversifying more during the crypto turmoils of the second quarter of 2022. Overall, we observe general agreement on the groups and main features recovered during cases \(A/B1/B2\), and we can thus rely on our species of LTs found for the longest duration case \(A\) as descriptors of the ecosystem.
The above stability-related findings are of interest in themselves, first of all, since central banks started hiking interest rates in March 2022. This consequently stopped a strong influx of liquidity into the crypto ecosystem and accentuated a period of significant underperformance, which could have indeed weakened the stability of results. On top of that, the Terra-Luna crash happened in May 2022 and it could have in theory enhanced noise and instabilities especially in the structural clustering on case \(B2\). As a very last remark, we notice that only \(\sim 20\%\) of addresses are present in all cases \(A/B1/B2\). Therefore, we are either recovering similar behaviour but for different people, or in some cases it could be the same person simply employing a new wallet to better hide their trading behaviour.
We have mainly focused on the liquidity consumption component of the crypto ecosystem thus far. In the next step of our investigation, we shift the focus from LTs to pools. We first aim to perform a clustering of pools based on features built from simple statistics that consider both liquidity consumption and liquidity provision. This will allow us to assess whether the SS, ECOSYS and EXOTIC classification is beneficial for describing the crypto ecosystem or is only useful for LTs characterisation.
### Clustering of pools
#### 3.2.1 Motivation and Methodology
The above analyses revealed a characterisation of the main types of LTs structural trading behaviour. While the importance of different types of pools in the ecosystem seems to be also clear, we stress that a full understanding of liquidity pools goes beyond the mere liquidity consumption mechanism (i.e. it needs to further account for both liquidity provision and price evolution). Thus, we now pursue an intuitive initial investigation of the similarity of pools themselves, in order to gain additional insights on the entire ecosystem.
We focus on case \(A\), as it covers the longest period in time. We consider the intersection of pools relevant for both LTs and LPs to properly account for both mechanisms, and find a resulting set of \(16\) pools. For each pool, we compute the following \(13\) features:
* "SdailyLT" and "LdailyLP" respectively,
* "SavgUSD", "LavgUSDmint" and "LavgUSDburn",
* "SdailyVol", "LdailyVolMint" and "LdailyVolBurn",
* "SdailyTxn" and "LdailyTxn",
* "Sdaily1txn". This measure is computed to gauge the tendency of external smart investors to hide their behavior by creating several different wallets on the pool.
For the above features, we create related labels for ease of reference, which start with letter "S" if the quantity is computed from swap operations, or letter "L" if the quantity is computed from liquidity provision operations. In Fig. 13, we show the heatmap of Spearman correlations between the above attributes plus feeTier ("SfeeTier") for our pools. There are significant positive correlations, especially among features developed from LT data and LP data, respectively. Thus, we standardise entries and employ linear PCA and kernel PCA (with both "rbf" and "cosine" kernel in the latter) to reduce the dimensionality of our data. The eigenvalue decay for all three mentioned cases is shown in Fig. 14, where only the first seven eigenvalues are depicted for clarity of visualisation. The cosine kernel PCA is seen to capture more variance in fewer dimensions, and thus we embed the data by projecting on its related first three components. The resulting 3D embedding is shown in Fig. 15 from three different angles, where we color-code pools according to their feeTier. In particular, green relates to feeTier \(100\), blue to \(500\), orange to \(3000\), and red to \(10000\).
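The dimensionality-reduction step can be reproduced along the following lines with scikit-learn; the feature matrix here is a random placeholder for the real \(16\times 13\) table of pool statistics.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 13))            # placeholder for the 16 pools x 13 features matrix

X_std = StandardScaler().fit_transform(X)

linear = PCA().fit(X_std)                # eigenvalue decay readable from explained_variance_
rbf    = KernelPCA(kernel="rbf").fit(X_std)
cosine = KernelPCA(kernel="cosine").fit(X_std)

# The cosine kernel captured the most variance in the fewest components on our data,
# so the 3D view of Fig. 15 is the projection onto its first three components.
embedding_3d = KernelPCA(n_components=3, kernel="cosine").fit_transform(X_std)
```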
Figure 13: Spearman correlation between the computed features for pools, with the addition of feeTier, for our case \(A\).
#### 3.2.2 Discussion
From the projections shown in Fig. 15 and initial trials of clustering, it is clear that the division between SS, ECOSYS and EXOTIC pools does not hold when considering the full set of dynamics on pools (while it is indeed suitable in connection to LTs' behaviour specifically). Similarly, we do not witness strong proximity of pools with same feeTier. Liquidity consumption, provision and price evolution are all essential mechanisms to consider for a full description of the Uniswap ecosystem, and our intuition is that certain combinations of tokens and feeTiers are more similar and suitable for trading at different moments in time. LPs are more incentivised to enhance liquidity on pools with strong LTs activity, low volatility of the exchange rate to avoid predictable loss, and possibly high feeTier from which they indeed mainly profit. In parallel, LTs are more interested in pools with low fees but high volatility of the price of tokens in order to extract gains from trading opportunities, and high liquidity to diminish the market impact of their trades. Thus, different adjustments of these mechanisms can result in the proximity or not of our projections of pools. In the next Section of our work, we propose a model to judge the health of each pool's combination of mechanisms and characterise the best venues for market participants (i.e. both LTs and LPs) to be active on.
## 4 The _ideal crypto law_ and a _cryptoness_ measure of a liquidity pool
### Model: from the ideal gas law of thermodynamics to the ideal crypto law for pools
In Physics, an ideal gas is a theoretical gas composed of many randomly moving particles with negligible volume that are not subject to interparticle interactions. On the other hand, real gases occupy space and molecules interact between themselves. The field of thermodynamics consists of laws that govern the behaviour of macroscopic properties of matter (e.g. of the ideal gas just mentioned above), while the aim of statistical mechanics is to derive such macroscopic properties from the laws characterising the microscopic variables and interactions of individual particles.
Figure 14: Decay of eigenvalues for the first seven out of \(13\) eigenvalues for different PCA kernels.

Figure 15: Projection on a 3D space of the vectors encoding different features of pools, from the application of PCA with cosine kernel. Views from different angles are reported for a better judgment of the results, and pools are color-coded according to their feeTier (green relates to feeTier \(100\), blue to \(500\), orange to \(3000\) and red to \(10000\)).

In thermodynamics, the system of interest is often an ideal gas. The state of this system can be specified by the pressure \(P\) and volume \(V\) that its particles respectively exert and fill, while its empirical temperature \(T\) is measured via a test system kept in thermal equilibrium. Experimentally, a number of different systems are found to have an equation of state of the form
\[PV=\phi(T), \tag{6}\]
where \(\phi\) is a general function with temperature \(T\) as input variable [26]. If the system is an ideal gas, then Eq. (6) unfolds into
\[PV=nRT, \tag{7}\]
which constrains the mutual evolution of variables \(P,V\) and \(T\), and is denoted as the _ideal gas law_. The constant \(n\) specifies the number of moles of particles in the considered closed system and \(R\approx 8.31\frac{J}{K\cdot mol}\) is the gas constant. For each fixed value of empirical temperature \(T\), there is a well-defined set of possible states \((P,V)\) for the system in analysis. These states form a family of curves in the \((P,V)\) plane called _isotherms_, which are rectangular hyperbolae as shown in Fig. 16a. Recalling the AMM Eq. (1), we notice how the trading function of a Constant Product Market Maker (CPMM) such as Uniswap v2 also defines a family of rectangular hyperbolae. These are proposed in Fig. 16b, where we sample curves by fixing different values of \(k\) (i.e. liquidity) instead of temperature \(T\). We have referred to Uniswap v2 for this comparison, since the concentrated liquidity mechanism of Uniswap v3 would require the introduction and discussion of a series of further details.
The similarity between fundamental relationships of the two systems mentioned, which is shown in Fig. 16, is the starting consideration that inspires our model. Indeed, we propose an analogy in which each liquidity pool is a gas. Our intuition is that the amounts of reserves \((x,y)\), which define the state of a pool given some level of liquidity, can be enclosed into broader variables that govern all the dynamics relevant to a CPMM. Then, liquidity is approximated by computing the full TVL in each pool (i.e. our proxyTVL), since our data relate to Uniswap v3. The view that we adopt is mainly macroscopic, and especially based on resemblances between the thermodynamic explanation of the behaviour of gases and features of our crypto ecosystem. We have not developed a concrete full physical formulation of liquidity pools yet, but this is an in-progress extension of the current work.
Despite being very simple and elegant, the ideal gas law of Eq. (7) properly captures rich and interesting dynamics. To define our analogy and the similitude between variables (summarised in Table 1), we follow how the ideal gas law was discovered and reason about both the possible meaning of variables and expected relationships. First of all, \(P\) can be intuitively compared to the USD volume traded by active LTs over e.g. a day. If everything else but \(T\) is kept constant, then we expect \(T\) to increase with higher \(P\). Therefore, we can indeed interpret \(T\) as the liquidity of the pool, i.e. the value of our proxyTVL in USD for the pool at that date. Indeed, the evolution of liquidity of a pool accounts for the behaviour of LPs, and more LPs should execute mint operations when there are more LTs active, in order to collect higher profit from the fees that the latter pay for each swap transaction. Clearly, we also expect a stronger overall volume traded by LTs with more liquidity, due to more convenient, smaller price impact. Variable \(V\) is then the volume of the gas in the thermodynamics interpretation, i.e. in the ideal gas law. Despite being a more subtle relationship, we find that it is reasonable to consider \(V\) as the stability of the exchange rate between the two tokens in the pool, i.e. \(\texttt{std}(\text{Z})^{-1}\), where \(Z\) is the exchange rate. Keeping everything else constant, one would expect that, with higher liquidity, the price is more stable. Similarly, a compressed gas with small volume will be less stable than an expanded gas. In addition to that, the _concentrated liquidity_ mechanism of Uniswap v3 implies that LPs encounter the risk of no gains from LTs' fees if the exchange rate moves outside the range of prices over which they are actively providing liquidity. Thus, a more stable \(Z\) for the same USD volume traded by LTs (i.e. \(P\)) is likely to attract more minting operations, especially close to the current rate. Finally, a higher \(P\) with the same level of liquidity is indeed likely to cause a less stable relative price of the tokens, due to the impact of surplus swap operations.

Figure 16: Samples of rectangular hyperbolae that define the set of possible states for different constant values of (a) temperature \(T\) if the system is an ideal gas, (b) liquidity \(k\) if we are considering a liquidity pool in Uniswap v2.
Keeping the above in mind, Eq. (7) thus becomes
\[\begin{split} P_{vol}\cdot V_{stab}&=n_{fee}\cdot R _{pool}\cdot T_{liq}\\ \Rightarrow P_{vol}\cdot\textsc{std(Z)}^{-1}&=feeTier^ {-1}\cdot R_{pool}\cdot T_{liq},\end{split} \tag{8}\]
allowing us to bring together, into a single formula, all the variables that govern the three mechanisms of liquidity consumption, provision, and exchange rate evolution. Constant \(R_{pool}\) is instead an invariant characteristic of each pool, inferred from data. Here, \(n\) is the fixed number of moles (molecules), thus a constant. A higher \(n\) means more interactions per unit of time in the physical description, which we relate to lower feeTier and stronger activity of the LTs, who we know dominate LPs. Thus, we express \(n=feeTier^{-1}\), which is indeed constant too.
Real gases and van der Waals forces.Our above model is based on the thermodynamics law for ideal gases. However, it is worth mentioning that there also exists a law for real gases that interact via van der Waals forces. This is governed by the Van der Waals equation
\[\left(P+a\frac{n^{2}}{V^{2}}\right)\cdot(V-nb)=nRT, \tag{9}\]
where the variables have same meaning as before. In addition, \(a\) is a constant whose value depends on the gas and represents intermolecular forces, while \(b\) is the volume occupied by the molecules of one mole of the gas. Based on a preliminary analysis, we believe it could be of interest to expand the ideal crypto law in this direction. However, this is beyond the scope of our current work, and we leave this for future investigation.
### Regression analysis and interpretation of results
As a first step, we test for empirical instances that support the validity of our ideal crypto law given by Eq. (8). We focus on case \(A\) and consider the intersection of pools that are relevant for both liquidity provision and liquidity consumption. This results in a set of \(16\) pools, namely USDC-USDT/100, WBTC-WETH/500, WETH-USDT/3000, WETH-ENS/3000, MATIC-WETH/3000, WBTC-USDC/3000, WBTC-WETH/3000, DAI-WETH/500, UNI-WETH/3000, SHIB-WETH/3000, USDC-USDT/500, USDC-WETH/500, WETH-CRV/10000, LINK-WETH/3000, USDC-WETH/3000, WETH-USDT/500.
For each individual pool, we compute the quantities \(P_{vol},V_{stab},T_{liq}\) reported in Table 1 at daily scale for the full six-months time window between January and June 2022. We suggest not using higher frequencies, due to the generally low rate of activity of LPs. Observations with z-score \(>3\) in absolute value are considered outliers and discarded. Then, the remaining daily realisations for each combination of pairs of variables are scatter-plotted to allow a critical analysis of the results. Figure 17 shows a few representative examples, where we present data points related to isotherms on the \((P_{vol},V_{stab})\)-plane for four pools. We approximate data points at constant liquidity by measurements lying within one same small sub-range of proxyTVL, where these bins of proxyTVL are built to have equal width. Two isotherms per pool are plotted to check for an upper shift in the curve related to a higher value of \(T_{liq}\). We observe hints that the ansatz relationships are indeed satisfied, apart from the case of pools whose tokens are both stablecoins. The latter dissimilarity is perhaps not surprising, since our ideal crypto law aims at encompassing the full ensemble of crypto mechanisms, while the behaviour of LTs and price variation on pools of stablecoins have lower relevance.
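A sketch of this daily aggregation, outlier removal and proxyTVL binning is given below; the column names are assumptions made for illustration, and the choice of ten equal-width TVL bins is likewise indicative rather than the exact setting used.

```python
import pandas as pd

def daily_law_variables(pool_df: pd.DataFrame, n_bins: int = 10) -> pd.DataFrame:
    """Daily P_vol, V_stab and T_liq for one pool; column names are illustrative."""
    daily = pool_df.groupby(pool_df["timestamp"].dt.date).agg(
        P_vol=("usd_amount", "sum"),                        # daily USD volume swapped by LTs
        V_stab=("exchange_rate", lambda z: 1.0 / z.std()),  # stability of the exchange rate
        T_liq=("proxy_tvl", "last"),                        # end-of-day proxyTVL
    )
    # Discard outliers (|z-score| > 3 in any of the three variables).
    z = (daily - daily.mean()) / daily.std()
    daily = daily[(z.abs() <= 3).all(axis=1)]
    # Equal-width bins of proxyTVL approximate sets of points at constant liquidity (isotherms).
    daily["tvl_bin"] = pd.cut(daily["T_liq"], bins=n_bins)
    return daily
```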
We broaden our sub-universe of venues of interest by taking the union of pools significant to LTs' and LPs' behaviour in case \(A\). After filtering for relevance of the samples in the daily frequency, we are left with a set of \(32\) pools.

\begin{table}
\begin{tabular}{|l||l||l|l|} \hline \multicolumn{2}{|c||}{Ideal gas law} & \multicolumn{2}{c|}{Ideal crypto law} \\ \hline Symbol & Meaning & Symbol & Meaning \\ \hline \hline \(P\) & pressure & \(P_{vol}\) & daily USD volume traded by LTs \\ \hline \(V\) & volume & \(V_{stab}=\textsc{std(Z)}^{-1}\) & daily stability of the exchange rate \(Z\) \\ \hline \(n\) & moles of particles & \(n_{fee}=feeTier^{-1}\) & stimulus to LTs’ activation \\ \hline \(R\) & gas constant & \(R_{pool}\) & pool crypto constant \\ \hline \(T\) & temperature & \(T_{liq}\) & daily liquidity, i.e. proxyTVL value \\ \hline \end{tabular}
\end{table}
Table 1: Parallelism drawn between the ideal gas law and our ideal crypto law.

For each pool, we perform a linear regression over the available six months of daily values, and thus estimate \(R_{pool}\) by rearranging Eq. (8) to
\[\begin{split} P_{vol}&=R_{pool}\cdot\Big{(}\frac{n_{fee}\cdot T_{liq}}{V_{stab}}\Big{)}\\ \Rightarrow y_{pool}&=R_{pool}\cdot x_{pool},\end{split} \tag{10}\]
where the intercept is zero. We compute the coefficient of determination \(R^{2}\) of the regression, which we refer to as the _cryptoness_ \(\xi\). Thus, pools with high cryptoness are meant to adhere well to our proposed ideal crypto law model. We compute the average \(\xi_{SS}\) over the seven pools found to exchange only stablecoins in our sub-universe, and compare it to the average \(\xi_{notSS}\) of all other pools. We find values of \(\xi_{SS}=-0.44\) and \(\xi_{notSS}=-0.21\), meaning that SS pools should not be considered in our model, as expected and already motivated. For the remaining pools, we show in Fig. 18a the ones that have \(\xi>0\). We notice that different feeTiers appear to be relevant and that there is one interesting occurrence of two pools, both with high \(\xi\), that exchange the same tokens, i.e. WBTC-WETH/3000 and WBTC-WETH/500. This latter result raises the question of whether these two pools are generally adhering to our ideal crypto law, or if they might follow it at disjoint periods of time that however influence the overall cryptoness values and render them both significant. Before investigating this idea further, it is worth highlighting the fact that several pools with high \(\xi\) are seen to exchange exotic tokens. Examples are pools trading CRV and UOS, which are respectively the tokens of the well-known DEX Curve.fi, and of the blockchain-based gaming company Ultra. The general pattern that we read is that pools exchanging digital assets linked to a tangible and established business idea might adhere better to our ideal crypto law. We think of such pools, and companies, as being strongly embedded in the crypto ecosystem, at least over the time window of reference in our analyses.
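For reference, the zero-intercept regression of Eq. (10) and the resulting cryptoness can be computed as sketched below; note that, because the fit is forced through the origin while \(\xi\) is measured against the mean of \(y\), negative values such as the \(\xi_{SS}\) reported above are possible.

```python
import numpy as np

def cryptoness(P_vol, V_stab, T_liq, fee_tier):
    """Zero-intercept fit of Eq. (10), returning (R_pool, xi) for one pool.

    Inputs are aligned 1-d arrays of daily values; x = n_fee * T_liq / V_stab with
    n_fee = 1 / feeTier, and xi is the coefficient of determination of the fit.
    """
    x = np.asarray(T_liq) / (fee_tier * np.asarray(V_stab))
    y = np.asarray(P_vol)
    r_pool = float(np.sum(x * y) / np.sum(x ** 2))   # least squares through the origin
    residuals = y - r_pool * x
    xi = 1.0 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)
    return r_pool, xi
```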
Figure 17: From the proposed ideal crypto law, we plot related empirical isotherms on the \((P_{vol},V_{stab})\)-plane for a sample of pools. We approximate data points at constant liquidity by measurements lying within one same small sub-range of proxyTVL, where these bins of proxyTVL are built to have equal width. We plot two isotherms per pool, in order to check for an upper shift in the curve related to a higher value of \(T_{liq}\). Clearly, the scatter plots in (a), (b) and (c) do resemble the desired rectangular hyperbolae depicted in Fig. 16a, while (d) has no trace of the tested relationship. The latter result agrees with our intuition, since (d) concerns a pool of only stablecoins.
For the sake of clarity, we explicitly depict in Fig. 18 the linear regression pursued for the pool with highest cryptoness, i.e. WETH-CRV/10000, where occurrences that happened more recently in time are color-coded darker. We also mention that pools might exhibit a poorer fit to our ideal crypto law due to noise introduced by the characteristic frequency of executed swaps, mint and burn operations on each pool. Liquidity provision events are generally rare, as reported in Table 2. Thus, it might become necessary to define ad-hoc frequencies, possibly dynamic, to be used in the regression for each pool, in order to better investigate the related evolution of behaviours.
Dynamic analyses.We now analyse the evolution of our proposed cryptoness metric for the set of pools, over the six-months time window of case \(A\), i.e. Jan-June 2022. We again compute the regression in Eq. (10) and record the related coefficient of determination as the cryptoness value \(\xi\), but we now use the observations contained in a \(30\)-day window in time, sliding every day. This allows us to recognise pools that adhere more or less to our ideal crypto law at different points in time, and investigate the patterns generated. Interesting results for specific subsets of pools are shown in Fig. 19, where we threshold all the irrelevant \(\xi<0\) to \(\xi=0\) for ease of visualisation. In reference to our ranking of Fig. 18a and related discussion, we consider in Fig. 19a the evolution of \(\xi\) for a subset of pools exchanging exotic tokens, which relate to well-established blockchain-based companies. As expected, we observe \(\xi\)s that are strongly significant for almost the entire six-months time range under consideration. Thus, we confirm that the exotic tokens depicted are deemed by our model to have solid associated companies by the dynamics of market participants and price, and we construe the related pools as _healthy_ venues for trading.
Our intuition then suggests that there should exist _only one ideal feeTier per point in time, for the exchange of the same two tokens_. This feeTier would be the key that balances the set of mechanisms we are modeling and the beliefs of the market participants. As a first related example, Fig. 19b shows some association between the drops in cryptoness of the liquidity pool SHIB-WETH/10000, and the peaks in cryptoness of SHIB-WETH/3000, allowing us to consider our cryptoness measure as an indicator of the described tendency. With a similar view, we also expect the redundancy of pools exchanging the same token against different stablecoins to reduce the overall health of single pools, due to the related fragmentation of market participants' dynamics. Thus, we believe that a natural stabilisation of each token to one specific stablecoin per period in time should also be reached with the stronger establishment of crypto ecosystems.

\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline \multicolumn{2}{|c|}{Liquidity consumption} & \multicolumn{4}{c|}{Liquidity provision} \\ \hline Pool & Daily & Pool & Daily & Pool & Daily \\ & Swap & & Mint & & Burn \\ \hline \hline USDC-WETH/500 & 6181 & USDC-WETH/3000 & 95 & USDC-WETH/3000 & 70 \\ \hline WETH/500 & 2395 & USDC-WETH/500 & 65 & USDC-WETH/500 & 70 \\ \hline PAL-WETH/500 & 817 & WBTC-WETH/500 & 19 & WBTC-WETH/3000 & 26 \\ \hline WETH-CRV/10000 & 55 & DYDX-WETH/3000 & 1.0 & FXS-WETH/10000 & 1.4 \\ \hline USDC-UOS/10000 & 39 & USDC-UOS/10000 & 0.6 & USDC-UOS/10000 & 1.0 \\ \hline SHIB-WETH/10000 & 32 & CEL-WETH/3000 & 0.4 & CEL-WETH/3000 & 0.8 \\ \hline \end{tabular}
\end{table}
Table 2: Average daily frequency of operations on different pools for the time window of case \(A\). We report the first and last three values when sorting by magnitude.

Figure 18: For each pool under analysis, the linear regression between \(x_{pool}\), \(y_{pool}\) is computed for daily values over the six months of case \(A\). Each related coefficient of determination provides the cryptoness \(\xi\) of the pool.
The results of this section provide supporting evidence towards the above concepts. In particular, we show that our cryptoness measure signals the healthiest venues (i.e. liquidity pools) on which agents should be active, by comparing simultaneous changes in \(\xi\), versus variations in agents' activity and TVL. This is performed for the set of relevant pools that exchange WETH against stablecoins, i.e. DAI-WETH, WETH-USDT and USDC-WETH, also with different feeTiers, i.e. 500 and 3000. Figure 19c shows the related evolutions of the cryptoness measure, where we can clearly see that feeTier 3000 is the relevant one for each pair DAI-WETH, WETH-USDT and USDC-WETH for the entire time window, except during April 2022. Interestingly, over this month, no liquidity pools exhibit strong cryptoness, except for DAI-WETH/500, which we thus claim to be the only healthy venue for trading at that point in time. To support our claim, we compute the average daily number of swap actions, mint operations and burn operations, for each one of our six pools under analysis, as related measures of activity and good usage. In particular, we compute average quantities over the month of April, and then over the time window Jan-June 2022 but excluding April, and then calculate the percentage change. This results in the set of values
\[opChange=\frac{opApril-opNotApril}{opNotApril}, \tag{11}\]
where "op" indicates an operation between swap, mint and burn. The actual computations are reported in Fig. 19(a), where we further include _avgChange_ as the average of _swapChange_, _mintChange_ and _burnChange_. In addition, we also plot in Fig. 19b the evolution of TVL in USD over the pools of interest, in order to be able to compare occurrences of drops in liquidity over time. Figure 19a reveals that DAI-WETH/500 is indeed the liquidity pool with the least damage of activity during April 2022. While there exists another pool which performs better than others, namely USDC-WETH/500, it suffered from a significant drop in liquidity in April 2022. On the other hand, DAI-WETH/500 had constant TVL during this month, as clearly shown in Fig. 19b. We conclude that the only pool with a significant cryptotness \(\xi\) score during April 2022 is indeed the healthiest and preferred trading venue during the month of relevance. This provides further empirical motivation and utility to our proposed ideal crypto law.
Our measure of cryptoness of liquidity pools aims at becoming a useful tool for market participants, allowing them to assess the health of trading venues and decide the best environments on which to be active. Ideally, we would like to witness smooth dynamics of the evolution of cryptoness in the future, with new pools adhering better to the ideal crypto law once they become well-rooted components of the crypto ecosystem. From a regulatory point of view, dynamic requirements could then be imposed, such as less over-collateralisation required for trades on pools with cryptoness above a certain threshold. This metric could thus be employed both by regulators and practitioners for developing pool health monitoring tools, and establishing minimum levels of requirement. Finally, we believe that it would also be interesting to investigate and interpret variations in the characteristic constant \(R_{pool}\) for specific pools, especially after drops in cryptoness. As a basic motivating example, Figs. 21a and 21b show two regressions over the first four and final two months of case \(A\), for pools SHIB-WETH/10000 and SHIB-WETH/3000. From the plots, it is indeed clear that the slope of regression lines varies significantly. The related distributions of recovered \(R_{pool}\) values for a \(30\)-day sliding window are then shown in Fig. 21c, where we are leveraging the results from the dynamic analyses on SHIB-WETH pools that generated Fig. 19b too. We keep only \(R_{pool}\) values related to regressions with \(\xi>0.3\), in order to lower noise. Interestingly, the two pools have \(R_{pool}\) values of the same order of magnitude, which is a general feature recovered from pools exchanging the same tokens.

Figure 19: Evolution of cryptoness \(\xi\) for different subsets of pools, where \(\xi\) is now repeatedly computed from the regression over a 30-day window of instances, sliding one day each time.
## 5 Conclusions
Blockchain, DeFi and DEXs are recent concepts that have just started taking root in the common language and knowledge of both practitioners and academics. However, a real comprehension of the characteristic dynamics of these protocols is still far away. Similarly, academic research is at its dawn on the topic, despite having strongly accelerated in the past year. To this end, our investigations aim at being a stepping stone towards a deeper understanding of the crypto ecosystem, and we achieve this task by empirically studying and characterising the Uniswap v3 DEX. We build a workflow to define the most relevant liquidity pools over time by assessing the inner features of pools along with their interconnectedness, and provide related lists of liquidity pools significant for six different windows in time, i.e. cases \(A/B1/B2/C1/C2/C3\), that can be directly used for future research studies. We then focus on LTs and show the existence of seven "species" of traders with interpretable features. These clusters are recovered by assessing the equivalence of LTs' structural trading behaviour, and suggest a connection between patterns in swap transactions and specific types of pools on which these operations are indeed executed. Finally, we also propose a novel metric that could aid practitioners and regulators in the challenge of assessing the "health" of different trading venues, i.e. liquidity pools, by proposing an ideal crypto law and proving the efficacy of the related cryptoness measure.

Figure 21: Evolution in time of the characteristic coefficient \(R_{pool}\) for two pools exchanging the same tokens SHIB and WETH, but with feeTier \(10000\) and \(3000\). In (a) and (b), the yellow (green) line is generated from the regression over data from the first four months (last two months) of case study \(A\). In (c), we propose the distribution of \(R_{pool}\) values recovered from the dynamic analyses with reference to Fig. 19b. To lower the noise level, we keep only values of \(R_{pool}\) when the related cryptoness is \(\xi>0.3\).

Figure 20: Comparison of activity variation and liquidity evolution between our six pools exchanging WETH against a stablecoin, over the six months of case \(A\). We discuss changes in average activity over April 2022 versus on the remaining five months of case \(A\), and similarly compare disequilibria of minting versus burning operations in USD traded. Pool DAI-WETH/500 is the only venue of the subset with significant cryptoness in April 2022 and indeed, it reports both the least damage in activity and no drop in proxyTVL over that month.
Future work.Regarding future directions of research, there are three main threads we aim to pursue. The first one is a detailed investigation into the behaviour of LPs, along with the corresponding clustering of species, also leveraging a strongly data-driven approach. Secondly, we are considering the development of a broader _integrated crypto market indicator_. This would be based on an aggregated cryptoness measure over a set of pools, with different associated weights quantifying the contribution to the index, proportional to the evolving total value locked in each liquidity pool. And as a last point, we plan to leverage the entire data on the Ethereum blockchain to track the flow of funds over multiple DEXs and active protocols. This will enable us to gain a better understanding of LTs, as we will then be able to approximate their profit and loss (PnL). Indeed, the crypto ecosystem is highly interconnected, and agents easily trade between different exchanges on the same blockchain, also with the possibility to enhance their positions via borrowing. In parallel, one could also investigate the "optimal routing problem" [27] on the Ethereum blockchain, which is formulated as the problem of optimally executing an order involving multiple crypto assets on a network of multiple constant function market makers.
## Acknowledgements
We are grateful to Alvaro Cartea, Faycal Drissi, Marcello Monga and Andrea Pizzoferrato for insightful discussions. Deborah Miori acknowledges financial support from the _EPSRC CDT in Mathematics of Random Systems_ (EPSRC Grant EP/S023925/1).
## Statements and Declarations
The authors have no competing interests to declare that are relevant to the content of this article.
|
2309.00131 | Effects of data distribution and granularity on color semantics for
colormap data visualizations | To create effective data visualizations, it helps to represent data using
visual features in intuitive ways. When visualization designs match observer
expectations, visualizations are easier to interpret. Prior work suggests that
several factors influence such expectations. For example, the dark-is-more bias
leads observers to infer that darker colors map to larger quantities, and the
opaque-is-more bias leads them to infer that regions appearing more opaque
(given the background color) map to larger quantities. Previous work suggested
that the background color only plays a role if visualizations appear to vary in
opacity. The present study challenges this claim. We hypothesized that the
background color would modulate inferred mappings for colormaps that should not
appear to vary in opacity (by previous measures) if the visualization appeared
to have a "hole" that revealed the background behind the map (hole hypothesis).
We found that spatial aspects of the map contributed to inferred mappings,
though the effects were inconsistent with the hole hypothesis. Our work raises
new questions about how spatial distributions of data influence color semantics
in colormap data visualizations. | Clementine Zimnicki, Chin Tseng, Danielle Albers Szafir, Karen B. Schloss | 2023-08-31T20:48:38Z | http://arxiv.org/abs/2309.00131v1 | # Effects of data distribution and granularity on color semantics for colormap data visualizations
###### Abstract
To create effective data visualizations, it helps to represent data using visual features in intuitive ways. When visualization designs match observer expectations, visualizations are easier to interpret. Prior work suggests that several factors influence such expectations. For example, the dark-is-more bias leads observers to infer that darker colors map to larger quantities, and the opaque-is-more bias leads them to infer that regions appearing more opaque (given the background color) map to larger quantities. Previous work suggested that the background color only plays a role if visualizations appear to vary in opacity. The present study challenges this claim. We hypothesized that the background color _would_ modulate inferred mappings for colormaps that should not appear to vary in opacity (by previous measures) if the visualization appeared to have a "hole" that revealed the background behind the map (hole hypothesis). We found that spatial aspects of the map contributed to inferred mappings, though the effects were inconsistent with the hole hypothesis. Our work raises new questions about how spatial distributions of data influence color semantics in colormap data visualizations.
**Index Terms:** Visual reasoning--Information visualization--Colormap data visualizations--Color cognition
## 1 Introduction
When creating data visualizations, it is helpful to represent data in a way that is intuitive to observers. These intuitions stem from observer expectations about how visual features should map to concepts, called inferred mappings [10, 21, 22, 23, 5, 30, 9, 24]. Understanding inferred mappings is crucial for effective visual communication: when inferred mappings match the encoded mapping specified by the visualization designer, the visualization becomes easier to interpret [22, 24, 32, 33, 9, 12, 33].
In this paper, we focus on inferred mappings for colormap data visualizations ("colormaps" for short), which represent quantity using a gradation of color ("color scale").1 To design colormaps that match observers' inferred mappings, a key consideration is deciding which endpoint of the color scale should map to larger quantities in the data. We focus solely on color scales that increase monotonically in lightness. We do not consider divergent color scales (light and dark at both endpoints) or rainbow color scales [31, 35, 11, 17, 13]. For color scales that vary monotonically in lightness, previous work suggests that several factors combine to influence whether observers infer that darker vs. lighter colors map to larger quantities.
Footnote 1: Various terms are used for visualizations that represent continuous data using gradations of color. Here, “color scale” refers to gradations of color used to construct a colormap (also known as “ramps” [29]), and “colormap” refers to a data visualization that represents quantities using gradations of color, such as in maps of weather patterns or visualizations of neural activity.
**Dark-is-more bias.** The dark-is-more bias leads to inferences that darker colors map to larger quantities [10, 21, 5, 28]. CWC [5] provided early evidence for this: when asked to interpret colormaps with no legends, participants inferred that darker colors represented "more." McGranaghan [10] investigated whether the dark-is-more bias was actually a special case of a contrast-is-more bias. If so, the dark-is-more effect observed when the background is light should reverse when the background is dark. McGranaghan [10] found that participants consistently inferred darker colors represented more on both light and dark backgrounds, though less so on the dark backgrounds. This challenged the notion that the dark-is-more bias was a special case of a contrast-is-more bias. But, contrast effects have been observed in other domains, such as in visual search [18].
**Opaque-is-more bias.** Schloss et al. [21] studied the effects of background for colormaps made using different color scales. They identified the opaque-is-more bias, which leads to the inference that regions appearing more opaque map to larger quantities (i.e., dark colors on light background and light colors on dark background). The strength of this bias (and effect of background lightness) depended on the degree to which colormaps appeared to vary in opacity (quantified using Opacity Variation Index). The strength of apparent opacity variation depends on how closely the color scale interpolates linearly with the background color ("value-by alpha" maps [19]).
When both dark-is-more and opaque-is-more biases are activated, they work together on light backgrounds (darker regions appear more opaque) and conflict on dark backgrounds (lighter regions appear more opaque). Under such conflicts, the opaque-is-more bias can reduce, or even override the dark-is-more bias, resulting in inferences that lighter colors map to more [1, 21]. Critically, when colormaps did not appear to vary in opacity, background lightness had no effect on inferred mappings (i.e., no contrast-is-more bias).
**Hotspot-is-more bias.** Schott [25] suggested that people expect regions in hotspots--concentric rings of data like those found in weather maps--to map to more. Sibrel et al. [28] tested whether a hotspot-is-more bias exists, and if it could override the dark-is-more bias. They found that hotspot-is-more and dark-is-more biases worked together when hotspots were dark but conflicted when hotspots were light; under such conflicts, the dark-is-more bias dominated over the hotspot-is-more bias (leading to inferences that darker colors mean more) [28]. However, when hotspots were highly salient, the hotspot-is-more bias dominated--observers inferred that lighter colors mapped to more. Thus, spatial structure can impact inferred mappings.
**Direct associations.** The previous biases focused on perceptual factors, but another factor concerns the direct color-concept associations with the data domain (e.g., foliage, wildfire, sunshine) [20, 24]. When lighter colors in the colormap are strongly associated with the data domain (e.g., light yellow is strongly associated with sunshine), those direct associations can override the dark-is-more bias [24].
## 2 Motivation
From previous work, it follows that people will infer darker colors map to larger quantities, regardless of background lightness, if the following conditions are met: (1) the color scale does not appear to vary in opacity, (2) the colormap does not have salient light-colored hotspots, and (3) the colormap does not represent data about something with strong direct associations with light colors within the colormap.
In this study we considered that cases might exist in which these conditions are met, yet observers will infer that lighter colors map to more. This idea stems from Bartel et al. [1], who investigated inferences about the meaning of colors in Venn diagrams, which are systems for visually representing logical propositions [34]. Traditionally, shaded regions in Venn diagrams indicate "non-existence" of an entity represented by that region [27, 34]. However, Bartel et al. [1] proposed the _hole hypothesis_, which predicts people infer the opposite: regions appearing as "holes" in Venn diagrams map to non-existence. Regions appear as holes when their surface properties (e.g., color, texture) match the surface properties of the background of the display, resulting in the appearance that these regions are "owned" by the background [2, 13, 15]. Supporting the hole hypothesis and contrary to Venn diagram conventions, Bartel et al. [1] found that participants interpreted Venn diagrams consistently with regions appearing as holes that represented non-existence.
We hypothesized that if a colormap has a large black region on a black background, the region will appear like a hole, and this could activate the opaque-is-more bias even if the color scale used to construct the colormap does not appear to vary in opacity (hole hypothesis). For example, Fig. 2 shows two colormaps; Fig. 2A does not appear to have a hole as none of the colors match the background, whereas Fig. 2B appears to have a hole because the large black region matches the background. We focus here on _perceptual holes_ as described by [13], and distinguish these from regions indicating a lack of data (e.g., [37]). If the hole hypothesis is supported, the probability of responding that darker colors represent larger values ("dark-more responses") would be reduced for maps presented on a black background compared to a white background. A sufficiently strong effect could override the dark-is-more bias, leading participants to make _light-more_ responses for colormaps on a black background. Thus, we would identify a condition in which people infer light is more, while the aforementioned conditions are met: (1) the color scale does not appear to vary in opacity, (2) the colormap does not have salient light-colored hotspots, and (3) the colormap does not represent data about something with strong direct associations with light colors within the colormap.
To test this hypothesis, we conducted an experiment assessing inferred mappings for colormaps that varied in _data distribution_, background, and color scale, to create colormaps varying in how much they appeared to have holes (Fig. 1). We also varied _granularity_ (grid or continuous) to test if effects might be stronger for colormaps with smooth gradations rather than sharp edges.
**Contributions.** Our results make the following contributions. First, we identified spatial factors, data distribution and granularity, contributing to inferences about the meaning of colors in colormap data visualizations. Second, we found that these factors were modulated by color scale and background, but not in ways we would expect based on prior work. Though our results raise unresolved questions concerning the role of spatial factors in inferred mappings, we have shown that it is crucial to account for the spatial structure of data when considering color semantics in data visualizations.
## 3 Methods
**Participants.** Our target sample size was \(n=1600\) (\(n=20\) per condition). This sample size was determined by simulating expected error rates and standard deviation based on results from a study with a comparable design. We randomly sampled responses with replacement for \(n=5-22\) in each condition and calculated the standard deviation at each \(n\) for 20 sets of hypothetical trials. We set \(n=20\) subjects per condition based on this sampling to keep the predicted standard deviation below 2% for all conditions.
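To make the resampling procedure concrete, the following is a minimal sketch of the kind of bootstrap simulation described above; the pilot response array, its response rate, and the random seed are placeholders for illustration, not the actual pilot data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pilot data: 1 = dark-more response, 0 = light-more response.
pilot_responses = rng.binomial(1, 0.85, size=200)  # placeholder, not real data

n_sets = 20                      # hypothetical trial sets per candidate sample size
for n in range(5, 23):           # candidate sample sizes n = 5-22
    # Resample n responses with replacement and record the condition mean,
    # repeated n_sets times; the spread of these means is the predicted SD.
    means = [rng.choice(pilot_responses, size=n, replace=True).mean()
             for _ in range(n_sets)]
    print(f"n = {n:2d}  predicted SD of condition means = {np.std(means):.3f}")
# A sample size is acceptable once the predicted SD stays below the 2% target.
```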
Participants were recruited through Amazon Mechanical Turk/Cloud Research, restricted to workers in the US with at least a 90% approval rating. The experiment took \(<5\) min. and participants were compensated with $0.60. We collected data in batches until there were at least 20 participants in each of 20 conditions who passed the exclusion criteria. Due to random assignment and participants sometimes dropping out, sample sizes ranged from 20-27 per condition. We analyzed data from 1723 participants (1,038 women, 665 men, 11 nonbinary, 1 agender, 1 demigirl, 1 queer, 1 genderqueer, 2 genderfluid, 1 transmasculine, 1 none, and 1 preferred not to say).
Participants were excluded if they did not complete the task and submit to mTurk, or if they did not pass the color vision test. The color vision test had two parts. Participants typed the number that they saw in each of 11 Ishihara plates (or typed "none" if they did not see a number). Then, participants were asked: (a) Do you have difficulty seeing colors or noticing differences between colors, compared to the average person? (b) Do you consider yourself to be color blind? Participants were excluded if they made errors on \(>2\) plates or answered yes to at least one color vision question.
All participants gave informed consent and the UW-Madison IRB approved the protocol. Data, code, and color coordinates can be found at [https://github.com/SchlossWRL/spatial_maps](https://github.com/SchlossWRL/spatial_maps).
Figure 2: Shifted colormaps created using Plasma/Magma+ [8] color scales. (A) should not elicit the percept of a hole because the large dark area does not match the background. (B) should elicit the percept of a hole, as the large dark area matches the background [13].
**Design, Displays, and Procedure.** Participants were told that they would see colormap data visualizations representing "measured data." As in [10], they were provided no details about the source of the data to avoid effects of direct associations [24]. The maps would be displayed one at a time, and their task was to indicate whether the measured numbers were larger on the left or right side of the colormap (Fig. 3A-B). For each colormap, one side was biased to be lighter and one was biased to be darker (left/right balanced over trials). Below each side was a horizontal line and label indicating whether the side was "left" or "right." Colormaps were presented without legends so we could probe inferred mappings directly, as in [10, 5, 24]. This method is ecologically valid, as colormaps are often presented without legends in the wild [4].
Participants were randomly assigned to one of 80 between-subject conditions, which included 2 granularity levels (coarse vs. fine) \(\times\) 2 data distributions (dark-shifted, unshifted) \(\times\) 2 background colors (white, black) \(\times\) 10 color scales (Autumn, ColorBrewer Blue, ColorBrewer Red, Gray, Hot, Magma+, Mako+, Plasma, Viridis, Winter) (see Fig. 1). In each condition, participants judged 10 colormap visualizations in random order. The 10 colormaps were constructed from five underlying datasets, which were left/right reflected, balancing which side was darker. We also collected data for light-shifted colormaps, but focus on the unshifted and dark-shifted conditions to test our key hypothesis. We will analyze the full dataset in a subsequent paper.
_Data distribution._ To generate the colormaps in the unshifted condition, we used a Sigmoid function \(S(x)=\frac{L}{1+e^{Kax}}+d\), where \(\lim_{x\to 0}S(x)=0.8\) and \(\lim_{x\to 1}S(x)=0.2\) (Fig. 3C). We shifted the original Sigmoid curve along the x-axis such that \(\lim_{x\to 0.7}S(x)=0\) and \(\lim_{x\to 0}S(x)\approx 0.8\) for the shifted conditions shown in Fig. 3D.
_Granularity._ The colormap visualizations with coarse granularity appeared as \(8\times 8\) grids (based on stimuli from [21, 24]). They were generated by sampling from a Sigmoid curve with normally-distributed noise applied. To generate the value for each cell in the grid, we first applied linear interpolation to sample 8 points \(x_{i}\) in \(x\colon[0,1]\), and calculated the corresponding \(y_{i}\) with a Sigmoid function. We then used the retrieved \(y_{i}\) as the means of a normal distribution with \(\sigma=0.1\) to randomly sample 8 points as the column values. We clipped values that fell out of the range of [0,1] so they fit within the range. The colormaps with fine granularity were constructed as \(200\times 200\) grids that appeared as continuous data. They were constructed by generating a \(50\times 50\) grid using the same strategy as the coarse granularity above, then interpolating it to the target size (\(200\times 200\)) by applying a random normal distribution with the mean of the 4 adjacent point values and \(\sigma=0.03\). Finally, we applied a median blur with a kernel (n=5) to smooth the colormaps. All colormaps were \(627\times 627\) px.
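As an illustration of this generation procedure, the sketch below reproduces the coarse-grid sampling step; the Sigmoid constants are placeholder values chosen only so that the curve runs from roughly 0.8 at \(x=0\) to 0.2 at \(x=1\), and the shift amount is likewise illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x, L=0.6, k=10.0, x0=0.5, d=0.2):
    # Decreasing Sigmoid: ~0.8 near x=0 and ~0.2 near x=1 (placeholder constants).
    return L / (1.0 + np.exp(k * (x - x0))) + d

def coarse_grid(n=8, sigma=0.1, shift=0.0):
    # Sample n column means from the (optionally shifted) Sigmoid curve, add
    # normally distributed noise to every cell, and clip the result to [0, 1].
    xs = np.linspace(0.0, 1.0, n)
    col_means = sigmoid(xs - shift)
    grid = rng.normal(loc=col_means, scale=sigma, size=(n, n))
    return np.clip(grid, 0.0, 1.0)

unshifted = coarse_grid()
dark_shifted = coarse_grid(shift=0.7)   # illustrative shift along the x-axis
print(unshifted.round(2))
```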
_Background._ The backgrounds of the colormaps were black ([R = 0, G = 0, B = 0]) or white ([R = 255, G = 255, B = 255]), and filled the monitor display. When the background was black, the text, map border, and lines below the visualization were white. When the background was white, these components were black.
_Color scales_. We applied 10 different color scales to each underlying dataset. We chose color scales that fell into five different "families": monochromatic ("ColorBrewer Blue," "ColorBrewer Red"), achromatic ("Gray"), hue spiral with black and white endpoints ("Hot," "Magma+," "Mako+"), hue spiral with chromatic endpoints ("Plasma," "Viridis"), and hue segment ("Autumn," "Winter").
Plasma, Magma, and Viridis were created by van der Walt and Smith for the Matplotlib library in Python [8]. ColorBrewer Blue and Red were created by Harrower and Brewer (2003) [6]. Gray, Hot, Autumn, and Winter are native to MATLAB. "Magma+" and "Mako+" are adapted from Magma [8] and Mako (from the Seaborn library for Python [36]) respectively, but we extended the light endpoints so that the lightest values were white. To make Magma+, we (1) removed the 30 darkest colors, (2) appended 20 steps to the lightest side of the scale by interpolating the lightest color (L*=97.850; a*=9.918; b*=29.506) with white (L*=100; a*=0, b*=0), and (3) appended 10 steps to the darkest side by interpolating the darkest color (L*=8.397, a*=19.905, b*=-28.862) with black (L*=0; a*=0; b*=0). To make Mako+, we (1) appended 10 steps to the lightest side by interpolating the lightest color (L*=93.383, a*=-10.979, b*=5.533) with white, and (2) appended 5 steps to the darkest side by interpolating the darkest color (L*=10.359, a*=11.442, b*=-10.144) with black.
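The endpoint extensions described above are linear interpolations in CIELAB; a minimal sketch using only the coordinates quoted in the text is given below (the helper function is ours, and conversion of the resulting L*a*b* steps back to RGB is omitted).

```python
import numpy as np

def extend_endpoint(color_lab, target_lab, n_steps):
    """Interpolate from color_lab toward target_lab in n_steps equal L*a*b* steps,
    excluding the starting color itself (it is already part of the scale)."""
    t = np.linspace(0.0, 1.0, n_steps + 1)[1:, None]
    return (1.0 - t) * np.asarray(color_lab, float) + t * np.asarray(target_lab, float)

white = [100.0, 0.0, 0.0]
black = [0.0, 0.0, 0.0]

# Magma+: 20 steps appended on the light side, 10 on the dark side.
magma_light_ext = extend_endpoint([97.850, 9.918, 29.506], white, 20)
magma_dark_ext = extend_endpoint([8.397, 19.905, -28.862], black, 10)

# Mako+: 10 steps on the light side, 5 on the dark side.
mako_light_ext = extend_endpoint([93.383, -10.979, 5.533], white, 10)
mako_dark_ext = extend_endpoint([10.359, 11.442, -10.144], black, 5)

print(magma_light_ext[-1])   # -> [100.   0.   0.], i.e. white
```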
## 4 Results and Discussion
When designing this experiment, we chose color scales that fell into five different "families" (see Section 3). To test whether this structure was reflected in the data, we used hierarchical clustering to group the color scales according to patterns of responses [16]. The clustering was computed over the proportion of times participants indicated the darker side of the colormap represented larger quantities (averaged over repetitions and participants) for each of the eight conditions within each color scale: 2 shift conditions (shifted/unshifted) \(\times\) 2 granularities (grid/continuous) \(\times\) 2 backgrounds (white/black) (see Fig. 1). Results of the hierarchical clustering are shown in Fig. 4.
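A minimal sketch of this clustering step might look as follows; the 10 x 8 matrix of condition means is filled with random placeholder numbers here, and the linkage method and distance metric are assumptions, since they are not stated in the text.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram

rng = np.random.default_rng(2)
scales = ["Autumn", "CB Blue", "CB Red", "Gray", "Hot",
          "Magma+", "Mako+", "Plasma", "Viridis", "Winter"]

# Placeholder: proportion of dark-more responses for each of the 8 conditions
# (2 shifts x 2 granularities x 2 backgrounds) per color scale.
dark_more = rng.uniform(0.3, 1.0, size=(len(scales), 8))

Z = linkage(dark_more, method="average", metric="euclidean")  # assumed settings
labels = fcluster(Z, t=3, criterion="maxclust")

for name, lab in zip(scales, labels):
    print(f"{name:8s} -> cluster {lab}")
# dendrogram(Z, labels=scales) would produce a tree analogous to Fig. 4.
```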
The color scales fell into three main clusters, largely aligning with our initial family classification. One cluster included the monochromatic color scales (ColorBrewer Blue and ColorBrewer Red), plus Autumn. We call this cluster _Monochromatic_ (_Mono_) because 2/3 color scales are monochromatic, acknowledging Autumn does not fit the description. A second cluster included Gray, Mako+, Hot, Magma+, and Winter. We call this _Black and white endpoints (\(B\&W\))_ because 4/5 scales have black/white endpoints, acknowledging Winter does not fit this description. The third cluster included the hue spiral color scales with chromatic endpoints (Plasma, Viridis) so we call it _Spiral_. Work is needed to understand why Autumn and Winter joined other clusters rather than forming their own as expected.
Figure 3: Example experimental trials for the (A) unshifted and (B) shifted grid colormaps, and corresponding Sigmoid curves for the (C) unshifted and (D) shifted conditions used to sample the values at each point for generating the colormap images.
Figure 5 shows the mean proportion of times participants indicated the darker side of the colormap meant larger quantities, averaged over all color scales within each cluster: Mono, B&W, Spiral (see Supplementary Material Figure S.2 for results separated by color scale). The data are plotted as a function of data distribution (shifted vs. unshifted) for colormaps presented on a white background or black background, separated by granularity (grid vs. continuous). We analyzed the data using a mixed-effect logistic regression model predicting whether participants chose the lighter (0) or darker (+1) region on each trial. The fixed effects were data distribution, background, granularity, two orthogonal contrasts coding for color scale cluster, and all possible interactions. One contrast (Scale1) compared Spiral (+2) vs. Mono (-1) and B&W (-1), and the other (Scale2) compared B&W (+1) with Mono (-1), with Spiral coded as 0. The model also included random by-subject intercepts.
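For concreteness, the two orthogonal cluster contrasts described above can be written out explicitly; the sketch below only illustrates the coding scheme and is not the fitted mixed-effects model itself.

```python
import numpy as np

# Scale1: Spiral vs. (Mono, B&W); Scale2: B&W vs. Mono, with Spiral coded as 0.
contrasts = {
    "Mono":   {"Scale1": -1, "Scale2": -1},
    "B&W":    {"Scale1": -1, "Scale2": +1},
    "Spiral": {"Scale1": +2, "Scale2":  0},
}

c1 = np.array([v["Scale1"] for v in contrasts.values()])
c2 = np.array([v["Scale2"] for v in contrasts.values()])
print("orthogonal:", int(c1 @ c2) == 0)   # -> True
```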
Full model results are shown in Supplementary Material Table S.1, with main findings summarized here. Participants were overall more likely than chance to choose the darker side (positive intercept; \(\beta=7.415\), \(SE=0.38\), \(z=19.722\), \(p<.001\)). The probability of dark-more judgments decreased for shifted colormaps (\(\beta=0.818\), \(SE=0.38\), \(z=2.18\), \(p=.029\)). This effect depended on granularity, with a larger decrease in dark-more responses for shifted datasets when colormaps were continuous vs. grids (shift \(\times\) granularity interaction; \(\beta=1.00\), \(SE=0.38\), \(z=2.66\), \(p=.008\)). This two-way interaction was part of a larger interaction with background and Scale1 (Spiral vs. Mono and B&W scales; \(\beta=0.915\), \(SE=0.34\), \(z=2.72\), \(p=.007\)).
To understand this 4-way interaction, we conducted separate versions of the model within each color scale cluster (see Supplementary Table S.2 for the full output of each model). We also conducted intercept-only models for each condition to test whether responses for each condition in Fig. 5 differed from chance after applying the Holm-Bonferroni correction for multiple comparisons. Results are shown as asterisks in Fig. 5, and the model output is in Supplementary Material Table S.3.
**Monochromatic.** Only the intercept was significant in this model (\(\beta=8.867\), \(SE=0.47\), \(z=18.850\), \(p<.001\))--participants consistently made dark-more responses for all eight conditions, regardless of shift, background, or granularity (Figure 5).
**Black and white endpoint.** In this cluster, the hole hypothesis predicted an interaction between data distribution and background, with reduced dark-more responses for the shifted condition on the black background due to a large region of the map appearing as a hole. Overall, dark-more responses were reduced for the shifted compared with the unshifted condition (main effect of shift; \(\beta=0.654\), \(SE=0.266\), \(z=2.461\), \(p=.014\)), but this interaction with background was not significant (\(\beta=-0.015\), \(SE=0.254\), \(z=-0.062\), \(p=.951\); see Figure 5). Tests against chance indicated that participants consistently made dark-more responses for all conditions except the shifted continuous condition, where responses did not differ from chance. It is unclear why this effect of shift occurred on both white and black backgrounds, and future work will address this question.
**Spiral.** We expected responses for the Spiral color scales to be similar to the Monochromatic color scales, but they were drastically different. There was an unexpected 3-way interaction between data distribution, background, and granularity (\(\beta=-2.267\), \(SE=0.420\), \(z=-5.402\), \(p<.001\)). For grid colormaps, participants made dark-more responses more often than chance for all conditions except dark background--unshifted. For continuous maps, participants made dark-more responses more often than chance for the shifted condition on the light background, but unexpectedly made _light-more responses_ more often than chance on the dark background (Figure 5, Table S.3). Thus, we found a condition in which participants consistently made light-more responses, but it was not the condition we expected. Future work is needed to understand this effect.
## 5 Conclusion
We hypothesized that the percept of a hole in a colormap would activate the opaque-is-more bias even if the color scale did not appear to vary in opacity in isolation (Hot, Mako+, Magma+ color scales, see Black and White endpoint condition). Activation of the opaque-is-more bias would lead to reduced dark-more responses for shifted maps on a black background, relative to the same maps on a white background (hole hypothesis). In the extreme, this effect could have led to _light-more_ responses, inferring that lighter colors mapped to larger quantities.
Yet contrary to the hole hypothesis, background did not significantly modulate responses for the B&W scales that had strong perceptual evidence for a hole, but did modulate responses for the Spiral scales that had weak perceptual evidence for a hole. For Spiral scales on a black background, participants were more likely than chance to make _light-more_ responses. Based on the Opacity Variation Index [21], Spiral color scales should not appear to vary in opacity and should not appear to have a "hole." The reason for these results is unknown.
Our findings raise new questions about the contributions of spatial configuration and granularity to inferences about colormap data visualizations. These variables have an effect on inferred mappings and they interact with background color and color scales in unexpected ways. Although future work is needed to explain these effects, our study has shown that it is crucial to account for the spatial distribution of data when considering color semantics in data visualizations.
## Acknowledgments
We thank Kushin Mukherjee, Melissa Schoenlein, and Seth Gorelik for feedback. This project was supported by the UW-Madison Office of the Vice Chancellor for Research and Graduate Education, Wisconsin Alumni Research Foundation, McPherson Eye Research Institute, and NSF BCS-1945303 to KBS and IIS-2320920 to DAS.
Figure 4: Dendrogram showing hierarchical clustering of color scales according to responses for the 8 conditions within each color scale.
Figure 5: Mean proportion of dark-more responses, averaged over participants and color scales within each cluster (Mono, B&W, and Spiral). Shift is on the x-axis and background color is indicated by mark fill (black/white marks indicate black/white backgrounds). Error bars represent +/- standard errors of the means. Significant difference from chance (Bonferroni-Holm corrected) is indicated near each point (\(\ast\ast\ast\) \(<.001\), \(\ast\) \(<.05\), ns = not significant). Single indicators of significance placed at one x-axis location apply for both backgrounds.
2309.07886 | Vortex Lattices in Active Nematics with Periodic Obstacle Arrays | We numerically model a two-dimensional active nematic confined by a periodic
array of fixed obstacles. Even in the passive nematic, the appearance of
topological defects is unavoidable due to planar anchoring by the obstacle
surfaces. We show that a vortex lattice state emerges as activity is increased,
and that this lattice may be tuned from ``ferromagnetic'' to
``antiferromagnetic'' by varying the gap size between obstacles. We map the
rich variety of states exhibited by the system as a function of distance
between obstacles and activity, including a pinned defect state, motile
defects, the vortex lattice, and active turbulence. We demonstrate that the
flows in the active turbulent phase can be tuned by the presence of obstacles,
and explore the effects of a frustrated lattice geometry on the vortex lattice
phase. | Cody D. Schimming, C. J. O. Reichhardt, C. Reichhardt | 2023-09-14T17:33:29Z | http://arxiv.org/abs/2309.07886v1 | # Vortex Lattices in Active Nematics with Periodic Obstacle Arrays
###### Abstract
We numerically model a two-dimensional active nematic confined by a periodic array of fixed obstacles. Even in the passive nematic, the appearance of topological defects is unavoidable due to planar anchoring by the obstacle surfaces. We show that a vortex lattice state emerges as activity is increased, and that this lattice may be tuned from "ferromagnetic" to "antiferromagnetic" by varying the gap size between obstacles. We map the rich variety of states exhibited by the system as a function of distance between obstacles and activity, including a pinned defect state, motile defects, the vortex lattice, and active turbulence. We demonstrate that the flows in the active turbulent phase can be tuned by the presence of obstacles, and explore the effects of a frustrated lattice geometry on the vortex lattice phase.
Active nematics are a class of active fluids whose microscopic constituents are anisotropic and exert dipolar forces [1; 2]. Examples of such systems include cytoskeletal filaments with molecular motors [3], cellular tissues [4], suspensions of bacteria in nematic liquid crystals [5; 6], and soil bacteria [7]. One of the key features of active nematics is that in many cases the dynamics can be described by the motion of defects with different topological charges [3; 8; 9; 10]. There is growing interest in developing methods to control the defect dynamics and flows produced by active nematics, such as by applying external fields [11], introducing anisotropic substrates [12; 13] or imposing a geometric confinement of the material [14; 15].
In systems with well-defined length scales such as the average distance between topological defects, new ordered phases can arise upon coupling to a periodic substrate [16], as observed for vortices in superconductors [17] and Bose-Einstein condensates [18] coupled to periodic arrays, ordering of surfaces [19], cold atom systems on optical substrates [20] and colloids coupled to ordered substrates [21; 22]. Active nematics are another system in which commensuration effects can arise; however, due to their nonequilibrium nature, it should be possible for dynamical commensuration effects to appear as well. Here, we examine the effects of geometric confinement on active nematics induced by a periodic array of fixed obstacles for a varied range of activity levels and obstacle sizes. Previous theoretical and experimental studies of active nematic confinement have typically employed an external boundary as a confining structure, resulting in a channel, circular disk, or annular geometry [23; 24; 25; 26; 27; 28; 29; 30]. As the system size and active force magnitude is varied, such systems can exhibit anomalous flow states such as dancing topological defects or system-wide circulation [24; 25; 26; 14]. Relatively few studies have addressed the effects of fixed, embedded obstacles on the active nematic flow state; however, recent experiments have successfully produced active nematics in obstacle laden environments [31; 32].
Motivated by the recent experimental work, we use a minimal, active nematic continuum model to simulate an array of obstacles that separates the system into interacting circular domains where the obstacle shape makes the formation of topological defects unavoidable. We show that a variety of phases appear, including a low active force state where the defects remain pinned to the obstacles, a state where the defects are motile, and an intermediate activity state where the flow organizes into a lattice of vortices. By varying the size of the obstacles, we can tune the vortex lattice from "ferromagnetic," in which the vortices are all of the same chirality, to "antiferromagnetic," in which each nearest neighbor vortex pair is of opposite chirality. To our knowledge, this is the first report of a ferromagnetic vortex lattice state in active nematics. We compare the active turbulent phase in systems with and without obstacles and find that the fluid flow slows as the obstacles increase in size, while the directional distribution of the fluid velocity becomes peaked along diagonal lattice directions. Finally, we explore the effects of lattice frustration by simulating the active nematic on a triangular lattice. Our findings provide an experimentally viable method for controlling and tuning vortex lattices and flows in active nematics.
We model a two-dimensional active nematic using a well-documented continuum model that has been shown to capture the key features of experimental active nematics [33; 2]. In our dimensionless equations, presented in detail in the supplemental material [34], lengths are measured in units of the nematic correlation length \(\xi\) and times are measured in units of the nematic relaxation time \(\sigma\). The nematic state is captured by the tensor order parameter \(\mathbf{Q}=S\left[\mathbf{n}\otimes\mathbf{n}-(1/2)\,\mathbf{I}\right]\) where \(S\) is the scalar order parameter indicating the local degree of alignment and \(\mathbf{n}\) is the director, giving the local direction of orientation. The evolution of \(\mathbf{Q}\) is given by
\[\frac{\partial\mathbf{Q}}{\partial t}+\left(\mathbf{v}\cdot\nabla\right) \mathbf{Q}-\mathbf{S}=-\frac{\delta F}{\delta\mathbf{Q}} \tag{1}\]
where \(F\) is the Landau-de Gennes free energy with one elastic constant [35], \(\mathbf{S}\) is a generalized tensor advection [36], and \(\mathbf{v}\) is the fluid velocity. The free energy is such
that the passive liquid crystal is in the nematic phase. We assume low Reynolds number flow, so the fluid velocity is given by the Stokes equation:
\[\nabla^{2}\mathbf{v}=\nabla p+\alpha\nabla\cdot\mathbf{Q},\qquad\nabla\cdot \mathbf{v}=0 \tag{2}\]
where \(p\) is the fluid pressure, and the last term is an addition to the usual Stokes equation that models an active force of dimensionless strength \(\alpha\). The divergence free condition models incompressible flows.
Equations (1) and (2) are discretized in space and time and solved using the MATLAB/C++ package FELICITY [34; 37]. We consider domains with an array of astroid shaped obstacles, as shown in Fig. 1(a). The obstacles may be thought of as separating circular domains that overlap over a distance \(d\) between obstacles. In this study, we vary \(d\) from \(1\) to \(10\) while fixing the distance between centers of obstacles to \(a=14\) so that the overall system size remains constant. The limit \(d=0\) gives individual, non-interacting circular domains, while \(d=a=14\) indicates a system with no obstacles. We employ periodic boundary conditions on the side edges of the domain, while for the obstacles we impose strong planar anchoring for \(\mathbf{Q}\) and no-slip conditions for \(\mathbf{v}\). The shape of the inclusions and the strong planar anchoring necessitate the formation of topological defects, or points where the nematic director is singular. Specifically, there must be a total topological charge (winding number) of \(+N\) for \(N\) obstacles because each obstacle carries a charge of \(-1\) that must be accounted for in the nematic. Therefore, topological defects inherently exist in the system even in the passive state with \(\alpha=0\).
Upon increasing \(\alpha\) for various obstacle sizes, we typically find three qualitative transitions. At zero and small activity, the topological defects are pinned to the obstacles. As the activity is increased, the defects begin to unpin and move from obstacle to obstacle with little to no unbinding of new defects. When activity is further increased, a central vortex forms in each circular domain with two \(+1/2\) topological charge defects encircling one another, as has been shown previously for individual circular domains [30; 14; 26]. At higher activities the \(+1/2\) defects merge and a stable \(+1\) spiral defect forms in the domain center. Figure 1(b-e) shows examples of the nematic order parameter and velocity field in the vortex lattice state, which can have either ferromagnetic or antiferromagnetic order. At still higher activities the central vortex in each domain is no longer stable and an active turbulent phase persists in which defects are constantly unbinding and annihilating (see Supplemental Movie 1).
To better quantify the transitions discussed above, we map our system to a lattice of vortex "spins" by measuring the average vorticity in a circle of diameter \(a/2\) at the center of each circular domain. The spins are then indexed by their lattice position \(s_{i}\). Figure 2(a-c) shows plots of \(\langle|s_{i}|\rangle_{i,t}\), where \(\langle\cdot\rangle_{i,t}\) denotes an average over lattice sites and time, as a function of activity for obstacle gaps \(d=4\), \(d=6\), and \(d=8\). The transition to the vortex lattice state is marked by a jump followed by a linear increase in \(\langle|s_{i}|\rangle_{i,t}\). A second transition is marked by an abrupt decrease in \(\langle|s_{i}|\rangle_{i,t}\) which remains roughly constant. This is the active turbulent phase.
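A minimal sketch of this coarse-graining step is given below, assuming the vorticity field is available on a regular grid; the synthetic field, grid spacing, and domain-center coordinates used here are placeholders rather than values from the actual simulations.

```python
import numpy as np

def vortex_spins(omega, centers, a, dx=1.0):
    """Average vorticity inside a circle of diameter a/2 centered on each domain.

    omega   : 2D array of vorticity on a regular grid
    centers : (x, y) coordinates of the circular-domain centers
    a       : lattice constant (distance between obstacle centers)
    dx      : grid spacing
    """
    ny, nx = omega.shape
    X, Y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx)
    spins = []
    for cx, cy in centers:
        mask = (X - cx) ** 2 + (Y - cy) ** 2 <= (a / 4.0) ** 2   # radius a/4
        spins.append(omega[mask].mean())
    return np.array(spins)

# Example with a synthetic field (placeholder values, grid units):
omega = np.random.default_rng(3).normal(size=(140, 140))
centers = [(x, y) for x in (35.0, 105.0) for y in (35.0, 105.0)]
s = vortex_spins(omega, centers, a=70.0)
print(np.abs(s).mean())   # corresponds to <|s_i|> over lattice sites at one time
```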
We first focus on the vortex lattice phase. As shown in Fig. 1(b-e), vortex lattices appear with either "ferromagnetic" or "antiferromagnetic" order. To quantify this order we measure the spin-spin correlation function
\[\chi=\frac{\sum_{\langle i,j\rangle}s_{i}s_{j}}{\sum_{\langle i,j\rangle}|s_ {i}s_{j}|} \tag{3}\]
where \(\sum_{\langle i,j\rangle}\) denotes a sum over nearest neighbor pairs. For perfectly ferromagnetic order \(\chi=1\), while for perfectly antiferromagnetic order \(\chi=-1\). In Fig. 2(d) we plot the boundaries of the phases as well as \(\langle\chi\rangle_{t}\) as a function of activity and obstacle gap for systems in both the vortex lattice and active turbulence regimes. We do not define \(\chi\) for small activities since the central vortices, and hence spins, are not well-established in the pinned defect and motile defect phases.
Figure 1: (a) Schematic of the computational domain with a periodic array of astroid shaped obstacles summarizing the boundary conditions on the obstacles and domain edges. (b,c) “Ferromagnetic” vortex lattice state at \(d=2\) and \(\alpha=1.8\). (b) Nematic scalar order parameter \(S\) where white lines indicate the director \(\mathbf{n}\). The points where \(S=0\) are topological defects. (c) Fluid vorticity with black arrows showing the velocity field. (d) \(S\) and (e) \(\omega/\omega_{\mathrm{max}}\) for an “antiferromagnetic” vortex lattice state at \(d=6\) and \(\alpha=1.5\).
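As a concrete illustration, Eq. (3) can be evaluated directly from the lattice of vortex spins; the sketch below pairs nearest neighbors with open boundaries, which is a bookkeeping assumption rather than something specified in the text.

```python
import numpy as np

def chi(spins):
    """Spin-spin correlation of Eq. (3) for a 2D array of vortex 'spins'."""
    h = spins[:, :-1] * spins[:, 1:]    # horizontal nearest-neighbor products
    v = spins[:-1, :] * spins[1:, :]    # vertical nearest-neighbor products
    num = h.sum() + v.sum()
    den = np.abs(h).sum() + np.abs(v).sum()
    return num / den

ferro = np.full((4, 4), +1.0)                                  # uniform chirality
antiferro = (np.indices((4, 4)).sum(axis=0) % 2) * 2.0 - 1.0   # checkerboard
print(chi(ferro), chi(antiferro))   # -> 1.0 -1.0
```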
In previous work on active nematics, a one-dimensional antiferromagnetic vortex lattice was observed in channel confinement geometries [24], while numerical predictions indicate that a two-dimensional antiferromagnetic vortex lattice should appear in systems with large enough substrate friction [38]. In our system, we assume zero substrate friction so the vortices are stabilized purely by geometric confinement, but we expect the vortex lattice phase to be stable against inclusion of some substrate friction based on recent work on circularly confined systems [30]. We find both ferromagnetic and antiferromagnetic vortex lattices depending on the values of \(d\) and \(\alpha\), and to our knowledge, a ferromagnetic vortex lattice has not previously been observed or predicted in active nematics.
Vortex lattices can also form in bacterial suspensions confined by pillar arrays, but in that system, hydrodynamic interactions are the dominant ordering mechanism, and therefore the vortex lattice typically has antiferromagnetic ordering [39; 40]. In contrast, in the active nematic considered here, elastic forces play the dominant role in ordering. As shown in Fig. 1(b), since the central \(+1\) defect that forms is of spiral type, the elastic energy between circular domains is minimized if each domain has the same chirality. Thus, elastic interactions promote ferromagnetic order.
The transitions to antiferromagnetic order and active turbulence may be explained by a competition between elastic and active forces and a hierarchy of length scales. The active nematic length scale \(\xi_{a}\propto 1/\sqrt{\alpha}\) sets the defect density of a bulk active nematic system [2]. For a given obstacle size, there is an effective length scale associated with the circular domains \(R_{\rm eff}\). If \(\xi_{a}\lesssim R_{\rm eff}\), there is enough space to nucleate defects and reach the optimal defect density, so the system transitions to an active turbulent state. There is, however, another length scale associated with the size of the obstacles: the obstacle gap \(d\). In Fig. 1(d) the antiferromagnetic vortex lattice contains extra \(\pm 1/2\) defect pairs that sit in the obstacle gaps and mediate the change in chirality between central vortices. If \(\xi_{a}\gtrsim d\), a defect pair will not be stable in the gap. On the other hand, if \(d\gtrsim\xi_{a}\gtrsim R_{\rm eff}\), the vortex lattice phase is stable since an extra defect pair can nucleate in the gaps to mediate the antiferromagnetic vortex order. We note that if \(d<R_{\rm eff}\), this hierarchy of length scales cannot occur, explaining why we do not observe antiferromagnetic vortex order for small \(d\).
We now turn our attention to the active turbulent phase that occurs for large \(\alpha\). As mentioned above, the active turbulent regime is associated with a sharp decrease in \(\langle|s_{i}|\rangle_{i,t}\) (Fig. 2). We can also measure the average number of defects \(\langle N_{D}\rangle_{t}\) to detect the transition (see the Supplemental Material for details on how this and other measures are computed [34; 41; 42]). In the vortex lattice phase, \(\langle N_{D}\rangle_{t}\) is roughly constant, while in active turbulence \(\langle N_{D}\rangle_{t}\) grows linearly with \(\alpha\) (Fig. S1). Both measures are consistent with one another in marking the transition and we use them to determine the boundary shown in Fig. 2 between the motile defect phase and active turbulence for large \(d\).
Figure 2: (a–c) Average vortex “spin” \(\langle|s_{i}|\rangle_{i,t}\) as a function of activity \(\alpha\) for obstacle gaps (a) \(d=4\), (b) \(d=6\), and (c) \(d=8\). Colors indicate the phase identity. (d) Phase diagram as a function of \(d\) vs \(\alpha\), where dot color indicates the value of the time-averaged spin-spin correlation function \(\langle\chi\rangle_{t}\). \(\langle\chi\rangle_{t}\) is not well-defined in the pinned defect or motile defect phases.
It is instructive to compare the active turbulent phase of the obstacle array system with the \(d=a\) bulk system free of obstacles. While the flows in all systems become decorrelated over long time scales (Fig. S2), the flow velocity distributions \(p(|{\bf v}|)\) vary. In Fig. 3(a-c) we plot \(p(|{\bf v}|)\) for systems with \(\alpha=1.5\) at \(d=14\) (the bulk system), \(d=10\), and \(d=4\). Figure 3(a) indicates that \(p(|{\bf v}|)\) has a two-dimensional Maxwell-Boltzmann distribution with a maximum weight that shifts toward \(|{\bf v}|=0\) as the obstacle size increases and \(d\) becomes smaller. This is a natural consequence of the fact that the obstacles have no-slip conditions and their surface area increases as \(d\) decreases. The corresponding velocity direction distributions \(p(\theta_{v})\) in Fig. 3(d-f), where \(\theta_{v}=\tan^{-1}(v_{y}/v_{x})\), show an isotropic distribution for small obstacles (\(d=10\)) that is nearly identical to \(p(\theta_{v})\) for the bulk system. For larger obstacles, \(p(\theta_{v})\) becomes anisotropic and peaks along the lattice diagonals. These velocity statistics suggest that immersed obstacles can provide control over the flows even in the active turbulent phase, which could contribute to the development of novel microfluidic devices composed of active fluids.
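The quantities plotted in Fig. 3 reduce to histograms of the speed and of \(\theta_{v}\) over the sampled velocity field; a minimal sketch with a placeholder velocity field is shown below.

```python
import numpy as np

rng = np.random.default_rng(4)
vx = rng.normal(size=100_000)      # placeholder velocity components, e.g. sampled
vy = rng.normal(size=100_000)      # on the mesh nodes outside the obstacles

speed = np.hypot(vx, vy)
theta = np.arctan2(vy, vx)         # full-angle version of tan^-1(vy/vx)

p_speed, speed_edges = np.histogram(speed, bins=50, density=True)
p_theta, theta_edges = np.histogram(theta, bins=72,
                                    range=(-np.pi, np.pi), density=True)

# An isotropic flow gives a flat p(theta); peaks near +/- pi/4 and +/- 3pi/4
# would signal the diagonal anisotropy reported for large obstacles (small d).
print(p_theta.max() / p_theta.mean())
```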
Finally, we simulate the active nematic in a domain constrained by a honeycomb lattice of concave, triangular obstacles similar to those used in recent experiments [31]. Here, the obstacles introduce a triangular lattice of circular domains. An example nematic configuration and flow profile is shown in Fig. 4(a,b). In order to promote antiferromagnetic ordering, we place the system just outside the ferromagnetic vortex lattice phase for this geometry: \(a=8\), \(d=4\), and \(\alpha=2\); however, the frustration in the lattice prevents the emergence of antiferromagnetic vortex order. Instead we find a primarily ferromagnetic state in which the competition between elastic and active forces results in constantly flipping spins and the formation of a dynamical state similar to active turbulence. In Fig. 4(c) we plot \(\chi(t)\) over the course of a simulation. There are multiple time intervals where \(\chi=1\), indicating a perfectly ferromagnetic vortex lattice; however, spin flips generated by the unbinding of new defects constantly reduce \(\chi\), which sometimes becomes negative. While the dynamics resemble active turbulence, we argue that they are actually closer to those of the vortex lattice state in the square lattice with \(d=6\) and \(\alpha=0.9\), in which the underlying vortex lattice order is ferromagnetic but defect unbinding pushes the system towards antiferromagnetic order. We show in Fig. S3 that the \(\chi\)-\(\chi\) temporal autocorrelation function for the frustrated triangular lattice is similar to that of the square lattice with \(d=6\) and \(\alpha=0.9\). In both systems, but unlike an active turbulent system, \(\chi\) is correlated over long times, indicating that spins and spin flips are also correlated over time. In Fig. 4(d) we show that \(p(\theta_{v})\) peaks at the six diagonals of the lattice. We expect the peaks to become more prominent as the obstacle size increases.
Figure 3: Flow velocity distributions in the active turbulent phase with \(\alpha=1.5\) and various obstacle gap sizes. (a–c) Distributions of \(|\mathbf{v}|\) for (a) a bulk system with no obstacles, (b) obstacle gap \(d=10\), and (c) obstacle gap \(d=4\). (d–f) Velocity direction distributions \(p(\theta_{v})\) for (d) a bulk system with no obstacles, (e) obstacle gap \(d=10\), and (f) obstacle gap \(d=4\).
Figure 4: Active nematic system in a honeycomb lattice of concave triangular obstacles. (a) Time snapshot of the nematic configuration at \(a=8\), \(d=4\), and \(\alpha=2\) with color given by the scalar order parameter \(S\) and white lines indicating the nematic director \(\mathbf{n}\). (b) The corresponding vorticity (color) and flow velocity (black arrows). (c) Spin-spin correlation function \(\chi\) vs time \(t\). (d) Velocity direction distribution \(p(\theta_{v})\).
_Conclusion--_ We have numerically studied the effects on active nematics of fixed periodic astroid shaped obstacles with planar nematic anchoring. As a function of activity and obstacle size, we find a wide variety of phases, including a pinned defect phase, motile defect regime, and a vortex lattice phase that can be tuned from ferromagnetic to antiferromagnetic. There is an active turbulent phase that displays unique anisotropic velocity distributions at high activities, suggesting a new method to control active turbulence with obstacles. We also find that an antiferromagnetic vortex lattice on a triangular lattice exhibits an active frustrated state. Our system should be experimentally realizable using existing approaches for creating obstacles in active nematic systems [31, 32]. Future directions are to consider obstacle shapes that stabilize other kinds of topological defects. Also, different lattices may yield even more exotic flow states, which opens the prospect of flow control in active nematics using obstacles.
This work was supported by the U.S. Department of Energy through the Los Alamos National Laboratory. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy (Contract No. 89233218CNA000001).
|
2302.14236 | Vibration and jitter of free-flowing thin liquid sheets as target for
high-repetition-rate laser-ion acceleration | Very thin free-flowing liquid sheets are promising targets for
high-repetition-rate laser-ion acceleration. In this work, we report the
generation of micrometer-thin free-flowing liquid sheets from the collision of
two liquid jets, and study the vibration and jitter in their surface normal
direction. The dependence of their motion amplitudes on the generation
parameters is studied in detail. The origins of the vibration and jitter are
discussed. Our results indicate that when the generation parameters are
optimized, the motion amplitudes in the stable region can be stabilized below
3.7 {\mu}m to meet the stringent requirement of sheet position stability for a
tight-focusing setup in laser-ion acceleration experiments. | Zhengxuan Cao, Ziyang Peng, Yinren Shou, Jiarui Zhao, Shiyou Chen, Ying Gao, Jianbo Liu, Pengjie Wang, Zhusong Mei, Zhuo Pan, Defeng Kong, Guijun Qi, Shirui Xu, Zhipeng Liu, Yulan Liang, Shengxuan Xu, Tan Song, Xun Chen, Qingfan Wu, Xuan Liu, Wenjun Ma | 2023-02-28T01:38:36Z | http://arxiv.org/abs/2302.14236v1 | Vibration and jitter of free-flowing thin liquid sheets as target for high-repetition-rate laser-ion acceleration
###### Abstract
Very thin free-flowing liquid sheets are promising targets for high-repetition-rate laser-ion acceleration. In this work, we report the generation of micrometer-thin free-flowing liquid sheets from the collision of two liquid jets, and study the vibration and jitter in their surface normal direction. The dependence of their motion amplitudes on the generation parameters is studied in detail. The origins of the vibration and jitter are discussed. Our results indicate that when the generation parameters are optimized, the motion amplitudes in the stable region can be stabilized below 3.7 \(\upmu\)m to meet the stringent requirement of sheet position stability for a tight-focusing setup in laser-ion acceleration experiments.
1State Key Laboratory of Nuclear Physics and Technology, and Key Laboratory of HEDP of the Ministry of Education, CAPT, Peking University, Beijing 100871, China
2Beijing Laser Acceleration Innovation Center, Huairou, Beijing 101400, China
3Institute of Guangdong Laser Plasma Technology, Baiyun, Guangzhou 510540, China
4Present address: Center for Relativistic Laser Science, Institute for Basic Science, Gwangju 61005, Korea
5Present address: Institute of Radiation Physics, Helmholtz-Zentrum Dresden-Rossendorf, Dresden 01328, Germany
* wenjun.ma@pku.edu.cn
**Correspondence:**
Wenjun Ma
wenjun.ma@pku.edu.cn
**Keywords:** Liquid sheet target, Laser-ion acceleration, High-repetition-rate targets, Hydrodynamic instability, Rayleigh length
## 1 Introduction
The interaction of ultraintense laser pulses with solid targets at relativistic intensity (\(10^{18}\) W/cm\({}^{2}\)) has produced energetic protons close to 100 MeV(1) and heavy ions over 1 GeV(2). Such a novel ion acceleration method based on relativistic laser-plasma interaction attracts extensive attention in the field of particle accelerators. It not only offers an alternative route to compact accelerators but also exceptionally delivers ion beams of unique properties such as small source sizes (micrometer scale)(3, 4), short time durations (picosecond to nanosecond)(5), and ultrahigh peak flux (
A/cm). Applications of the accelerated ions, including FLASH radiotherapy(6), neutron generation(7, 8), radiolysis chemistry(9), and ultra-fast ion imaging(10), are being extensively studied.
Many potential applications of laser-driven ions require a high-repetition-rate operation (kHz or higher) to provide a high average flux of ion sources. Conventional targetry, where solid thin foils self-support on apertures that are closely arranged on a frame for shooting in sequence, can hardly meet the need(11). Typically, each laser shot would ablate a target, and a new target has to be moved in by motorized stages before the next laser shot. Several challenges arise in these processes. Firstly, solid foil targets are often manually fabricated in advance and cannot be mass-produced in a short time. However, continuous operation at kHz-repetition-rate would consume millions of targets in just one hour. The gap between target preparation and consumption is huge. Secondly, the target frame has a limited surface area. Each frame can carry no more than a thousand targets generally. It can only last for 1 second at 1 kHz repetition rate. Lastly, the less than 1 millisecond interval for target update would also be highly challenging for motorized stages to maintain micrometer-scale accuracy in target positioning.
Alternatively, free-flowing thin liquid sheets(12-23) are promising targets for high-repetition-rate laser-ion acceleration. Firstly, they are self-supporting like solid foil targets, and their electron densities are close to those of solids. Thus, they can be ideal substitutes for solid foils in laser-ion acceleration. Secondly, liquid sheet targets solve the issues of fast fabrication and target update. They are self-renewed under continuous liquid supply. After each shot, the sheet is recovered in less than 0.1 ms, which enables a shooting rate of 10 kHz or higher(15). Thirdly, the flow rates required to generate a liquid sheet are mostly less than 5 mL/min. A typical reservoir is several liters, which can support at least hours of continuous operation without downtime even without recycling. Due to the above advantages, liquid sheets have received more and more attention.
There are two schemes reported to produce very thin liquid sheets suitable for laser ion acceleration. The first is to collide two liquid jets at a certain angle(12-19), as shown in Figure 1A. A leaf-shaped planar liquid sheet will spontaneously form under laminar flow conditions. The underlying physics is as follows: the momenta of the two colliding jets are redistributed in the orthogonal plane; then the liquid radially expands outward, and gradually retracts under the pulling of surface tension to form a closed liquid sheet. The second method uses a single-exit convergent nozzle(20-23). The cross-section area of the channel gradually decreases till it reaches the minimum value at the exit, creating an equivalent collision effect of liquid inside the nozzle. After being ejected from the nozzle exit, the liquid gets rid of the constraint of the channel and expands to form a sheet.
In laser-ion acceleration, the cut-off energies of ions scale with the laser intensity \(I\) raised to powers that depend on the acceleration scheme (\(I^{0.25}\sim I^{0.6}\) for target normal sheath acceleration(24)). So the laser pulses are generally tightly focused for higher intensity, where the Rayleigh lengths are less than 10 \(\upmu\)m. Due to hydrodynamic instabilities and other factors, the liquid sheets themselves vibrate and jitter in their surface normal direction. If the amplitudes of their motion exceed the Rayleigh length, the results of laser-ion acceleration will be drastically impaired. Therefore, studying the sheets' motion under different generation parameters is necessary for a stable output of ions. Previous studies have reported a general result of \(\sigma=2\) \(\upmu\)m by exploiting side-on microscope imaging(14, 15), where \(\sigma\) is the standard deviation of sheet positions. However, to our knowledge, studies that extensively investigate the influences of various parameters on surface motion amplitudes based on accurate measurements have not been reported.
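As a rough numerical illustration of why this tolerance is so tight, the Rayleigh length of a Gaussian focus is \(z_{R}=\pi w_{0}^{2}/\lambda\); the waist radii and wavelength below are illustrative assumptions rather than parameters of the experiments cited here.

```python
import numpy as np

wavelength = 0.8e-6                     # assumed 800 nm drive laser
for w0 in (1.0e-6, 1.5e-6, 2.0e-6):     # assumed focal waist radii
    z_r = np.pi * w0**2 / wavelength    # Rayleigh length of a Gaussian beam
    print(f"w0 = {w0 * 1e6:.1f} um  ->  z_R = {z_r * 1e6:.1f} um")
# Micrometer-scale waists give Rayleigh lengths of only a few micrometers, so
# sheet motion of comparable amplitude moves the target out of the focal region.
```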
In this paper, we perform a systematic study on the vibration and jitter of free-flowing liquid sheets for laser ion acceleration. Their motions were measured by a confocal displacement detector with high precision at the kHz sampling rate, which enabled us to obtain accurate and detailed motion trajectories. We then studied how the generation parameters affect the motion amplitudes of sheets. Summaries of the results and the analysis of the origins of the vibration and jitter are given. In the end, we discussed the feasibility of using liquid sheet targets for laser ion acceleration with a tight focus.
## 2 Methods
### Liquid sheets generation
The liquid sheets were generated with our homemade liquid-sheet-generation system. It is composed of a liquid-delivery subsystem (Figure 1B) and a capillary positioning subsystem (Figure 1D). The liquids used in this work were pure ethylene glycol (EG) or aqueous solutions of EG at different mass concentrations. The solutions were prepared by weighing with a balance and confirmed by a portable refractometer. In the liquid-delivery subsystem, the liquids were driven by a high-performance liquid chromatography (HPLC) pump (Shimadzu, LC-20ADXR) with a maximum flow rate of 5 mL/min. It was then equally injected into two symmetrical arms. A homemade pulsation dampener was installed to attenuate the pulsation of flow coming from the pump before the liquid entered the capillaries. The inner diameter of the capillaries was 50 \(\upmu\)m. The two liquid jets out of the capillaries collided at a certain angle to produce the liquid sheets. As shown in Figures 1A, 1C, when the jets are in the xOz plane, the resulting liquid sheet lies in the yOz plane. The minimum thickness of liquid sheets was measured as 1.1 \(\upmu\)m by white light interferometry (the detail for the measurement is beyond the scope of this work). The lengths of the jets L (defined as the length between a capillary tip and collision point of the two jets, see Figure 1A) and their collision angle (\(2\theta\)) were controlled by the capillary positioning subsystem (Figure 1D). The lengths of the jets were altered by sliding the mounting bases of the capillaries on the rails. The angle between the jets can be changed by adjusting the angles of the rails.
### Measurements of sheets' motions
We employed a confocal displacement detector (Micro-Epsilon, confocalDT 2421) to measure the motion of the liquid sheets. Figure 2 depicts the setup. The probe head of the detector was placed along the normal direction of the sheet, i.e., the x-direction. The working principle of the confocal displacement detector is as below: white light is focused by a chromatic lens in the sensor so that the focal spot positions for lights with different wavelengths are different; the detector collects the light reflected from the surface of the object (the liquid sheet in our case) and measures its wavelength; according to the measured wavelength, the position of the object can be determined with a 24-nanometer precision. The sampling rate of the detector was set to 1 kHz, which is enough to capture the high-frequency motion of the sheets. In addition to the confocal sensor, there were two lens-CCD imaging systems on the opposite and lateral sides to record the coordinates of measurement points and the lengths of the jets, respectively. Multiple light sources were used for imaging illumination.
Figure 1: (**A**) Schematic of the liquid sheets generation process in the two-jet-collision regime. (**B**) Schematic of our liquid-delivery subsystem. (**C**) Photograph of the generated liquid sheet. (**D**) Photograph of the capillary positioning subsystem.
### Data processing methods
We illustrate the data processing methods and typical measurement results under a set of generation parameters, which is referred to as the control group in this work. The parameters are: pumping flow rate of 3.5 mL/min, the mass concentration of EG 100%, the lengths of the jets 0.9 mm, the pulsation dampener disabled (not in use), and the collision angle (\(2\theta\)) 60\({}^{\circ}\). There were 7 measurement points along the central line of the sheet (shown in Figure 3A). At each point, the measurements were performed five times. The duration of each measurement was 30 s. Figure 3B shows the raw data of one measurement at a certain point, and Figure 3C is a zoom-in of Figure 3B. One can see that there is high-frequency vibration superposed on low-frequency jitter. The vibration generally accounts for a small part (\(\sim\)20%) of the total amplitude, while the jitter contributes a majority (\(\sim\)80%). For a quantitative study, the inset in Figure 3D presents the histogram of the sheet's position corresponding to one measurement. We define the amplitude of the motion by the threshold \(X_{95}\) (below which 95% of values are contained). That is to say, under 95% of circumstances, the position of the sheet will be within \(\pm X_{95}\) range. The \(X_{95}\) of five measurements for the same measurement point are averaged, yielding \(\overline{X_{95}}\) as the motion amplitude at this point.
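As a concrete illustration of the \(X_{95}\) statistic described above, the sketch below computes it from a raw displacement trace with numpy. Centring the trace on its mean and taking a percentile of the absolute deviation is our reading of the definition; the variable names and the synthetic example data are illustrative only.

```python
import numpy as np

def x95_amplitude(trace_um):
    """X95 of one displacement trace (um): 95% of |deviations| fall below it."""
    deviations = np.abs(trace_um - np.mean(trace_um))
    return np.percentile(deviations, 95)

def mean_x95(traces_um):
    """Average X95 over repeated measurements at the same point."""
    return float(np.mean([x95_amplitude(t) for t in traces_um]))

# Example: five synthetic 30-s traces sampled at 1 kHz (30,000 points each)
rng = np.random.default_rng(0)
traces = [rng.normal(0.0, 0.5, 30_000) for _ in range(5)]
print(f"mean X95 = {mean_x95(traces):.2f} um")
```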
Figure 2: (**A**) Schematic and (**B**) photograph of the measurement setup. The confocal detector is used to measure the motion of the sheets. Two imaging systems record the experimental parameters. Multiple light sources are used for imaging illumination.
Figure 3. (**A**) The arrangement of the measurement points in a typical experiment. (**B**) Raw data of the motion of a liquid sheet corresponding to one measurement at a certain point. (**C**) A zoom-in of (**B**). (**D**) The motion amplitude of the sheet as a function of the distance to the collision point. The inset in d is the typical histogram of one measurement, where two vertical lines indicate the threshold \(X_{95}\).
The motion amplitude of the sheet as a function of the distance to the collision point is shown in Figure 3D. One can see that it is negligible at 0.25 mm, and is almost constant from 0.8 mm to 3.0 mm with amplitudes around 1 \(\upmu\)m. The 2.2-mm-long plateau region is long enough and very beneficial for a stable laser-target interaction. Therefore, this region is defined as the stable region of the sheet in this work. We speculate that such small motion amplitudes are due to two reasons. Firstly, the hydrodynamic instabilities (such as Plateau-Rayleigh Instability(25)) are well-suppressed. One can calculate that the Reynolds number(15) is only 52, considering the EG's viscosity of 16 mPa \(\cdot\) s (under the condition of 25\({}^{\circ}\)C and 1 atm) (26), and the small flow rate (3.5 mL/min). The development of the hydrodynamic instabilities for such a small Reynolds number is very slow. Secondly, the vibration of the capillaries is also well-controlled in our case due to the mechanical robustness of the capillary positioning subsystem (Figure 1D). Otherwise, the motion amplitude will increase with the distance to the collision point.
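The quoted Reynolds number of about 52 can be checked with a short calculation. The sketch below assumes each jet carries half of the pumped flow through a 50-\(\upmu\)m capillary and takes the density of ethylene glycol as roughly 1110 kg/m\({}^{3}\), a literature value not stated in the text.

```python
import math

flow_rate_total = 3.5e-6 / 60   # total pumped flow rate, m^3/s (3.5 mL/min)
d = 50e-6                       # capillary inner diameter, m
mu = 16e-3                      # EG dynamic viscosity, Pa*s (25 C, 1 atm)
rho = 1110.0                    # EG density, kg/m^3 (assumed literature value)

q_jet = flow_rate_total / 2     # each of the two jets carries half the flow
v = q_jet / (math.pi * (d / 2) ** 2)   # mean jet velocity, m/s
re = rho * v * d / mu

print(f"jet velocity ~ {v:.1f} m/s, Reynolds number ~ {re:.0f}")  # ~52
```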
As shown in Figure 3D, the motion amplitude abruptly rises to 11 \(\upmu\)m at 3.3 mm away from the collision point. Around this point, the liquid in the first sheet is retracted by the surface tension and starts to form the second orthogonal sheet (as shown in Figures 1A, 1C). In this region, the streamline changes drastically, and it is reasonable that this causes greater instability and hence a larger motion amplitude.
## 3 Results
The motion amplitude of the liquid sheets could be influenced by sheet generation parameters such as the flow rate of the pump, the mass concentration of EG solution, the lengths of the jets, the status of the pulsation dampener, and the collision angle \(2\theta\). We performed a systematic study to quantify their influences and present the results below. The data processing methods are the same as those in the control group.
### Flow rate
We used four different flow rates (3.5, 4, 4.5, 5 mL/min) while keeping the other parameters the same as those of the control group. It can be seen from Figure 4A that the amplitude monotonically increases with the increased flow rate. This trend is consistent at all three measurement points. We speculate that these phenomena result from the increased reciprocating frequency and driving pressure of the pistons adapted for larger flow rates. These two factors lead to an evident increase in the pump's output power. Hence the mechanical vibration power and flow pulsation transmitted to the liquid sheet are increased.
### Mass concentration of EG solution
We performed the experiment with four different mass concentrations (100%, 75%, 50%, 25%) while the other parameters remained the same as those of the control group. Figure 4B illustrates that the amplitude goes down first and then up with the decreased concentration. This trend is the same for all three measurement points. The reason behind this is speculated to be the competition of two factors: the pump's output power and the hydrodynamic instabilities. Decreasing the EG concentration is in fact decreasing the viscosity of the liquid. For a lower viscosity, the pump pressure is lower at a
given flow rate, and hence so is the pump's output power. For reference, the pump pressures of the four liquids are ~35 MPa, ~19 MPa, ~10 MPa, and ~5.3 MPa, respectively. Since the liquid is driven at lower pressure and power, the mechanical vibration and flow pulsation transmitted from the pump to the sheet are reduced. When the concentration further decreases to 25%, the viscosity is too low to suppress the growth of the hydrodynamic instabilities. As a result, the motion amplitude of the sheet rises again. Nevertheless, amplitudes at these concentrations are still smaller than those of pure EG.
### Lengths of the jets
The lengths of the jets took seven values (0.9, 3.4, 5.9, 8.4, 10.9, 13.4, 15.9 mm), while the other parameters were the same as those of the control group. Figure 4C shows the amplitude of the sheets as a function of the lengths of the jets. The most notable feature is that the amplitude is very large when the lengths are 10.9 mm. We speculate that it coincidentally corresponds to a resonance mode of the sheet. Besides this abnormal point, the amplitude increases with increased lengths of the jets. This is because the instabilities have a longer time to develop in a longer jet. In fact, we found in the experiments that if the lengths of the jets are longer than 8.4 mm, the developed instabilities will cause the sheet to disintegrate into a spray near the end of the sheet.
### Pulsation dampener
The liquid was pumped by the piston with a sinusoidal reciprocating motion, so the instantaneous output flow rate is also pulsing, which is a source of disturbance. We use a homemade stainless-steel cylinder full of air at 1 atm as the pulsation dampener to attenuate the flow pulsation. Since air is more easily compressed than liquid, the dampener takes in the excess liquid at the peak of the flow and sends some liquid back at the valley, and therefore attenuates the flow pulsation.
Accordingly, the stimulated instabilities are reduced, and so is the motion amplitude of the sheet. We studied the performance of the dampener while keeping the other parameters the same as that of the control group. As we can see from Figure 4D, after the dampener is enabled, the amplitude decreases by about 0.5 \(\upmu\)m in the stable region.
### Collision angle (\(2\theta\))
Figure 4E shows the motion amplitude of the sheets with collision angles of \(60^{\circ}\) and \(90^{\circ}\), keeping the other parameters the same as those of the control group. It can be seen that when the angle of collision increases from \(60^{\circ}\) to \(90^{\circ}\), the amplitude decreases by about 0.51 \(\upmu\)m in the stable region. A previous study found that increasing the collision angle is detrimental to the stability of the sheets, especially when using low-viscosity liquids at high flow rates(27). However, our results indicate that increasing the collision angle can still be an option to reduce the amplitude if the flow rate is relatively low (3.5 mL/min) and the liquid viscosity is high (16 mPa \(\cdot\) s).
## 4 Discussion and conclusion
### Discussion
For a clear illustration of the influences from various parameters, we summarize all the above results in Figure 5. The data for a specific sheet generation parameter comes from three measurement points with distance to the collision point of 1.5 mm, 2.0 mm, and 2.5 mm, respectively.
Figure 4: Motion amplitude of liquid sheets with different generation parameters: (**A**) flow rate, (**B**) mass concentration of EG solution, (**C**) lengths of the jets, (**D**) status of pulsation dampener, (**E**) collision angle (\(2\theta\)).
According to Figure 5, in order to minimize the amplitude of surface motion, the optimized strategies are keeping the lengths of the jets small, using a pulsation dampener, reducing the flow rate, and adjusting the mass concentration to an optimal value. Nevertheless, a high flow rate is typically beneficial for laser-driven ion acceleration: the size of the sheet is larger, and the thickness is smaller(28). Therefore, a trade-off has to be made based on the specific requirements of applications. Lastly, under the condition of a small flow rate and a high liquid viscosity, the motion amplitudes of the sheets are slightly smaller for a collision angle of 90\({}^{\circ}\) than for 60\({}^{\circ}\).
A comparison between the motion amplitudes of the sheets and the Rayleigh length of the laser pulse is of particular interest. Currently, the peak power of kHz-repetition-rate femtosecond lasers is typically less than 1 TW. In order to achieve a high intensity for ion acceleration, the laser pulses typically are tightly focused with a small F-number (F/#) of \(\sim\)1. The diffraction limited (1/e\({}^{2}\)) focal spot radius \(\omega_{0}\) is calculated as
\[\omega_{0}=1.22\lambda\cdot F,\]
where \(\lambda\) is the wavelength of the laser and it takes the value of 800 nm. Therefore, the focal spot radius \(\omega_{0}\sim 1\)\(\upmu\)m. Rayleigh length \(z_{R}\) is defined as
\[z_{R}=\frac{\pi\omega_{0}^{2}}{\lambda},\]
hence the corresponding Rayleigh length \(z_{R}\sim 3.7\)\(\upmu\)m. According to Figure 5, the motion amplitudes in the stable region for most groups are less than 3.7 \(\upmu\)m. For better performance, for example amplitudes below 1 \(\upmu\)m, one needs to adopt the proper parameters identified in this work.
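The two expressions above amount to a one-line numerical check; the short sketch below evaluates them for the stated \(\lambda\) = 800 nm and an assumed F/# of 1.

```python
import math

wavelength = 800e-9   # laser wavelength, m
f_number = 1.0        # F/# of the focusing optic (assumed ~1)

w0 = 1.22 * wavelength * f_number       # diffraction-limited focal spot radius, m
z_r = math.pi * w0 ** 2 / wavelength    # Rayleigh length, m

print(f"w0 ~ {w0 * 1e6:.2f} um, z_R ~ {z_r * 1e6:.1f} um")  # ~0.98 um, ~3.7 um
```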
### Conclusion
Figure 5: A summary of sheet generation parameters’ influences on the motion amplitude of the liquid sheet.
In summary, we produce ethylene glycol liquid sheets in the double-jet collision scheme and perform a thorough study on their motion in the surface normal direction. We find that it is crucial to avoid the development of instabilities by optimizing the sheet generation parameters to minimize the amplitude of the motion. After optimizations, we can achieve amplitudes smaller than 1.5 \(\upmu\)m to fully meet the requirement of sheet position stability for a tight-focusing setup in laser-ion acceleration experiments. Table-top ion and neutron sources based on kHz femtosecond TW lasers and liquid sheet targets are feasible and promising.
## 5 Conflict of Interest
_The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest_.
## 6 Author Contributions
WM provided overall leadership and oversight for the experiment. ZC and ZPe conceived, conducted the experiments, performed analysis, and prepared the manuscript. YS, JZ, SC, YG, JL, PW, ZM, ZPa, DK, GQ, SRX, ZL, YL, SXX, TS, XC, QW and XL all contributed to the development of the liquid-sheet-generation system. WM helped with writing, proofreading the manuscript and reformed its structure. All authors reviewed, edited, and contributed to the manuscript.
## 7 Funding
NSFC Innovation Group Project (grant number 11921006), National Grand Instrument Project (grant number 2019YFF01014402), the National Natural Science Foundation of China (grant number 12205008) and the National Science Fund for Distinguished Young Scholars (12225501).
## 8 Acknowledgments
This work was supported by the following projects: NSFC Innovation Group Project (grant number 11921006), National Grand Instrument Project (grant number 2019YFF01014402) and the National Natural Science Foundation of China (grant number 12205008). W. Ma acknowledges support from the National Science Fund for Distinguished Young Scholars (12225501). The authors thank Prof. Ke Xu, Dr. Wei Yang, Shuai Zheng, and Fei Yu for advice on liquid-sheet-generation system design. The picture of the beaker in Figure 1B is provided by "OpenClipart-Vectors" from "pixabay.com". The content of the manuscript has been uploaded to arXiv as a preprint.
## 9 Data Availability Statement
Datasets are available on request:
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
|
2309.06540 | Identifying Bayesian Optimal Experiments for Uncertain Biochemical
Pathway Models | Pharmacodynamic (PD) models are mathematical models of cellular reaction
networks that include drug mechanisms of action. These models are useful for
studying predictive therapeutic outcomes of novel drug therapies in silico.
However, PD models are known to possess significant uncertainty with respect to
constituent parameter data, leading to uncertainty in the model predictions.
Furthermore, experimental data to calibrate these models is often limited or
unavailable for novel pathways. In this study, we present a Bayesian optimal
experimental design approach for improving PD model prediction accuracy. We
then apply our method using simulated experimental data to account for
uncertainty in hypothetical laboratory measurements. This leads to a
probabilistic prediction of drug performance and a quantitative measure of
which prospective laboratory experiment will optimally reduce prediction
uncertainty in the PD model. The methods proposed here provide a way forward
for uncertainty quantification and guided experimental design for models of
novel biological pathways. | Natalie M. Isenberg, Susan D. Mertins, Byung-Jun Yoon, Kristofer Reyes, Nathan M. Urban | 2023-09-12T19:28:26Z | http://arxiv.org/abs/2309.06540v2 | # Identifying Bayesian Optimal Experiments for Uncertain Biochemical Pathway Models
###### Abstract
Pharmacodynamic (PD) models are mathematical models of cellular reaction networks that include drug mechanisms of action. These models are useful for studying predictive therapeutic outcomes of novel drug therapies _in silico_. However, PD models are known to possess significant uncertainty with respect to constituent parameter data, leading to uncertainty in the model predictions. Furthermore, experimental data to calibrate these models is often limited or unavailable for novel pathways. In this study, we present a Bayesian optimal experimental design approach for improving PD model prediction accuracy. We then apply our method using simulated experimental data to account for uncertainty in hypothetical laboratory measurements. This leads to a probabilistic prediction of drug performance and a quantitative measure of which prospective laboratory experiment will optimally reduce prediction uncertainty in the PD model. The methods proposed here provide a way forward for uncertainty quantification and guided experimental design for models of novel biological pathways.
Bayesian optimal experimental design · Bayesian inference · Pharmacodynamic models · Uncertainty quantification
## 1 Introduction
The intricate and lengthy process of discovering novel drugs stands to gain significant advantages through the application of computational methods.[1] These methods provide efficient and cost-effective avenues for exploring expansive chemical spaces, predicting molecular properties, and optimizing potential drug candidates. Dynamic models of biological systems (e.g., pharmacokinetic (PK) or pharmacodynamic (PD) models) are useful tools for studying biomolecular processes. Such models have been applied to study inter- and intra-cellular phenomena, including regulatory, metabolic, and signalling processes within human cells.[2, 3, 4] These models are derived from first-principles approximations of the complex dynamics that occur _in vivo_. The utility of these biological models lies in their simplicity: they tractably capture qualitative system behaviors at the expense of prediction accuracy. Although these dynamic models are often used in preclinical drug development to determine _in vivo_ drug response,[5] they are rarely applied in drug discovery. Biological pharmacodynamic models are valuable tools in both the optimal design and validation phases of identifying novel drug candidates, making them crucial for computational drug discovery. Furthermore, regulatory agencies are beginning to accept _in silico_ studies as part of the validation and testing of new medical therapies and technologies.[87] This underscores a need to formalise approaches for uncertainty quantification and model improvement for pharmacodynamic models.
Uncertainty quantification is a critical analysis that is often overlooked in computational drug design, despite its relevance and available software tools.[8] Previous studies have applied methods of uncertainty quantification to models for calibrating algorithmic parameters in automated diabetes treatment systems,[9] a PK/PD cancer model for the antivascular endothelial growth factor in the presence of a cancer therapeutic agent,[10] the calibration of dose-response relationships for the \(\gamma\)-H2AX assay to predict radiation exposure,[11] and peptide-MHC affinity prediction using a data-driven model.[12] In the aforementioned studies, experimental or clinical data was readily available for the proposed uncertainty quantification procedures. For many PD models, the scarcity of quantitative data for calibrating these models has led to significant challenges in uncertainty quantification. This limited availability of experimental data results in non-unique and/or unconstrained parameter estimations, leading to issues of nonidentifiability.[13] Researchers have acknowledged the presence of significant parameter correlations, nonidentifiability, and parameter sloppiness in dynamic biology models, where parameters can vary over orders of magnitude without significantly affecting model output.[14] Given these limitations, it is evident that incorporating uncertainty quantification into biological dynamic models, even in the absence of available data, is crucial if they are to be used for computational drug discovery and evaluation.
Optimal experimental design (OED) has been proposed as a method for improving model parameter estimates in biological pharmacodynamic models. In OED, existing measured data is used to determine what new experiment(s) should be done to best reduce parameter uncertainties.[15] Several works have applied different flavors of optimal experimental design to dynamic biological models. Work done by Liepe et al.[16] demonstrates OED for maximizing expected information gain in a signaling pathway model featuring 6 differential equations, 4 uncertain parameters, and selecting between 5 experimental outcomes. Researchers in Bandara et al.[17] showed that a sequential experimental design for a cell signaling model, coupled with fluorescence microscopy-based experimental data, was able to reduce uncertainty in some parameters, while it increased in others due to nonidentifiability. This work also provides an example of selecting optimal experiments based on an experimentally relevant metric. This approach is in opposition to the more common purely information-theoretic approach. We note that the model used in this study is relatively small, only featuring 4 differential equations, 4 uncertain rate constants, and selecting between 3 experimental possibilities. Researchers in Eriksson et al.[18] utilized approximate Bayesian computation to identify approximate posterior distributions for uncertain model parameters. This work presents a larger scale problem, with 34 reactions for 25 species, 34 uncertain parameters, and 6 experimental possibilities from which to select. This approximate approach is applicable to scenarios where the data-generation distribution (likelihood function) is not available in analytic form.
In this paper, we utilize methods from Bayesian optimal experimental design using exact Bayesian inference and Hamiltonian Monte Carlo (HMC) to recommend experiments for mechanistic model improvement. Our focus is on a dynamic model of programmed cell death, i.e., _apoptosis_. This model predicts synthetic lethality in cancer in the presence of a _PARP1_ inhibitor. Therefore, when we refer to selecting an optimal _experimental design_, we mean "which species should we measure from a _PARP1_-inhibited cell experiment to improve confidence in simulated lethality predictions of a given inhibitor?" Our goal is to first understand the impact of parameter uncertainty on the predictive reliability of this model. Then, the aim is to recommend experimental measurements that can be done in the lab to maximize confidence in simulated lethality predictions. To do this, we conduct parameter inference using HMC for large volumes of data generated from simulated experiments. This way, we can determine which experimental data, in expectation, would most reduce uncertainty in modeled drug performance predictions. Such an approach provides researchers with a quantitative method for linking physical experiment design to model improvement in the absence of existing data.
It is important to note that while our optimal experimental design study is conducted with simulated experimental data, and the results are thus subject to our modeling assumptions, the dynamic model underpinning our analysis contains biologically relevant and measurable parameters, thus offering insights that can guide future experimental endeavors. Furthermore, the framework we lay out here is general and applicable to any relevant biological modeling setting where experimental data is obtainable.
Formally, the contributions of this work are threefold: (1) Applying Bayesian optimal experimental design and high-performance computing (HPC) to a large-scale coupled ODE system of 23 equations with 11 uncertain parameters, choosing between 5 experimental designs -- the largest PD system considered thus far in a fully Bayesian OED framework of which the authors are aware; (2) Proposing novel decision-relevant metrics for quantifying the uncertainty in therapeutic performance of novel inhibitor drugs at a given concentration (IC\({}_{50}\) ); and (3) Identifying optimal experiments that minimize the uncertainty in therapeutic performance (i.e., simulated lethality) as a function of the inhibitor dosage of interest.
As we will show, there is a preference for measuring activated caspases at low IC\({}_{50}\) to reduce uncertainty in the probability of cell death. At larger IC\({}_{50}\), our results show that this uncertainty is maximally reduced by collecting data on the
mRNA-Bax concentrations. Therefore, we conclude that the decision as to which species to experimentally measure must be selected by considering trade-offs between the predicted BOED objectives and the inhibitor viability _in vitro_. The approach and results presented here thus bridge model-guided optimal design with uncertainty quantification, advancing drug discovery through uncertainty-aware decision-making.
## 2 Methods
The problem addressed in this work is to understand the impact of model parameter uncertainty on the predictive capacity of the PD model for _PARP1_-inhibited cellular apoptosis, a representative model for cancer drug evaluation. Given the effects of uncertainty and the lack of experimental data, we wish to reestablish a predictive model by recommending measurements that can be made in the lab to better constrain the model parameters.
The workflow to achieve the stated goals is as follows:
1. Acquire many (synthetic) experimental measurements for each prospective measurable species in the model
2. Construct _prior probability distributions_ (i.e., beliefs regarding model parameter distributions in the absence of data)
3. Conduct parameter estimation via Bayesian inference using the model, data, and prior probabilities for each prospective experiment and over multiple data samples
4. Compute expected drug performance predicted by the model given _posterior probability distributions_ (i.e., updated parameter distribution beliefs after incorporating data)
5. Rank and recommend experiments based on a metric that quantifies reliability in model predictions.
A graphical depiction comparing traditional parameter calibration to the proposed approach using simulated experimental data is shown in Figure 1.
Figure 1: Workflow for Bayesian parameter calibration using (a) real, measured data and (b) experimental data simulated from the model.
Traditional model parameter calibration via Bayesian inference is shown in Figure 1a. The model-derived prior predictive distribution of concentration profiles for Species "A" is constrained by experimental data (in **black**). This leads to an updated probability density function for the model parameters \(\theta\) referred to as the _posterior distribution_. The posterior distribution is then used to quantify inhibitor performance via probability of achieving apoptosis in simulation across an array of IC\({}_{50}\) values. We can then compare the expected inhibitor drug performance between the prior and posterior uncertainty. More accurate predictions by the posterior are thus expected because real data was used to update the model.
The methodology utilized in the present work is presented in Figure 1b. Here, we do not have access to real, measured data to constrain our model. However, we can use the PD model and an expert-derived error model to generate an ensemble of simulated experimental data. In simulating and calibrating to a large volume of simulated experimental data, we account for uncertainty in what _could_ be measured in reality (in **black**). This then leads to an ensemble of posterior distribution predictions for the parameters \(\theta\). We obtain ensembles of posterior parameter distributions for several potential measurable species in the model (e.g., Species "B" and Species "C"). From these, we can compute an ensemble of inhibitor performances as measured by probability of apoptosis across a range of IC\({}_{50}\) values. This implies a distribution on expected inhibitor performances as predicted by "measuring" different species and using the simulated measurements for calibration. By comparing the distributions in expected inhibitor performance across different measured species, we can identify which experiment is "optimal" by selecting the one that leads to the greatest reduction in uncertainty.
### Mathematical Model
The general form of a pharmacodynamic model is shown in Eqn. 1, which represents a set of coupled ordinary differential equations.
\[\begin{split}&\dot{y}(t)=f\left(y(t);u,\kappa\right)\\ & y(t_{0})=y_{0}\end{split} \tag{1}\]
In the rate-based modeling framework used here, variables \(y(t)\in\mathbb{R}^{n}\) are state variables representing the time-varying concentrations of the \(n\) chemical species of interest within the cell signalling pathway. The parameters \(\kappa\in\mathbb{R}^{m}\) are the parameter data, which include the kinetic rate constants for the reaction network. Initial concentrations of species are those at \(t_{0}\), specified by the value of the vector \(y_{0}\in\mathbb{R}^{n}\). The system parameters and time-varying species concentrations are related by the functions \(f_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}\), indexed by \(i\in\{1,\ldots,n\}\). Also, we separate the input parameters in Equation 1 into two vectors: \(u\), the control variables, and \(\kappa\), the uncertain model parameters. The pre-specified input parameters \(u\) are any experimental controls set _a priori_. In the model considered here, this is the half maximal inhibitory concentration (IC\({}_{50}\)) for a novel _PARP1_ inhibitor molecule. IC\({}_{50}\) is a commonly used metric in experimental biology for quantifying a molecule's ability to inhibit a specific biological function.
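Equation 1 is a standard initial-value problem. As a minimal illustration (not the 23-equation apoptosis model itself, which is given in the supplement), the sketch below integrates a toy two-species network with scipy, with the control \(u\) (IC\({}_{50}\)) and the rate constants \(\kappa\) passed to the right-hand side; all names and values are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y, u, kappa):
    """Toy two-species stand-in for the right-hand side of Eqn. 1."""
    k1, k2 = kappa
    inhibition = 1.0 / (1.0 + u)          # crude effect of the control u (IC50)
    dy0 = -k1 * inhibition * y[0]
    dy1 = k1 * inhibition * y[0] - k2 * y[1]
    return [dy0, dy1]

y0 = [1.0, 0.0]        # initial concentrations y(t0)
kappa = (0.5, 0.1)     # kinetic rate constants (placeholder values)
u = 0.01               # IC50 control input

sol = solve_ivp(f, (0.0, 50.0), y0, args=(u, kappa),
                t_eval=np.linspace(0.0, 50.0, 200))
print(sol.y[:, -1])    # species concentrations at the final time point
```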
### Bayesian Optimal Experimental Design
Bayesian optimal experimental design is a statistical framework that aims to identify the most informative experimental conditions or measurements to improve parameter estimation and model predictions in the presence of uncertainty [19, 20]. It involves iteratively selecting experiments [21, 22] that maximize the expected information gain or other objectives, such as mean objective cost of uncertainty [23, 24], given prior knowledge, model uncertainty, and the available resources. The goal is to strategically design experiments that provide the most valuable information via uncertainty reduction in the calibrated model predictions.
In our application, we want to identify an experimental design \(\xi\) (i.e., a specific species we can measure from the cell in the presence of a _PARP1_ inhibitor) from the set of feasible designs, \(\Xi\), which optimizes information gain regarding uncertain parameters, \(\theta\), given the outcome of that experiment, \(y\). Let us denote the set of all uncertain parameters within the ODE system, including initial conditions and rate constants, as the set \(\Theta\). For tractability reasons, we may choose to only consider a subset of the parameters which are deemed most significant via global sensitivity analysis and refer to the vector of key, significant parameters as \(\theta\in\Theta\). We also denote the vector of experimental designs being considered as \(\xi\in\mathcal{X}\subset\Xi\), and \(|\mathcal{X}|=N_{\xi}\) is the number of design decisions to choose from in optimization. Here, the individual experiments represent a species to measure in an experiment representing the process modeled in the _PARP1_-inhibited cell apoptosis PD model. We assume prior distributions on the uncertain parameters \(p(\theta)\), and a likelihood of the data conditional on the parameters and controls variables, \(p(y|\theta,\xi,u)\). For a general objective function of interest, \(\mathcal{J}(\xi;u)\), where \(\xi\) are the decision variables and \(u\) inputs to Equation 1, the Bayesian optimal experimental design task is described by Equation 2.
\[\xi^{*}=\arg\max_{\xi\in\Xi}\mathcal{J}(\xi;u) \tag{2}\]
Traditionally, this objective \(\mathcal{J}\) is the expected information gain (EIG), a quantitative measure of entropy reduction between the prior and posterior probability distributions of \(\theta\).[25, 26] Under such a formulation, the objective function becomes a nested expectation computation under the marginal distribution of experimental outcomes. The posterior distributions are often then estimated using a Markov Chain Monte Carlo (MCMC) algorithm to compute the information gained. For this application, we are not interested in optimal experiments which simply minimize relative entropy in the prior and posterior distributions in \(\theta\), as is the standard practice in BOED. Instead, we wish to select optimal experimental designs that minimize uncertainty in lethality predictions made by the ODE model under different inhibitor drug IC\({}_{50}\), i.e., different \(u\in\mathcal{U}\). In this model setting, lethality (i.e., probability of cell apoptosis) can be viewed as a probabilistic analog to the traditional dose-response curve and is defined in Equation 3.
To that end, we devise two metrics for quantifying uncertainty in cell lethality predictions at different IC\({}_{50}\) values. The first is the uncertainty in the probability of triggering cell apoptosis at a given IC\({}_{50}\), represented as \(\sigma_{apop}(u)\). A graphical representation of \(\sigma_{apop}(u)\) across IC\({}_{50}\) is shown in Figure 2a. This quantity defines a characteristic height of the distribution of the lethality curve predictions at a specific IC\({}_{50}\) of interest. Smaller \(\sigma_{apop}\) values thus imply that there is higher confidence in the PD model predictions at the IC\({}_{50}\) they are computed at. One may choose to only consider the \(\sigma_{apop}\) at very small IC\({}_{50}\), since we are interested in drug molecules which are most effective at low doses.
The second metric provides a quantification of the uncertainty in the inhibitor drug dosage (i.e., IC\({}_{50}\) ) that achieves a given probability of triggering cell apoptosis, and is represented as \(\sigma_{IC_{50}}(\mathbb{T})\). A graphical representation of \(\sigma_{IC_{50}}(\mathbb{T})\) across IC\({}_{50}\) and for \(\mathbb{T}=0.75\) is shown in Figure 2b. This metric defines a characteristic width of the lethality distribution at a target probability of triggering cell death, \(\mathbb{T}\in[0,1]\). A smaller \(\sigma_{IC_{50}}\) value at a given \(\mathbb{T}\) implies higher confidence in which IC\({}_{50}\) value achieves a given probability of killing the cell. More detail on these metrics is discussed in Appendix 6.3. In addition, we discuss the multi-objective treatment of these two metrics, and the sensitivity of the optimal recommended experiment to that treatment, in Section 4.
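One way to realize these two metrics numerically is sketched below, assuming the ensemble of lethality curves is stored as an array of shape (curves, IC\({}_{50}\) grid points): \(\sigma_{apop}\) is the spread across the ensemble at a fixed dosage, and \(\sigma_{IC_{50}}\) is the spread of the dosage at which each curve crosses the target probability \(\mathbb{T}\), found here by interpolation. Function and variable names are ours, not the paper's code.

```python
import numpy as np

def sigma_apop(lethality, ic50_grid, u):
    """Std. dev. of P(apoptosis) across the ensemble at the IC50 value u."""
    j = int(np.argmin(np.abs(np.asarray(ic50_grid) - u)))
    return float(np.std(lethality[:, j]))

def sigma_ic50(lethality, ic50_grid, target):
    """Std. dev. of the IC50 at which each curve crosses the target probability."""
    ic50_grid = np.asarray(ic50_grid)
    crossings = []
    for curve in lethality:
        # curves fall with IC50, so reverse both axes to interpolate on rising x
        crossings.append(np.interp(target, curve[::-1], ic50_grid[::-1]))
    return float(np.std(crossings))

# Example with a synthetic ensemble of sigmoidal lethality curves
ic50_grid = np.array([0.001, 0.01, 0.032, 0.1, 0.32, 1.0, 3.2, 10.0, 100.0])
rng = np.random.default_rng(1)
ensemble = np.array([1.0 / (1.0 + (ic50_grid / rng.lognormal(0.0, 0.3)) ** 2)
                     for _ in range(100)])
print(sigma_apop(ensemble, ic50_grid, 0.01), sigma_ic50(ensemble, ic50_grid, 0.75))
```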
## 3 Application
We consider a dynamic _PARP1_-inhibited cell apoptosis model for wild-type human cells published in [27]. We note that this pathway can be found in some tumorous cells as well. Thus, for the present study, the process that this model represents is the inhibition of _PARP1_ as a therapy for cancer. By applying an inhibitor via a specified IC\({}_{50}\), inhibition of _PARP1_ halts the DNA strand break repair process, reducing damage repair pathways and potentially leading to cell apoptosis. Further information on the model and the complete system of differential equations is available in Equation S12.
The model consists of a set of 23 ordinary differential equations (ODE) wherein a subset of the 27 rate constant parameters are considered uncertain. This model takes as inputs (1) an IC\({}_{50}\) value and (2) initial conditions and (3) kinetic parameter values. In the present study, we only consider output at a specific time point, (i.e., the final time point) when performing inference. This means that the likelihood we supply for the data follows a Normal distribution in the model prediction at \(t=\tau\), where \(\tau\) is the time limit for the numerical integrator used to solve the ODE system at a given IC\({}_{50}\), and with a known \(\sigma\) that is computed with a \(10\%\) error relative to the model prediction. Information for the uncertain parameters considered in this study was first elicited from expert understanding of the system and relevant sources. Nominal parameter values and confidence interval information is shown in Table S1. Details regarding the construction of all prior distributions on uncertain parameters considered in the present study are provided in Table S2. In addition, because experimental data for species concentrations within a single cell undergoing _PARP1_-inhibited apoptosis are not readily available, we utilize simulated experimental data for our study that is generated via the procedure described in Appendix Section 6.1.
### Objective function specification
As noted previously, the quantity-of-interest in assessing the performance of a _PARP1_ inhibitor is the predicted lethality. The predicted lethalities are then used to compute our uncertainty metrics or objective functions, \(\sigma_{apop}(u)\) and \(\sigma_{IC_{50}}(\mathbb{T})\). Here, we specify the method for computing lethality, i.e., probability of triggering cell death. It has been shown that an activated caspase molecule count in the cell above a threshold of 1,000 is sufficient to trigger cell apoptosis.[16] Therefore, we can compute whether or not cell apoptosis has been irreversibly triggered by the introduction of a _PARP1_ inhibitor by computing \(y_{k}^{max}:=\max\limits_{t\in[1,\ldots,\tau]}y_{k}(t)\) where \(k=\)Casp-act. Thus, lethality, or \(P(apoptosis)\), can be computed by sampling
the prior (posterior) probability distributions of the uncertain parameters, \(\theta^{i}\sim p(\theta)\)\(\forall i\in[1,\dots,N_{s}]\), specifying a value for IC\({}_{50}\), and integrating the system of ODEs. Given the ensemble of \(N_{s}\) maximum-caspase concentrations, \(P(apoptosis)\) can be calculated as shown in Equation 3.
\[P(apoptosis):=\frac{1}{N_{s}}\sum_{i=1}^{N_{s}}\mathbb{1}_{y_{k,i}^{max}\geq 1000} \tag{3}\]
Here, \(N_{s}\) is the Monte Carlo sample count. When computed at all IC\({}_{50}\) of interest, we retrieve a vector of \(P(apoptosis|u)\) predictions \(\forall u\in\mathcal{U}\), producing a _lethality curve_ for a given posterior parameter estimate.
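The Monte Carlo estimate in Equation 3 can be written schematically as below. The two callables are hypothetical placeholders for the prior/posterior sampler and for the forward ODE solve returning the peak activated-caspase count; the simple stand-ins at the bottom exist only to make the snippet runnable.

```python
import numpy as np

CASPASE_THRESHOLD = 1000.0   # molecule count above which apoptosis is triggered

def p_apoptosis(sample_theta, simulate_max_caspase, ic50, n_samples=1000, seed=0):
    """Monte Carlo estimate of Eqn. 3 at a fixed IC50."""
    rng = np.random.default_rng(seed)
    hits = sum(simulate_max_caspase(sample_theta(rng), ic50) >= CASPASE_THRESHOLD
               for _ in range(n_samples))
    return hits / n_samples

# Illustrative stand-ins (NOT the real PD model or priors)
def sample_theta(rng):
    return rng.lognormal(mean=0.0, sigma=0.5, size=11)

def simulate_max_caspase(theta, ic50):
    return 2000.0 * theta[0] / (1.0 + 10.0 * ic50)

ic50_grid = [0.001, 0.01, 0.032, 0.1, 0.32, 1.0, 3.2, 10.0, 100.0]
lethality = [p_apoptosis(sample_theta, simulate_max_caspase, u) for u in ic50_grid]
print(lethality)   # one lethality curve
```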
### Incorporating biological constraints
In addition to the expert-elicited prior probability information in Table S1, there is other biologically and therapeutically relevant information that we can incorporate into Bayesian parameter estimation to further constrain our posterior predictions. First, we know that for _PARP1_ inhibitor drugs with sufficiently high IC\({}_{50}\) values (i.e., low efficacy), we expect the repair signal from _PARP1_ to win out over the programmed cell death induced by p53. This means that at high IC\({}_{50}\), the DNA double strand break will be repaired and the cell will live. We also know the opposite to be true: for low IC\({}_{50}\), programmed cell death will be induced and the cell will die. In mathematical terms, we can say:
Figure 2: Graphical depictions of uncertainty metrics for probabilistic lethality curves in (a) \(\sigma_{apop}(u)\) at \(u=0.001\) and (b) \(\sigma_{IC_{50}}(\mathbb{T})\) at target probability threshold \(\mathbb{T}\) = 0.75. The shaded green region represents a distribution of _lethality curves_ that is computed from an ensemble of posterior distributions in the uncertain parameters \(\theta\). The mean of this distribution in lethality curves is shown in the square data points.
\[P(\text{apoptosis})=\begin{cases}0,&\text{if IC}_{50}\geq 100\\ 1,&\text{if IC}_{50}\leq 0.001\\ \text{Equation }3&\text{otherwise}\end{cases} \tag{4}\]
This prior knowledge regarding the effect of a drug on our quantity of interest does not directly translate to knowledge regarding the uncertain model parameters. However, because the quantity of interest is computed directly from an output of the model (i.e., concentration of activated caspases) we can still include this prior knowledge in Bayesian inference in three equivalent ways: in the likelihood, the prior, or through _post facto_ filtering of the estimated posterior chain. For the present application, we choose the posterior filtering approach due to numerical challenges in the HMC sampling algorithm used to estimate the posterior distribution.
The posterior filtering approach used involves sampling a larger chain (in this study, 50,000 samples) for each simulated experimental data point. If a given parameter sample in the posterior chain leads to the violation of the constraints in Equation 4 when specified in the forward simulation model under the nominal IC\({}_{50}\) (i.e., Talazoparib, see Appendix Section 6.1), that sample is removed from the chain.
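A sketch of this _post facto_ filtering step is given below. It assumes the raw HMC chain is an array of parameter samples and that a placeholder forward-model helper returns the peak activated-caspase count at a given dosage; the specific check shown (survival at the high-dose limit, death at the low-dose limit) is one reading of the constraints in Equation 4.

```python
import numpy as np

CASPASE_THRESHOLD = 1000.0

def satisfies_constraints(theta, simulate_max_caspase):
    """Feasibility check for one posterior sample (one reading of Eqn. 4)."""
    survives_weak_inhibitor = simulate_max_caspase(theta, 100.0) < CASPASE_THRESHOLD
    dies_potent_inhibitor = simulate_max_caspase(theta, 0.001) >= CASPASE_THRESHOLD
    return survives_weak_inhibitor and dies_potent_inhibitor

def filter_chain(chain, simulate_max_caspase):
    """Keep only posterior samples that pass the biological feasibility check."""
    keep = np.array([satisfies_constraints(theta, simulate_max_caspase)
                     for theta in chain])
    return chain[keep]

# chain: array of shape (50_000, n_params) returned by the HMC sampler
# filtered = filter_chain(chain, simulate_max_caspase)
```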
## 4 Results and Discussion
In this section, we provide the key findings of our study applying Bayesian optimal experimental design to the _PARP1_-inhibited cell apoptosis model. First, we conduct a prior sensitivity analysis, followed by posterior estimation and computation of our optimal experimental design objectives.
### Prior Sensitivity
We conducted a prior sensitivity analysis for the quantity of interest, \(P(\text{apoptosis})\). The purpose of this study is to understand how much reduction in uncertainty is required to achieve desired lethality curve predictions. The result of this study is shown in Figure 3. Here, the \(P(\text{apoptosis})\) is computed by (1) drawing 1,000 random samples from prior probability distributions for each uncertain parameter, then (2) solving the model in Equation S12 at those parameter sample values and (3) computing \(P(\text{apoptosis})\) via Equation 3 for all IC\({}_{50}\) = [0.001, 0.01, 0.032, 0.1, 0.32, 1.0, 3.2, 10.0, 100.0]. To determine the model sensitivity against the priors, we scale \(\sigma\) for each parameter by a multiplicative factor to narrow the support of the prior distribution around its nominal mean. In Figure 3a, we neglect our prior constraints and just solve the model under the unconstrained prior samples. Under these conditions, we see that given the nominal prior uncertainty (i.e., \(1.0\sigma\)), the effect of IC\({}_{50}\) on lethality vanishes (i.e., it is a more-or-less flat line). From the results, it appears that the model requires _at minimum_ a reduction in prior uncertainty of two orders of magnitude (i.e., \(0.01\sigma\)) before a predictive model is restored via sigmoidal curvature in the prior lethality curve. This implies that achieving a reduction in uncertainty spanning two orders of magnitude across all uncertain parameters is imperative to differentiate therapeutic performance between different inhibitors using the PD model.
Figure 3: The probability of cell death \(P(\text{apoptosis})\) plotted against IC\({}_{50}\) for different multiplicative factors of the standard deviation \(\sigma\) in the prior distributions (a) without and (b) with applying the constraints in Equation 4.
In Figure 3b, we conduct the same sensitivity analysis while also applying the constraints in Equation 4 to restrict the model-predicted lethality values using biological insights. Under these conditions, we obtain lethality curve predictions that have the desired sigmoidal shape, showing low therapeutic performance at large IC\({}_{50}\) and high therapeutic performance at small IC\({}_{50}\). However, it is also evident that significant reduction in parameter uncertainty (0.0001\(\sigma\)) is required to achieve apoptotic switch behavior at low IC\({}_{50}\).[16]
It is worth noting that this sensitivity analysis involved altering all prior \(\sigma\) values by the same amount, and the existence of parameter correlations could potentially yield alternative outcomes in terms of the necessary uncertainty reduction. However, these results underscore the need for substantial reduction in parameter uncertainty in PD models to re-establish predictive ability.
### Posterior Estimation
We estimate posterior probability distributions for the uncertain parameters using the proposed Bayesian optimal experimental design approach. Details regarding the underlying algorithm are provided in Appendix Section 6.2. The summary statistics for (1) mean posterior parameter estimates, (2) effective sample size per parameter, and (3) filtered chain length are shown in Tables S7-S16. The filtering procedure leads to non-uniform posterior chain lengths and lower effective sample sizes, as shown in the summary statistics. We note that only approximately 1% of samples in the posterior chains pass the constraints enforced via Equation 4. This is indicative of a known challenge in applying feasibility constraints after sampling: that the vast majority of the parameter space explored is ultimately infeasible, even for sufficiently large chains. Therefore, in future work, we aim to explore methods such as Bayesian constraint relaxation [28] to enforce relaxed constraints on our probabilistic model while maintaining numerical stability in standard Hamiltonian Monte Carlo sampling algorithms. Still, the average ESS per parameter is close to the average chain length for each species, implying a high number of non-correlated samples in the posterior chains we computed. Additionally, we note that the assumption of Log-Normal prior probability distributions for all the model parameters in Table S1 has ramifications for the final derived posterior distributions.
### Value of Information Estimation
The ensemble of predicted lethality curves for each species in Table S3 is shown in Figure 4. It is clear from the plots that certain species induce less spread in lethality predictions across IC\({}_{50}\) (e.g., mRNA-Bax) while others exhibit more variability (e.g., Casp-pro). To quantitatively understand these trends, we compute the values of \(\sigma_{apop}(u)\) and \(\sigma_{IC_{50}}(\mathbb{T})\). The plots in Figures 5a-5b show how these uncertainty metrics evolve over \(u\) and \(\mathbb{T}\), respectively. Tabulated results are available in Tables S4 and S5.
Figure 4: Ensemble of lethality curves predicted for each measurable species considered at different IC\({}_{50}\) values. Each individual curve represents a different lethality prediction obtained from a single synthetic experimental measurement. The mean over all 100 measurements is shown in black.
It is seen in Figure 5a that there is a regime at small IC\({}_{50}\) (which is the regime with the greatest therapeutic viability) wherein the minimizer of \(\sigma_{apop}\) changes. In particular, at IC\({}_{50}\) = 0.01 and IC\({}_{50}\) = 0.03, the \(\xi\) which minimizes the \(\sigma_{apop}\) objective is Casp-act. At IC\({}_{50}\) > 0.03, the minimizer becomes mRNA-Bax for the rest of the IC\({}_{50}\) range considered. This can also be seen in Table S4, where the minimum value of \(\sigma_{apop}\) in each column (i.e., for each IC\({}_{50}\) ) is demarcated with an \(*\).
When considering the \(\sigma_{apop}\) objective alone, and when only placing significance in reducing uncertainty at low IC\({}_{50}\), the optimal decision \(\xi^{*}\) = Casp-act. At IC\({}_{50}\) = 0.01, the potential reduction in uncertainty in \(P(apoptosis)\) under the experimental design (i.e., measurement) \(\xi^{*}\) = Casp-act is 24% when compared to Bad-Bcl-xL, the species with the highest \(\sigma_{apop}(0.01)\). We note again that reduction in \(\sigma_{apop}\) directly translates to a reduction in the uncertainty regarding the predicted \(P(apoptosis)\) for a given inhibitor (i.e., at a given IC\({}_{50}\) ). This is analogous to reducing uncertainty in a given inhibitor drug's expected performance. In summary, this result implies that it is optimal to acquire and calibrate to experimental data regarding Casp-act concentrations if we wish to maximize confidence in the expected lethalities of inhibitors with low IC\({}_{50}\).
In the case of \(\sigma_{IC_{50}}\), as shown in Figure 5b, the trend of which \(\xi\) minimizes the objective remains consistent across threshold values \(\mathbb{T}=[0.5,0.6,0.7,0.8,0.9]\). Thus, when considering the \(\sigma_{IC_{50}}\) objective alone, and for any threshold, the optimal \(\xi^{*}\) = mRNA-Bax. We note that reduction in \(\sigma_{IC_{50}}\) directly translates to a reduction in the uncertainty regarding which IC\({}_{50}\) achieves the target \(P(apoptosis)\) threshold, \(\mathbb{T}\). This is analogous to reducing uncertainty in which inhibitor drug achieves the desired expected performance. For a potent inhibitor with \(\mathbb{T}=0.9\), selecting the optimal experimental design \(\xi^{*}\) = mRNA-Bax over Casp-act (i.e., the species with the highest \(\sigma_{IC_{50}}(0.9)\)) results in a substantial 57% reduction in uncertainty regarding the specific IC\({}_{50}\) associated with achieving this cell death probability. In summary, this result implies that it is optimal to acquire and calibrate to experimental data regarding mRNA-Bax concentrations if we wish to maximize model prediction confidence in which inhibitor dosage achieves a given expected lethality.
When combining these two metrics to determine an optimal experimental design (i.e., which protein to measure in the lab to reduce uncertainty in lethality curve predictions) we provide an optimal design \(\xi^{*}\) against the following objectives: (1) \(\sigma_{IC_{50}}\) at a threshold of \(0.90\), (2) \(\sigma_{apop}\) at IC\({}_{50}\) = 0.01, and (3) weighted averages of both. In the case of the weighted averages, we take the objective to be defined as: \(w_{1}\sigma_{IC_{50}}+w_{2}\sigma_{apop}\) wherein \(w_{1}+w_{2}=1\). For the purposes of illustrating the sensitivity of the combined objective to the selection of weights \(w_{1},w_{2}\), we show this weighted average objective under \(w_{1}=[0.01,0.1]\). These results are tabulated in Table S6. The results of the weighted average show that at \(w_{1}=0.01\), Casp-act is optimal, while at \(w_{1}=0.1\), mRNA-Bax is preferred. Although the weights used here are arbitrary, domain knowledge should be integrated in the assignment of weights, leading to optimal experiments that align with what is understood regarding the underlying biological pathways.
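The scalarization described above can be expressed in a few lines. The metric values below are placeholders chosen only to reproduce the qualitative switch reported in the text (Casp-act preferred at \(w_{1}=0.01\), mRNA-Bax at \(w_{1}=0.1\)); they are not the values in Tables S4-S6.

```python
def combined_objective(sigma_ic50, sigma_apop, w1):
    """Weighted average of the two uncertainty metrics, with w1 + w2 = 1."""
    return w1 * sigma_ic50 + (1.0 - w1) * sigma_apop

# Placeholder per-design metric values (illustrative only)
designs = {
    "Casp-act":   {"sigma_ic50": 0.80, "sigma_apop": 0.10},
    "mRNA-Bax":   {"sigma_ic50": 0.30, "sigma_apop": 0.14},
    "Bad-Bcl-xL": {"sigma_ic50": 0.70, "sigma_apop": 0.16},
}

for w1 in (0.01, 0.1):
    best = min(designs, key=lambda d: combined_objective(
        designs[d]["sigma_ic50"], designs[d]["sigma_apop"], w1))
    print(f"w1 = {w1}: recommended measurement = {best}")
```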
In each of the objectives considered, either activated caspase or Bax mRNA is identified as the optimal experimental measurement that could maximally reduce model prediction uncertainty if used in calibration. Here, we provide some insight as to why this may be the case given the present study. A possible explanation for the switch from \(\xi^{*}\) = Casp-act to \(\xi^{*}\) = mRNA-Bax at low to intermediate IC\({}_{50}\) may lie in the time dynamics. If a higher IC\({}_{50}\) is needed to kill a cancer cell via apoptosis, then measuring a species that represents the precursor steps to triggering apoptosis may be more informative than the activated caspase itself. It is also possible that mRNA-Bax performs best across a wider range of drug dosages because it is more proximal to the mechanism of action. Once _PARP_ is inhibited, only a few reactions are needed to trigger mRNA-Bax formation, and ultimately, the activation of caspase.
Figure 5: Uncertainty metrics for synthetic lethality computed at different IC\({}_{50}\) using posterior distributions inferred using concentration data for the measurable species considered.
These differing outcomes across the different experimental design objectives underscore the essential role of biological insights in differentiating among optimal experimental designs.
## 5 Conclusions
In this work, we utilized a realistic PD model for _PARP1_-inhibited cell apoptosis in a model-guided optimal experimental design workflow to reduce uncertainty in the performance of cancer drug candidates. We first showed that, under prior uncertainty, the ability of the model to discriminate between _PARP1_ inhibitors' therapeutic performance disappears. Through extensive simulations and Bayesian inference, we have demonstrated the potential of our methodology to reduce parameter uncertainties and reestablish a discriminatory PD model for drug discovery. Additionally, our exploration of the _PARP1_-inhibited cell apoptosis model has yielded novel insights into species-specific dependencies with _PARP1_ inhibitor IC\({}_{50}\), informing optimal measurement choices. Introducing therapeutic performance metrics based on the half maximal inhibitory concentration (IC\({}_{50}\) ), we provide a comprehensive evaluation of drug efficacy under parameter uncertainty. Specifically, we identified optimal experiments under three distinct objectives: (1) \(\sigma_{IC_{50}}\) at a threshold of 0.90, (2) \(\sigma_{apop}\) at IC\({}_{50}\) = 0.01, and (3) a weighted average of both metrics. At low IC\({}_{50}\), our results indicate that measuring activated caspases is optimal, with the potential to significantly reduce uncertainty in model-predicted cell death probability. At high IC\({}_{50}\), Bax mRNA emerges as the favored contender for experimental measurement. We note that our algorithmic approach, which includes collecting a large number of posterior samples and then rejecting parameter samples that violate biological constraints, leads to a majority of samples being rejected. Therefore, in future work, we will explore alternatives for enforcing biological constraints in Bayesian inference, such as relaxed constraints in the data likelihood. Additionally, we would like to identify lab experiments that can be conducted to acquire data which is compatible with our model so that we can validate the results of our OED and further constrain the model parameters. In summary, our combined metric-based approach not only elucidates the importance of tailored species measurements to minimize uncertainty in lethality curve predictions but also introduces a flexible framework for optimal experimental design using novel biological pathway models. Thus, our contributions bridge the critical gap between model-guided optimal design and uncertainty quantification, advancing the realm of computational drug discovery.
## 6 Appendix
### Simulated Experimental Data
Simulated experimental data was generated for each of the \(N_{\xi}\) observable proteins by taking random parameter samples from their respective prior predictive distributions (see Figure S8) and modifying these samples via a relative error of \(10\%\) across all parameters. The value of \(10\%\) was selected as it is the typical standard deviation found in experimental measurements in the laboratory. First, \(N_{d}\) random draws from the parameter prior distributions are made independently for each uncertain parameter, i.e. \(\theta_{i}^{sim}\sim p(\theta)\quad\forall i=[1,\dots,N_{d}]\). Next, the \(\theta_{i}^{sim}\) are used to solve the forward model in Equation S6 at an IC\({}_{50}\)=0.005083 \(\mu M\), which is the measured IC\({}_{50}\) for WT cells under the _PARP1_-inhibitor Talazoparib, as shown in Figure 6. This generates a "true" \(y\).
The noisy \(y_{k,i}^{sim}\quad\forall k=[1,\dots,N_{\xi}],\forall i=[1,\dots,N_{d}]\) used in inference in this work is generated by applying a multiplicative error \(\epsilon\), where we assume that the error is log-normally distributed:
Figure 6: The HCT116 dose response curves. Details on the study can be found in [27].
\[y_{k,i}^{sim}=y_{k,i}\epsilon,\ \ \ log(\epsilon)\sim\mathcal{N}(0,\sigma^{2})\ \ \ \forall k=[1,\ldots,N_{\xi}],\forall i=[1,\ldots,N_{d}]\]
where the standard deviation \(\sigma\) is 0.1 for all species \(k\). It follows that:
\[log(y_{k,i}^{sim})=log(y_{k,i})+log(\epsilon)\] \[y_{k,i}^{sim}\sim exp(log(y_{k,i})+log(\epsilon)),\]
\(\forall k=[1,\ldots,N_{\xi}],\forall i=[1,\ldots,N_{d}]\). To avoid numerical issues at 0 concentrations (i.e., \(y_{k,i}=0\)), we employ a parameter \(a\) defined such that \(a=b-min\{y_{k,i}\}\), where \(b\) is a small tolerance (in this study, \(b=0.001\)). This leads to the final form of our simulated experimental data:
\[y^{sim}\sim exp(log(y+a)+log(\epsilon)),\ \ \ log(\epsilon)\sim\mathcal{N}(0, \sigma^{2})\]
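For concreteness, this noise model can be reproduced in a few lines of Julia (the language used for the rest of the implementation); the function and variable names below are illustrative only and are not taken from the actual codebase.

```julia
# Minimal sketch of the multiplicative log-normal noise model (illustrative names).
function add_noise(y_true::Vector{Float64}; σ::Float64 = 0.1, b::Float64 = 0.001)
    a = b - minimum(y_true)            # offset so that y_true .+ a > 0 at zero concentrations
    logε = σ .* randn(length(y_true))  # log(ε) ~ N(0, σ²)
    return exp.(log.(y_true .+ a) .+ logε)
end

y_true = [0.0, 0.2, 1.5, 3.7]          # toy concentration trajectory
y_sim  = add_noise(y_true)
```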
### Algorithmic approach
The workflow diagram depicted in Figure 7 outlines the systematic procedure employed for conducting Bayesian Optimal Experimental Design (BOED) within the context of the _PARP1_-inhibited cell apoptosis model.
The process begins by specifying prior uncertainty descriptions for the key model parameters (\(p(\theta)\ \forall\theta\in\Theta\)) determined by global sensitivity analysis results presented in [27]. This subset is shown in Table S1. Next, synthetic experimental data (\(y^{sim}_{k,i}\)) are generated for each experimental design (indexed by \(k=\{1,\ldots,N_{\xi}\}\)) under consideration, as listed in Table S3, and for \(i=\{1,\ldots,N_{d}\}\) samples from the prior distributions and a known error model (see Table S7). We note that we select the experimental designs listed in Table S3 because these species exhibited high prior predictive uncertainty, as seen in Figure S8. These synthesized data are used to constrain the uncertain model parameters via Bayesian Inference, where the posterior parameter probability distributions (\(p(\theta|y^{sim}_{k,i},\xi_{k})\)) are estimated using a HMC algorithm (i.e., a type of MCMC algorithm for sampling from an unknown distribution) for each \(k=\{1,\ldots,N_{\xi}\}\) and for each datum \(i=\{1,\ldots,N_{d}\}\). Next, constraints based on biological feasibility are then applied to reject samples in the Markov chains that violate the expected behavior of the cell under extreme _PARP1_ inhibitor dosages. This leads to modified posterior estimates (\(p^{\prime}(\theta|y^{sim}_{k,i},\xi_{k})\)). These posterior distributions are then utilized to compute the Quantity of Interest (QOI), specifically the probability of cell apoptosis (\(P(apoptosis|u)\)), which is assessed for various inhibitor IC\({}_{50}\) values (\(u\in\mathcal{U}\)). These QOI values serve as the basis for computing the uncertainty metrics: \(\sigma_{apop}(u)\) and \(\sigma_{IC_{50k}}(\mathbb{T})\) for each experimental design. Finally, the uncertainty metrics are combined to make optimal experimental decisions, thereby providing recommendations for the most effective experimental designs. This comprehensive workflow guides the process of leveraging BOED to enhance our understanding of how we can optimally reduce uncertainty in the model predictions made by the _PARP1_-inhibited cell apoptosis model under different drug dosage conditions.
Figure 7: Workflow utilized for Bayesian experimental design for the _PARP1_-inhibited cell apoptosis model.
### Uncertainty Metrics
The calculated lethality serves as the basis for deriving our fundamental experimental design objectives: the \(\sigma_{IC_{50}}(\mathbb{T})\) and \(\sigma_{apop}(u)\) uncertainty metrics. These metrics offer a quantifiable means of assessing the reduction in uncertainty associated with the various experimental designs, \(\xi\).
The \(\sigma_{IC_{50}}(\mathbb{T})\) metric is determined by fitting a 4-parameter sigmoidal curve, specifically the form \(P(apoptosis|IC_{50})_{i}=d_{i}+\frac{a_{i}-d_{i}}{1+(\frac{IC_{50}}{c_{i}})^{\nu_{i}}}\), to the computed \(P(apoptosis|u)_{i}\) values for each IC\({}_{50}\) value and for each \(i\in[1,\dots,N_{d}]\). Subsequently, the intersection points of the sigmoidal fits with pre-specified \(P(apoptosis)\) thresholds (e.g., \(\mathbb{T}\) = [0.5, 0.6, 0.7, 0.8, 0.9]) determine the associated IC\({}_{50}\) values. The \(\sigma_{IC_{50}}(\mathbb{T})\) metric is then defined as the standard deviation over the IC\({}_{50}\) values at which the sigmoidal fits intersect the threshold.
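As an illustration of how \(\sigma_{IC_{50}}(\mathbb{T})\) can be computed in practice, the sketch below fits the 4-parameter sigmoid and inverts it at a single threshold \(T\); the use of the LsqFit.jl package and all variable names are assumptions made for this example, since the text does not specify which fitting routine was used.

```julia
using LsqFit, Statistics

# 4-parameter sigmoid: P(apoptosis | IC50) = d + (a - d) / (1 + (IC50 / c)^b)
sigmoid(x, p) = p[2] .+ (p[1] .- p[2]) ./ (1 .+ (x ./ p[3]) .^ p[4])

# One lethality curve per sample (rows of P_apop_samples); σ_IC50(T) is the
# standard deviation of the IC50 values at which the fitted curves cross T.
function sigma_ic50(ic50_grid, P_apop_samples, T; p0 = [1.0, 0.0, 0.01, 1.0])
    crossings = Float64[]
    for P in eachrow(P_apop_samples)
        fit = curve_fit(sigmoid, ic50_grid, collect(P), p0)
        a, d, c, b = fit.param
        push!(crossings, c * ((a - d) / (T - d) - 1)^(1 / b))  # invert the sigmoid at T
    end
    return std(crossings)
end
```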
The \(\sigma_{apop}\) metric is computed by taking the standard deviation over all predicted probabilities of apoptosis at a given IC\({}_{50}\), i.e., \(P(apoptosis|u)_{i},\quad\forall i=[1,\dots,N_{d}],\forall u\in\mathcal{U}\). This is defined in Equation 5, where \(\bar{P}(apoptosis|u)\) is the arithmetic mean of the predicted probabilities of cell death at the specified inhibitor IC\({}_{50}\), or \(u\).
\[\sigma_{apop}(u):=\sqrt{\frac{1}{N_{d}-1}\sum_{i=1}^{N_{d}}\left(P(apoptosis|u)_{i}-\bar{P}(apoptosis|u)\right)^{2}}\quad\forall u\in\mathcal{U} \tag{5}\]
### Implementation
The entirety of the workflow presented here was implemented using the programming language Julia.[32] Specifically, we utilize the probabilistic programming library Turing.jl[25] to implement our probabilistic model for Bayesian inference. To sample the posterior chain and obtain posterior probability distributions, we use the No U-Turn Sampler (NUTS), a variant of HMC, with an acceptance rate of 0.65. The ODE model evaluations within the probabilistic model are conducted via the DifferentialEquations.jl library.[30]
The implementation of BOED we propose here exploits several elements of high-performance computing approaches to accelerate the optimization process. First, the implementation of the ODE model was translated from the rule-based modeling language BioNetGen [31] into Julia. When performing ODE model simulations at the nominal parameter settings, the BioNetGen implementation requires 0.10 seconds of CPU time. In contrast, solving the ODE model under the same nominal parameter settings using Julia takes only 0.0018 seconds of CPU time. Consequently, our Julia-based model implementation exhibits an impressive 98% reduction in CPU time compared to the BioNetGen implementation. Although these individual execution times may already appear low, the acceleration of individual model evaluations is critically important in the context of Bayesian optimal experimental design. This is because these model evaluations are used in estimating the data likelihood across various parameter values, and as the number of these evaluations increases, the cumulative effect on the computational time of Bayesian inference becomes more pronounced. Secondly, we accelerate the computation of each of the 100 posterior estimates per experimental design by using 100 CPU cores in parallel. This means that we can compute 100 posteriors simultaneously, decreasing the time-to-solution over a sequential approach. Thus, we show that the combined utilization of these high-performance computing strategies enhances the efficiency and practical applicability of our proposed Bayesian optimal experimental design framework within the context of a complex PD model. All instances of posterior sampling through NUTS were solved on the Institutional Cluster at the Scientific Data and Computing Center at Brookhaven National Laboratory. Runs were done in parallel using 3 nodes at a time. Each node consists of 2 Intel(R) Xeon(R) E5-2695 v4 CPUs @ 2.10GHz with 18 cores each.
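For readers unfamiliar with this stack, a minimal, self-contained sketch of the Turing.jl/DifferentialEquations.jl inference setup is shown below; the one-state decay ODE, priors, and parameter names are illustrative stand-ins and not the actual _PARP1_ apoptosis model.

```julia
using Turing, DifferentialEquations

# Illustrative one-state ODE standing in for the apoptosis model.
decay!(du, u, p, t) = (du[1] = -p[1] * u[1])
ts   = 0.0:0.5:5.0
prob = ODEProblem(decay!, [1.0], (0.0, 5.0), [0.5])

@model function fit_model(data, ts, prob; σ = 0.1)
    k ~ LogNormal(0.0, 1.0)                          # illustrative prior on the rate constant
    sol = solve(prob, Tsit5(); p = [k], saveat = ts) # forward model evaluation
    for i in eachindex(data)
        data[i] ~ LogNormal(log(sol.u[i][1] + 0.001), σ)  # multiplicative error model
    end
end

# Synthetic data from the "true" rate k = 0.5, then NUTS sampling at target acceptance 0.65.
data  = exp.(log.(exp.(-0.5 .* ts) .+ 0.001) .+ 0.1 .* randn(length(ts)))
chain = sample(fit_model(data, ts, prob), NUTS(0.65), 1_000)
```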
## Funding
This work was supported by the New York Empire State Development Big Data Science Capital Project, the DOE Advanced Scientific Computing Research Applied Mathematics Fellowship, and the Brookhaven Laboratory Directed Research and Development DEDUCE Project. (NMI, KRR, BJY, NMU) Funding sources include federal funds from the National Cancer Institute, National Institutes of Health, and the Department of Health and Human Services, Leidos Biomedical Research Contract No. 75N91019D00024, Task Order 75N91019F00134 through the Accelerating Therapeutics for Opportunities in Medicine (ATOM) Consortium under CRADA TC02349. (SDM)
## Acknowledgements
The authors would like to acknowledge Manasi P. Jogalekar and Morgan E. Diolaiti for providing experimental data related to the IC\({}_{50}\) for Talazoparib used in this work.
In addition, we acknowledge the roles of Naomi Ohashi and Eric A. Stahlberg for their project management roles at the Frederick National Laboratory for Cancer Research and ATOM Research Alliance.
## Author Contributions
Conceptualization of problem, NMI, SDM, BJY, KR, and NMU; Methodology, NMI, SDM, BJY, KR, and NMU; Implementation, NMI; Analysis, NMI, SDM, BJY, KR, and NMU; Writing - original draft, NMI; Writing - reviewing and editing, NMI, SDM, BJY, KR, and NMU.
## Declaration of Interests
SDM is Founder and CEO of BioSystems Strategies, LLC.
NMI, BJY, KR, and NMU declare no competing interests.
|
2309.09673 | Strong Coupling of Two-Dimensional Excitons and Plasmonic Photonic
Crystals: Microscopic Theory Reveals Triplet Spectra | Monolayers of transition metal dichalcogenides (TMDC) are direct-gap
semiconductors with strong light-matter interactions featuring tightly bound
excitons, while plasmonic crystals (PCs), consisting of metal nanoparticles
that act as meta-atoms, exhibit collective plasmon modes and allow one to
tailor electric fields on the nanoscale. Recent experiments show that TMDC-PC
hybrids can reach the strong-coupling limit between excitons and plasmons
forming new quasiparticles, so-called plexcitons. To describe this coupling
theoretically, we develop a self-consistent Maxwell-Bloch theory for TMDC-PC
hybrid structures, which allows us to compute the scattered light in the near-
and far-field explicitly and provide guidance for experimental studies. Our
calculations reveal a spectral splitting signature of strong coupling of more
than $100\,$meV in gold-MoSe$_2$ structures with $30\,$nm nanoparticles,
manifesting in a hybridization of exciton and plasmon into two effective
plexcitonic bands. In addition to the hybridized states, we find a remaining
excitonic mode with significantly smaller coupling to the plasmonic near-field,
emitting directly into the far-field. Thus, hybrid spectra in the strong
coupling regime can contain three emission peaks. | Lara Greten, Robert Salzwedel, Tobias Göde, David Greten, Stephanie Reich, Stephen Hughes, Malte Selig, Andreas Knorr | 2023-09-18T11:25:09Z | http://arxiv.org/abs/2309.09673v1 | # Strong Coupling of Two-Dimensional Excitons and Plasmonic Photonic Crystals:
###### Abstract
Monolayers of transition metal dichalcogenides (TMDC) are direct-gap semiconductors with strong light-matter interactions featuring tightly bound excitons, while plasmonic crystals (PCs), consisting of metal nanoparticles that act as meta-atoms, exhibit collective plasmon modes and allow one to tailor electric fields on the nanoscale. Recent experiments show that TMDC-PC hybrids can reach the strong-coupling limit between excitons and plasmons forming new quasiparticles, so-called plexcitons. To describe this coupling theoretically, we develop a self-consistent Maxwell-Bloch theory for TMDC-PC hybrid structures, which allows us to compute the scattered light in the near- and far-field explicitly and provide guidance for experimental studies. Our calculations reveal a spectral splitting signature of strong coupling of more than \(100\,\mathrm{meV}\) in gold-\(\mathrm{MoSe}_{2}\) structures with \(30\,\mathrm{nm}\) nanoparticles, manifesting in a hybridization of exciton and plasmon into two effective plexcitonic bands. In addition to the hybridized states, we find a remaining excitonic mode with significantly smaller coupling to the plasmonic near-field, emitting directly into the far-field. Thus, hybrid spectra in the strong coupling regime can contain three emission peaks.
## I Introduction
The light-matter interaction strength in transition metal dichalcogenide (TMDC) monolayers has been reported to be extremely strong [1], e.g., as demonstrated by absorption rates of up to 10% in the visible spectrum [2; 3]. Such a high absorption is particularly noteworthy given the two-dimensional nature of these materials, which possess a thickness of less than \(1\,\mathrm{nm}\). In addition to featuring a direct bandgap [2], TMDC monolayers support in-plane exciton formation due to their two-dimensional structure [4]. Excitons (bound electron-hole pairs) therefore dominate the optical spectrum below the band edge [5]. In addition, the remarkably thin nature of TMDC monolayers results in an increased sensitivity to surrounding materials. Consequently, the atomically thin materials can easily be influenced by various factors, such as the choice of the substrate material, defects [6; 7], and functionalization [8], e.g., with molecules [9; 10] and heterostructure configurations [11; 12].
In contrast, the optical response of metal nanoparticles (MNPs) is dominated by localized plasmons which are collective electron oscillations formed within the metal conduction band [13]. A special feature of MNPs is a significant amplification of the electric near-field, which additionally allows for manipulating the electric field on dimensions far below the diffraction limit [14; 15]. Arranging MNPs as meta-atoms in a crystal structure yields a plasmonic crystal (PC) with extraordinary strong light-matter interaction. The localized plasmons couple with the electric field and form plasmon-polaritons which can propagate within the crystal [16; 17] and sharpen the single particle plasmon resonance. The optical properties of PCs strongly depend on a variety of parameters like lattice structure and nanoparticle shape. By manipulating these, it is possible to tune the optical properties of the crystal over a wide range [16; 18; 19; 20]. The strong tunability and enhancement of the electric field make periodic plasmonic structures appealing for light harvesting and non-linear optics, yielding, e.g., applications for nanoscale lasing [21; 19] and advanced optical spectroscopy [22; 23].
Figure 1: **Sketch of the hybrid system:** the 2D semiconductor (TMDC) is covered by a square-structured 2D PC of metal nanodisks with the example of gold. The structure is periodic and infinite in the \(xy\)-plane and embedded in a surrounding medium with constant permittivity \(\varepsilon\). |
2309.14144 | Demazure Filtrations of Tensor Product Modules and Character Formula | We study the structure of the finite-dimensional representations of
$\mathfrak{sl}_2[t]$, the current Lie algebra type of $A_1$, which are obtained
by taking tensor products of special Demazure modules. We show that these
representations admit a Demazure flag and obtain a closed formula for the
graded multiplicities of the level 2 Demazure modules in the filtration of the
tensor product of two local Weyl modules for $\mathfrak{sl}_2[t]$. Furthermore,
we derive an explicit expression for graded character of the tensor product of
a local Weyl module with an irreducible $\mathfrak{sl}_2[t]$ module. In
conjunction with the results of \cite{MR3210603}, our findings provide evidence
for the conjecture in \cite{9} that the tensor product of Demazure modules of
levels m and n respectively has a filtration by Demazure modules of level m +
n. | Divya Setia, Tanusree Khandai | 2023-09-25T13:57:18Z | http://arxiv.org/abs/2309.14144v1 | # Demazure filtrations of tensor product modules and character formula
###### Abstract.
We study the structure of the finite-dimensional representations of \(\mathfrak{sl}_{2}[t]\), the current Lie algebra of type \(A_{1}\), which are obtained by taking tensor products of special Demazure modules. We show that these representations admit a Demazure flag and obtain a closed formula for the graded multiplicities of the level 2 Demazure modules in the filtration of the tensor product of two local Weyl modules for \(\mathfrak{sl}_{2}[t]\). Furthermore, we derive an explicit expression for the graded character of the tensor product of a local Weyl module with an irreducible \(\mathfrak{sl}_{2}[t]\) module. In conjunction with the results of [7], our findings provide evidence for the conjecture in [5] that the tensor product of Demazure modules of levels \(m\) and \(n\) respectively has a filtration by Demazure modules of level \(m+n\).
## 1. Introduction
Let \(\mathfrak{g}\) be a finite-dimensional simple Lie algebra over the complex field \(\mathbb{C}\) and \(\mathfrak{g}[t]\) be its associated current algebra, which is the Lie algebra of polynomial mappings from \(\mathbb{C}\) to \(\mathfrak{g}\). Due to its connections with combinatorics, number theory and mathematical physics, the theory of finite-dimensional representations of current algebras has garnered significant attention in the past two decades. Motivated by a conjecture in [5] that suggests tensor products of Demazure modules with levels \(m\) and \(n\), respectively, have a filtration by Demazure modules of level \(m+n\), in this paper we consider the class of finite-dimensional representations, namely those of \(\mathfrak{sl}_{2}[t]\), which are derived from tensor products of certain Demazure modules of \(\mathfrak{sl}_{2}[t]\) and prove results that provide evidence in support of the conjecture.
Let \(\hat{\mathfrak{g}}\) be an affine Kac-Moody Lie algebra and \(\hat{\mathfrak{b}}\) be a standard positive Borel subalgebra of \(\hat{\mathfrak{g}}\). Given an integrable highest weight irreducible representation \(\mathcal{V}\) of \(\hat{\mathfrak{g}}\), a Demazure module associated with \(\mathcal{V}\), is defined as the \(\hat{\mathfrak{b}}\)-module generated by an extremal weight vector \(u_{0}\) of \(\mathcal{V}\). These modules are \(\mathfrak{g}\)-stable when the restriction of the weight of \(u_{0}\) to a Cartan subalgebra of \(\mathfrak{g}\) is anti-dominant and in this case the corresponding Demazure module is a module of the current algebra \(\mathfrak{g}[t]\). A Demazure module is said to be of level \(\ell\) if the central element of \(\hat{\mathfrak{g}}\) acts on \(\mathcal{V}\) by the scalar \(\ell\). A \(\hat{\mathfrak{b}}\)-module is said to admit a Demazure flag if it has a filtration by submodules whose successive
quotients are Demazure modules. While it is known that the tensor product of two Demazure modules does not in general have a Demazure flag, in [16, 20, 17] it was proved that for simply-laced Kac-Moody Lie algebras, the tensor product of a one-dimensional Demazure module with an arbitrary one admits a Demazure flag. Our study here extends this question to tensor products of level 1 Demazure modules with Demazure modules of arbitrary levels for current algebras of type \(A_{1}\).
The notion of Weyl modules for affine Kac-Moody Lie algebras was introduced in [9]. In the case when \(\mathfrak{g}=\mathfrak{sl}_{2}\), it was proven that these modules are the classical limit (\(q\to 1\)) of standard modules of the quantum affine algebras. Furthermore, it was shown that for a dominant integral weight \(\lambda\), \(W_{loc}(\lambda)\), the local Weyl module with highest weight \(\lambda\), is a finite dimensional graded \(\mathfrak{g}[t]\)-module generated by a highest weight vector of weight \(\lambda\) and every finite-dimensional graded \(\mathfrak{g}[t]\)-module with highest weight \(\lambda\) is a quotient of \(W_{loc}(\lambda)\). Subsequently it was demonstrated ([12, 10, 15, 21]) that for current algebras of type ADE, the local Weyl modules are in fact Demazure modules of level 1 and their Demazure characters coincide with non-symmetric Macdonald polynomials, specialized at \(t=0\). On the other hand, it was demonstrated in [20] that the local Weyl modules for current algebras of non-simply laced Lie algebras \(\mathfrak{g}\) have filtrations by Demazure modules whose multiplicity coincides with the multiplicity of certain Demazure modules in local Weyl modules of type \(A_{r}\) for suitable \(r\).
In [8], a family of finite-dimensional quotients of the local Weyl modules, often referred to as Chari-Venkatesh modules (in short, CV-modules), was introduced. It was shown that these modules subsume many disparate classes of finite-dimensional graded representations of current algebras. By definition, a CV module \(V(\xi)\) is associated to a family of partitions \(\xi=\{\xi_{\alpha}\}_{\alpha}\), which is indexed by the set of positive roots of \(\mathfrak{g}\). It was proven in [8, Theorem 2] that the Demazure modules in various levels can be realized as CV-modules associated to a set of rectangular or special near rectangular partitions. The interpretation of Demazure modules as CV-modules greatly simplified their defining relations and aided in the study of the structure of fusion product modules for \(\mathfrak{sl}_{2}[t]\).
Motivated by the study initiated in [5] on the structure of tensor product of two local Weyl modules for \(\mathfrak{sl}_{2}[t]\), we begin by obtaining a presentation of the representations of \(\mathfrak{sl}_{2}[t]\) that arise from taking the tensor product of a local Weyl module with a CV module \(V(\xi)\). Specializing the partition \(\xi\), we then explore the tensor product of local
Weyl modules with irreducible \(\mathfrak{sl}_{2}[t]\)-modules and local Weyl modules for \(\mathfrak{sl}_{2}[t]\). In each of these cases, we observe that the corresponding tensor product possesses a filtration by CV-modules. Using [7, Theorem 3.3], we thus conclude that each such module has a filtration by Demazure modules at appropriate levels.
For a given positive integer \(m\), let \(W_{loc}(m)\) denote the local Weyl module of \(\mathfrak{sl}_{2}[t]\) with the highest weight \(m\). Through explicit construction, we demonstrate that when \(\xi=(n)\), the module \(W_{loc}(m)\otimes V(\xi)\) admits a filtration by CV-modules with hook-type partitions. Utilizing this filtration, we derive expressions for the outer multiplicities in the tensor product module \(W_{loc}(m\omega)\otimes ev_{0}V(n\omega)\). As a consequence we are able to express the product of a Schur polynomial with a specialized Macdonald polynomial in terms of Schur polynomials.
Finally we consider the tensor product of a local Weyl module with certain special Demazure modules. In [18], it had been shown that the truncated local Weyl module of \(\mathfrak{sl}_{2}[t]\) of highest weight \(r\omega\) can be realized as a CV-module. Interestingly, we observe that the tensor product of two local Weyl modules has a filtration by truncated local Weyl modules of suitable weights. We determine the graded character of such truncated local Weyl modules in terms of level 2 Demazure modules and give the graded multiplicities of level 2 Demazure modules in \(W_{loc}(m)\otimes W_{loc}(n)\).
The character formulas for certain tensor product modules had been obtained in [5]. Using a uniform approach, we give a direct proof for the character formula of the modules which we consider. Then, using the Chari-Pressley-Loktev basis ([9, Section 6], [10, Theorem 2.1.3]) and Chari-Venkatesh basis ([8, Theorem 5]) for local Weyl modules for \(\mathfrak{sl}_{2}[t]\), the presentation of the tensor product modules we obtained and the character formulas obtained, we show that in the cases under consideration, the tensor product modules have a Demazure filtration of appropriate level.
The paper is structured as follows: In Section 2, we set the notation and recall the basic definitions and results essential for our paper. Section 3 begins with a recap of the definition of CV modules. We then proceed to give a presentation via generators and relations for the tensor product modules \(W_{loc}(m)\otimes V(\xi)\). Moving on to Section 4, we systematically construct a decreasing chain of submodules of \(W_{loc}(m\omega)\otimes ev_{0}V(n\omega)\) such that the successive quotients in this chain are isomorphic to modules \(V(\xi)\), where \(\xi\) is a partition of hook shape. We determine the graded character of these modules. In Section 5, we prove the existence of a Demazure flag for modules arising from the tensor product of a local Weyl module with Demazure modules of level 1 and determine the graded character formula for the tensor product of two local Weyl modules in terms of level 2 Demazure modules.
## 2. Preliminaries
In this section we set the notation for the paper and recall the definitions that will be used throughout.
Throughout this paper, \(\mathbb{C}\) will denote the field of complex numbers, \(\mathbb{Z}\) (resp. \(\mathbb{Z}_{+}\)), the set of integers (resp. non-negative integers) and \(\mathbb{C}[t]\), the polynomial algebra in an indeterminate \(t\). For \(n,r\in\mathbb{Z}_{+}\), set
\[[n]_{q}=\frac{1-q^{n}}{1-q},\quad\left[\begin{matrix}n\\ r\end{matrix}\right]_{q}=\frac{[n]_{q}[n-1]_{q}\ldots[n-r+1]_{q}}{[r]_{q}[r-1 ]_{q}\ldots[1]_{q}}\]
\[\left[\begin{matrix}n\\ 0\end{matrix}\right]_{q}=1,\quad\left[\begin{matrix}n\\ 1\end{matrix}\right]_{q}=\frac{1-q^{n}}{1-q},\quad\left[\begin{matrix}n\\ r\end{matrix}\right]_{q}=0,\quad\text{ unless }\{n,r,n-r\}\subset\mathbb{Z}_{+}.\]
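For instance, a direct computation from these definitions gives
\[\left[\begin{matrix}4\\ 2\end{matrix}\right]_{q}=\frac{[4]_{q}[3]_{q}}{[2]_{q}[1]_{q}}=\frac{(1+q+q^{2}+q^{3})(1+q+q^{2})}{1+q}=1+q+2q^{2}+q^{3}+q^{4},\]
which specializes to the ordinary binomial coefficient \(\binom{4}{2}=6\) at \(q=1\).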
Given any complex Lie algebra \(\mathfrak{a}\), denote by \(U(\mathfrak{a})\) the corresponding universal enveloping algebra and by \(\mathfrak{a}[t]\) the associated current Lie-algebra with underlying vector space \(\mathfrak{a}\otimes\mathbb{C}[t]\) and Lie-bracket :
\[[x\otimes f,y\otimes g]=[x,y]\otimes fg\quad\forall x,y\in\mathfrak{a}\text{ and }f,g\in\mathbb{C}[t]\]
The natural grading on \(\mathbb{C}[t]\) defines a \(\mathbb{Z}_{+}\)-grading on \(\mathfrak{a}[t]\) and \(U(\mathfrak{a}[t])\). With respect to it, a monomial \((x_{1}\otimes t^{a_{1}})(x_{2}\otimes t^{a_{2}})\ldots(x_{n}\otimes t^{a_{n}})\) in \(U(\mathfrak{a}[t])\) has grade \(a_{1}+a_{2}+\cdots+a_{n}\).
With respect to an ordered basis of \(\mathfrak{a}[t]\), the PBW filtration on \(U(\mathfrak{a}[t])\) is given as follows:
\[U(\mathfrak{a}[t])^{\leq r}=\{a_{1}^{s_{1}}a_{2}^{s_{2}}\cdots a_{l}^{s_{l}}: 0\leq\sum_{j=1}^{l}s_{j}\leq r\}\]
We say length of \(X\in U(\mathfrak{a}[t])\) is \(r\) if \(X\in U(\mathfrak{a}[t])^{\leq r}\backslash U(\mathfrak{a}[t])^{\leq r-1}\).
Let \(\mathfrak{sl}_{2}\) be the Lie algebra of two by two trace zero matrices with entries in \(\mathbb{C}\) and \(\mathfrak{sl}_{2}[t]\) the associated current algebra. Let \(\{x,h,y\}\) be the standard basis, \(\alpha\) (resp. \(\omega\)) the unique simple root (resp. fundamental weight) of \(\mathfrak{sl}_{2}\).
For \(n\in\mathbb{Z}_{+}\), let \(V(n)\) be the \(n+1\)-dimensional irreducible \(\mathfrak{sl}_{2}\)-module generated by a highest weight vector \(v_{n}\) with the following defining relations:
\[xv_{n}=0,\quad hv_{n}=nv_{n},\quad y^{n+1}v_{n}=0.\]
Any finite-dimensional \(\mathfrak{sl}_{2}\)-module \(V\) can be decomposed as:
\[V=\bigoplus_{k\in\mathbb{Z}}V_{k},\qquad V_{k}=\{v\in V\mid hv=kv\}.\]
The formal character of a finite-dimensional \(\mathfrak{sl}_{2}\)-module \(V\), denoted as \(ch_{\mathfrak{sl}_{2}}V\) is an element of \(\mathbb{Z}[e(\omega)]\) such that
\[V=\bigoplus_{k\in\mathbb{Z}_{+}}V(k)^{\oplus m_{k}}\qquad\Longrightarrow\qquad ch _{\mathfrak{sl}_{2}}\ V=\sum_{k\in\mathbb{Z}_{+}}m_{k}\ ch_{\mathfrak{sl}_{2}} \ V(k),\]
where \(ch_{\mathfrak{sl}_{2}}\ V(k)=\sum\limits_{i=0}^{k}e((k-2i)\omega)\) for \(k\in\mathbb{Z}_{+}\).
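For instance, the Clebsch-Gordan decomposition \(V(1)\otimes V(1)\cong V(2)\oplus V(0)\) is reflected on the level of formal characters as
\[ch_{\mathfrak{sl}_{2}}\,V(1)\cdot ch_{\mathfrak{sl}_{2}}\,V(1)=(e(\omega)+e(-\omega))^{2}=e(2\omega)+2e(0)+e(-2\omega)=ch_{\mathfrak{sl}_{2}}\,V(2)+ch_{\mathfrak{sl}_{2}}\,V(0).\]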
A graded \(\mathfrak{sl}_{2}[t]\)-module \(V\) is a \(\mathbb{Z}\)-graded vector space \(V=\bigoplus\limits_{k\in\mathbb{Z}}V[k]\), which admits a \(\mathfrak{sl}_{2}[t]\) action in the following way:
\[(X\otimes t^{r})V[k]\subseteq V[k+r],\quad r\in\mathbb{Z}_{+},k\in\mathbb{Z}.\]
For each \(k\), \(V[k]\) is a \(\mathfrak{sl}_{2}\)-submodule of \(V\). If \(V\) and \(W\) are graded \(\mathfrak{sl}_{2}[t]\)-modules, then so is \(V\otimes W\), with the following \(\mathfrak{sl}_{2}[t]\)-action:
\[(X\otimes t^{r})(v\otimes w)=(X\otimes t^{r})v\otimes w+v\otimes(X\otimes t^ {r})w\]
Thus in this case we have, \((V\otimes W)[k]=\bigoplus\limits_{j\in\mathbb{Z}_{+}}V[j]\otimes W[k-j]\), where \(W[j]=0\) if \(j<0\).
Given a \(\mathfrak{sl}_{2}\)-module \(W\), we define a \(\mathfrak{sl}_{2}[t]\)-structure on \(W\) as follows:
\[(X\otimes t^{r})w=\delta_{r,0}Xw,\]
We denote such a module by \(ev_{0}(W)\). Clearly it is a \(\mathbb{Z}\)-graded \(\mathfrak{sl}_{2}[t]\)-module with \(ev_{0}(W)[k]=\delta_{k,0}W\). For \(r\in\mathbb{Z}\), let \(\tau_{r}\) be the shift operator given by \((\tau_{r}W)[k]=W[k-r].\) Given \((n,r)\in\mathbb{Z}_{+}^{2}\), we denote the irreducible \(\mathfrak{sl}_{2}[t]\)-module \(\tau_{r}\ ev_{0}V(n)\) by \(V(n,r)\) and, in particular, denote \(ev_{0}V(n)\) by \(V(n,0)\). It was shown in [6] that any irreducible, graded, finite-dimensional module is isomorphic to \(V(n,r)\) for a pair \((n,r)\in\mathbb{Z}_{+}^{2}\).
The graded character of a graded \(\mathfrak{sl}_{2}[t]\)-module \(V=\bigoplus\limits_{k\in\mathbb{Z}_{\geq 0}}V[k]\) is given by:
\[ch_{gr}\ V=\sum_{k\geq 0}ch_{\mathfrak{sl}_{2}}\ V[k]\ q^{k}\in\mathbb{Z}[e( \omega)][q],\qquad\text{ where $q$ is an indeterminate}.\]
For \(\lambda\in\mathbb{Z}_{+}\), the local Weyl module \(W_{loc}(\lambda)\) is a finite-dimensional, \(\mathbb{Z}_{+}\)-graded \(\mathfrak{sl}_{2}[t]\)-module generated by an element \(w_{\lambda}\) with the following defining relations:
\[(x\otimes t^{r})w_{\lambda}=0,\quad(h\otimes t^{r})w_{\lambda}=\lambda\delta_ {r,0}w_{\lambda},\quad(y\otimes 1)^{\lambda+1}w_{\lambda}=0,\qquad\quad\forall r \in\mathbb{Z}_{\geq 0}.\]
By definition, \(W_{loc}(\lambda)\) is a highest weight representation of \(\mathfrak{sl}_{2}[t]\) with highest weight \(\lambda\omega\) and any finite-dimensional integrable \(\mathfrak{sl}_{2}[t]\)-module with highest weight \(\lambda\omega\) is a quotient of \(W_{loc}(\lambda)\). In particular, \(ev_{0}V(\lambda)\) is the unique irreducible quotient of \(W_{loc}(\lambda)\). It was proven in [7] that the graded character of \(W_{loc}(m)\) is given as follows:
\[ch_{gr}\ W_{loc}(m)=\sum_{l=0}^{m}(\genfrac{[}{]}{0.0pt}{}{m}{l}_{q}-\genfrac{[}{]}{ 0.0pt}{}{m}{l-1}_{q})\ ch_{gr}\ ev_{0}V(m-2l).\]
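For instance, for \(m=2\), reading the sum over those \(l\) with \(m-2l\geq 0\), the formula gives
\[ch_{gr}\ W_{loc}(2)=ch_{gr}\ ev_{0}V(2)+\left(\left[\begin{matrix}2\\ 1\end{matrix}\right]_{q}-\left[\begin{matrix}2\\ 0\end{matrix}\right]_{q}\right)ch_{gr}\ ev_{0}V(0)=ch_{gr}\ ev_{0}V(2)+q\ ch_{gr}\ ev_{0}V(0),\]
which is consistent with \(\dim W_{loc}(2)=2^{2}=4\).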
## 3. Presentation of \(W_{loc}(m)\otimes V(\mathbf{n})\)
In [8], a family of graded representations of the current Lie algebras, which we shall refer to as CV-modules, was introduced. It was shown that local Weyl modules, Demazure modules of various levels, truncated Weyl modules and fusion product modules for \(\mathfrak{sl}_{2}[t]\) can be realized as CV-modules [8, 18]. In this section we recall the definition and properties of CV-modules and using them give a presentation of the \(\mathfrak{sl}_{2}[t]\)-modules that are obtained by taking tensor products of local Weyl modules with certain Demazure modules.
We first recall the definition of a CV-module \(V(\xi)\) associated to a partition \(\xi\).
**Definition 3.1.1**.: Given a partition \(\mathbf{n}=(n_{1}\geq n_{2}\geq\cdots\geq n_{k}>0)\) of \(n\in\mathbb{Z}_{+}\), let \(V(\mathbf{n})\) be a cyclic \(\mathfrak{sl}_{2}[t]\)-module generated by an element \(v_{\mathbf{n}}\) which satisfies the following relations:
\[(x\otimes t^{r})v_{\mathbf{n}}=0,\quad(h\otimes t^{r})v_{\mathbf{n}}=n\delta_ {r,0}v_{\mathbf{n}},\quad(y\otimes 1)^{n+1}v_{\mathbf{n}}=0 \tag{3.1}\]
\[(x\otimes t)^{s}(y\otimes 1)^{r+s}v_{\mathbf{n}}=0\quad\forall\,r+s\geq 1+rl+ \sum_{j\geq l+1}n_{j},\ \text{for some}\ l\in\mathbb{N} \tag{3.2}\]
Here, \(r\in\mathbb{Z}_{+}\) in (3.1) and \(r,s\in\mathbb{N}\) in (3.2).
Defining the grade of \(v_{\mathbf{n}}\) to be zero, we see that \(V(\mathbf{n})\) is a finite-dimensional graded \(\mathfrak{sl}_{2}[t]\)-quotient of \(W_{loc}(n)\). The CV-modules are defined in [8] for an arbitrary current algebra, but for our purposes, we need the definition only for \(\mathfrak{sl}_{2}[t]\).
Certain special cases of \(V(\mathbf{n})\) discussed in [8] are as follows:
\[\begin{split} V(\mathbf{n})&\cong W_{loc}(n),\qquad \quad\text{for}\ \ \mathbf{n}=(1^{n}),\\ V(\mathbf{n})&\cong ev_{0}(V(n))\qquad\quad\text{ for}\ \mathbf{n}=(n),\\ V(\mathbf{n})&\cong D(l,lr+n_{0})\quad\text{for}\ \mathbf{n}=(l^{r},n_{0}),\end{split} \tag{3.3}\]
where \(D(l,lr+n_{0})\) denotes the Demazure module of level \(l\) with highest weight \((lr+n_{0})\omega=n\omega\).
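As a concrete instance of the last isomorphism in (3.3): taking \(\mathbf{n}=(2,1)=(2^{1},1)\), so that \(l=2\), \(r=1\) and \(n_{0}=1\), gives
\[V((2,1))\cong D(2,3),\]
the level \(2\) Demazure module of highest weight \(3\omega\); by Lemma 3.4.1 below, its dimension is \((2+1)(1+1)=6\).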
Define a power series in the indeterminate \(u\) as follows:
\[H(u)=\exp\left(-\sum_{r=1}^{\infty}\frac{h\otimes t^{r}}{r}u^{r}\right)\]
Let \(P(u)_{k}\) denote the coefficient of \(u^{k}\) in \(H(u)\). For \(s,r\in\mathbb{Z}_{+}\), \(x\in\mathfrak{g}\), define elements
\(x(r,s)\in U(\mathfrak{g}[t])\) as follows:
\[x(r,s)=\sum_{(b_{p})_{p\geq 0}}(x\otimes 1)^{(b_{0})}(x\otimes t^{1})^{(b_{1})} \cdots(x\otimes t^{s})^{(b_{s})}\]
with \(p\in\mathbb{Z},b_{p}\in\mathbb{Z}_{+}\) such that \(\sum\limits_{p\geq 0}b_{p}=r\), \(\sum\limits_{p\geq 1}pb_{p}=s\) and for any \(X\in\mathfrak{g}\), \(X^{(p)}=\dfrac{X^{p}}{p!}\). The following result was proved in [13] and reformulated in its present form in [9].
**Lemma 3.2.1**.: _Given \(s\in N,r\in\mathbb{Z}_{+}\), we have_
\[(x\otimes t)^{(s)}(y\otimes 1)^{(r+s)}-(-1)^{s}\big{(}\sum\limits_{k\geq 0}\,y(r,s-k)P(u)_{k}\big{)}\in U(\mathfrak{g}[t])\mathfrak{n}^{+}[t]. \tag{3.4}\]
Using Lemma 3.2.1 and (3.1), relation (3.2) in definition of \(V(\mathbf{n})\) can be replaced by the following:
\[y(r,s)v_{\mathbf{n}}=0,\text{ if }r+s\geq 1+rl+\sum_{j\geq l+1}n_{j}.\]
Using the presenation of \(W_{loc}(n)\) as a \(CV\)-module, we now give an alternate presentation of \(W_{loc}(n)\) as a lowest weight module. In what follows, we use the notation : \(a\otimes t^{r}:=a_{r}\), for all \(a\in\mathfrak{sl}_{2},\ r\in\mathbb{Z}_{+}\).
**Lemma 3.3.1**.: _Given \(n\in\mathbb{Z}_{+}\), let \(W(-n)\) be the \(\mathfrak{sl}_{2}[t]\)-module generated by a vector \(w_{-n}\) which satisfies the following relations:_
\[y_{r}w_{-n} =0, \forall\,r\geq 0,\] \[h_{r}w_{-n} =\delta_{0,r}(-n)w_{-n}, \forall\,r\geq 0,\] \[x(r,s)w_{-n} =0, \text{if }r+s\geq 1+rl+n-l.\]
_Then, \(W(-n)\) is isomorphic to \(W_{loc}(n)\)._
Proof.: Let \(W_{loc}(n)\) be generated by a vector \(w_{n}\) which satisfies the relations (3.1) and (3.2) for \(\mathbf{n}=(1^{n}).\) Then \(y_{0}^{n}w_{n}\) is a non-zero vector in \(W_{loc}(n)\) such that
\[y_{r}y_{0}^{n}w_{n}=0,\ \ \ \forall\,r\geq 0,\ \ \ \ \ \ y(n,r).w_{n}=0,\ \ \ \forall\,r\geq 1.\]
Further, as
\[y(n,1)w_{n}=Ah_{1}y_{0}^{n}w_{n},\] \[y(n,r)w_{n}=A_{r}h_{r}y_{0}^{n}w_{n}+A_{r-1}h_{r-1}y(n,1)w_{n}+\cdots+A_{1}h_{1}y(n,r-1)w_{n},\,\forall\,r\geq 2\]
with \(A,A_{1},\cdots,A_{r}\in\mathbb{Q}\), using induction and the above relations we see that
\[h_{r}y_{0}^{n}w_{n}=\delta_{r,0}(-n)w_{n}.\]
Now by applying to (3.4) the \(\mathfrak{sl}_{2}[t]\)-isomorphism that maps \(y_{s}\to x_{s}\), \(x_{s}\to y_{s}\) and \(h_{s}\to-h_{s}\) for \(s\geq 0\), using the above relations we get,
\[(y\otimes t)^{(s)}(x\otimes 1)^{(r+s)}y_{0}^{n}w_{\mathbf{n}}=(-1)^{s}x(r,s)(-n )y_{0}^{n}w_{\mathbf{n}} \tag{3.5}\]
As, \(x_{0}^{(r+s)}.y_{0}^{n}v_{\mathbf{n}}=0\), for \(r+s\geq n+1\), it follows that,
\[x(r,s)y_{0}^{n}w_{\mathbf{n}}=0,\qquad\text{ when}\quad r+s\geq 1+rk+n-k.\]
Thus, the vector \(y_{0}^{n}w_{n}\in W_{loc}(n)\) satisfies the relations
\[(y\otimes t^{r})y_{0}^{n}w_{n} =0, \forall\,r\geq 0,\] \[(h\otimes t^{r})y_{0}^{n}w_{n} =\delta_{0,r}(-n)y_{0}^{n}w_{n}, \forall\,r\geq 0,\] \[x(r,s)y_{0}^{n}w_{n} =0, \text{ if }r+s\geq 1+rl+n-l.\]
Therefore by PBW theorem, \(W_{loc}(n)=\mathbf{U}(\mathfrak{n}^{+}[t])y_{0}^{n}w_{n}\).
Let \(\phi:W(-n)\to W_{loc}(n)\) be a map such that \(\phi(w_{-n})=y_{0}^{n}w_{n}\). Then it is clear from above that \(\phi\) defines a surjective \(\mathfrak{sl}_{2}[t]\)-homomorphism from \(W(-n)\) to \(W_{loc}(n)\). Since by definition, \(W(-n)=U(\mathfrak{n}^{+}[t])w_{-n}\), it follows from above that \(\phi\) is in fact an isomorphism.
The following results on the \(\mathfrak{sl}_{2}[t]\)-modules \(W_{loc}(m)\) and \(V(\mathbf{n})\) were proven in [9] and [8]. We record them for future reference.
**Lemma 3.4.1**.: _Let \(m,n\in\mathbb{N}\)._
1. _For_ \(m\in\mathbb{N}\)_,_ \(\dim W_{loc}(m\omega)=2^{m}\)_._
2. _Given a partition_ \(\mathbf{n}=(n_{1}\geq n_{2}\geq\cdots\geq n_{k})\) _of_ \(n\)_, let_ \[\mathbf{n}^{+}=(n_{1},n_{2},\cdots,n_{k-1}+1,n_{k}-1);\quad\mathbf{n}^{-}=(n_{1}\geq n_{2}\geq\cdots\geq n_{k-1}-n_{k}).\] _For_ \(k>1\)_, there exists a short exact sequence of_ \(\mathfrak{sl}_{2}[t]\)_-modules,_ \[0\to\tau_{(k-1)n_{k}}V(\mathbf{n}^{-})\stackrel{{\phi_{-}}}{{\longrightarrow}}V(\mathbf{n})\stackrel{{{}_{+}\phi}}{{\longrightarrow}}V(\mathbf{n}^{+})\to 0.\] _Furthermore,_ \(\dim V(\mathbf{n})=\prod\limits_{i=1}^{k}(n_{i}+1)\) _and_ \[\mathbb{B}(\mathbf{n})=\{v_{\mathbf{n}},(y_{0})^{i_{1}}(y_{1})^{i_{2}}\cdots(y_{k-1})^{i_{k}}v_{\mathbf{n}}:(i_{1},i_{2},\ldots,i_{k})\in J(\mathbf{n})\}\] _is a basis of_ \(V(\mathbf{n})\)_, where_ \[J(\mathbf{n})=\{(i_{1},i_{2},\ldots,i_{k})\in\mathbb{Z}_{\geq 0}^{k}:(ji_{r-1}+(j+1)i_{r})+2\sum\limits_{l=r+1}^{n}i_{l}\leq\sum\limits_{p=r-j}^{k}n_{p},2\leq r\leq k+1,1\leq j\leq r-1\}.\]
**Lemma 3.4.2**.: _Given \(m,n\in\mathbb{N}\), let \(\mathbf{n}=(n_{1}\geq\cdots\geq n_{k})\) be a partition of \(n\). Let \(W_{loc}(m)\) and \(V(\mathbf{n})\) be the graded \(\mathfrak{sl}_{2}[t]\)-modules with generators \(w_{m}\) and \(v_{\mathbf{n}}\), as defined
above. Then the \(\mathfrak{sl}_{2}[t]\)-module \(W_{loc}(m\omega)\otimes V(\mathbf{n})\) is a cyclic module generated by \(y_{0}^{m}w_{m}\otimes v_{\mathbf{n}}\) and_
\[\dim W(m\omega)\otimes V(\mathbf{n})=2^{m}\prod_{i=1}^{k}(n_{i}+1). \tag{3.6}\]
_._
Proof.: Given non-zero vectors \(v_{\mathbf{n}}\) and \(w_{m}\) satisfying the given conditions, let \(S\) be the submodule of \(W_{loc}(m)\otimes V(\mathbf{n})\) generated by the vector \(y_{0}^{m}w_{m}\otimes v_{\mathbf{n}}.\) Since \(\mathfrak{n}^{-}.y_{0}^{m}w_{m}=0\) and \(V(\mathbf{n})=U(\mathfrak{n}^{-}[t])v_{\mathbf{n}},\)
\[S\supseteq U(\mathfrak{n}^{-}[t])y_{0}^{m}w_{m}\otimes v_{\mathbf{n}}=y_{0}^{ m}w_{m}\otimes U(\mathfrak{n}^{-}[t])v_{\mathbf{n}}=y_{0}^{m}w_{m}\otimes V( \mathbf{n}).\]
On the other hand, by Lemma 3.3.1, \(W_{loc}(m)=U(\mathfrak{n}^{+}[t])y_{0}^{m}w_{m}\). Hence, for \(r\in\mathbb{N}\) and \(v\in V(\mathbf{n})\),
\[x_{r}.(y_{0}^{m}w_{m}\otimes v)=x_{r}y_{0}^{m}w_{m}\otimes v\mod y_{0}^{m}w_{m}\otimes V(\mathbf{n}),\]
implying \(x_{r}y_{0}^{m}w_{m}\otimes V(\mathbf{n})\subseteq S\). Now using an obvious induction argument on the length of \(X\in U(\mathfrak{n}^{+}[t])\) for all basis elements \(Xy_{0}^{m}w_{m}\) of \(W_{loc}(m)\), we see that \(W_{loc}(m)\otimes V(\mathbf{n})\subseteq S\). This shows that \(W_{loc}(m)\otimes V(\mathbf{n})\) is generated by the element \(y_{0}^{m}w_{m}\otimes v_{\mathbf{n}}\).
Using Lemma 3.4.1, it follows from Section 2.5 that \(W_{loc}(m)\otimes V(\mathbf{n})\) is a finite-dimensional, graded \(\mathfrak{sl}_{2}[t]\)-module such that (3.6) holds.
Let \(\mathfrak{b}^{\pm}\) be the subalgebra \(\mathfrak{h}+\mathfrak{n}^{\pm}\) of \(\mathfrak{g}\). Then \(\mathfrak{g}=\mathfrak{b}^{+}\oplus\mathfrak{n}^{-}=\mathfrak{b}^{-}\oplus \mathfrak{n}^{+}\) and by the Poincare-Birkhoff-Witt theorem,
\[U(\mathfrak{g}[t])=U(\mathfrak{n}^{-}[t])\oplus U(\mathfrak{g}[t])\mathfrak{b }^{+}[t]=U(\mathfrak{n}^{+}[t])\oplus U(\mathfrak{g}[t])\mathfrak{b}^{-}[t]. \tag{3.7}\]
Let \(\mathbf{pr}^{\pm}:U(\mathfrak{g}[t])\to U(\mathfrak{n}^{\pm}[t])\) be the projections with respect to decompositions (3.7).
**Lemma 3.5.1**.: _Let \(\mathbf{n}=(n_{1}\geq n_{2}\geq\cdots\geq n_{k})\) be a partition of a natural number \(n\) and let \(V(\mathbf{n})\) be the associated \(CV\)-module generated by a highest weight vector \(v_{\mathbf{n}}\). Let \(J_{\mathbf{n}}\) be the ideal of \(U(\mathfrak{n}^{-}[t])\) generated by the elements_
\[\{\mathbf{pr}^{-}((x_{1})^{s}(y_{0})^{r+s}):s+r\geq 1+rl+\sum_{j\geq l+1}n_{j},\text{ for some }l\in\mathbb{N}\}. \tag{3.8}\]
_The map \(U(\mathfrak{n}^{-}[t])\to V(\mathbf{n})\) which sends \(g\) to \(gv_{\mathbf{n}}\) induces a vector space isomorphism_
\[U(\mathfrak{n}^{-}[t])/J_{\mathbf{n}}\cong V(\mathbf{n}). \tag{3.9}\]
Proof.: Given the partition \(\mathbf{n}\) of \(n\), for all \(Y\in U(\mathfrak{n}^{-}[t]\oplus\mathfrak{b}^{+}t[t])\) we have,
\[Yv_{\mathbf{n}}=\mathbf{pr}^{-}(Y)v_{\mathbf{n}}.\]
Thus by definition of the CV-module \(V(\mathbf{n})\), the kernel of the map \(U(\mathfrak{n}^{-}[t])\to V(\mathbf{n})\) which sends \(g\) to \(gv_{\mathbf{n}}\) is generated by the set (3.8).
Using the defining relations of the \(CV\)-module \(V(\mathbf{n})\) we have
\[y(r,s)v_{\mathbf{n}}=0,\quad\text{ for }r+s\geq 1+r(k-1)+n_{k}.\]
Hence \(y_{k-1}^{i_{k}}v_{\mathbf{n}}=0\) whenever \(i_{k}>n_{k}\).
Furthermore, as \(\mathbf{n}^{+}\) is the partition \((n_{1}^{+}\geq\cdots\geq n_{k-1}^{+}\geq n_{k}-1)\) with
\[n_{j_{0}}^{+}=n_{k-1}+1,\quad\text{ if }j_{0}=\min\{j:n_{j}=n_{k}-1,1\leq j \leq k-1\}\]
and \(n_{j}^{+}=n_{j}\) for \(1\leq j\leq k-1\), \(j\neq j_{0}\), using the short exact sequence in Lemma 3.4.1(b) repeatedly, we see that for \(0\leq i_{k}<n_{k}\), if \(\mathbf{n}^{+}(i_{k})=(\bar{n}_{1}\geq\cdots\geq\bar{n}_{k-1}\geq i_{k})\) is a partition of \(n\) such that \({}_{+}\phi^{n_{k}-i_{k}}(V(\mathbf{n}))=V(\mathbf{n}^{+}(i_{k}))\), then from the definition of the map \({}_{+}\phi\), it follows that
\[\bar{n}_{k-1}\leq\min\{n_{k-1}+n_{k}-i_{k},n_{j}+\frac{n_{k}-i_{k}-\sum\limits_ {s=1}^{k-j}(s)(n_{k-s-1}-n_{k-s})}{j},1\leq j\leq k-2\}.\]
Therefore, using the short exact sequence for \(V(\mathbf{n}^{+}(i_{k}))\), we see that
\[y_{k-2}^{i_{k-1}}y_{k-1}^{i_{k}}v_{\mathbf{n}}=0, \text{ if }i_{k-1}>\bar{n}_{k-1}-i_{k},\] \[i.e., y_{k-2}^{i_{k-1}}y_{k-1}^{i_{k}}v_{\mathbf{n}}=0, \text{ if }ji_{k-1}+(j+1)i_{k}>\sum\limits_{r=k-j}^{k}n_{r}, \text{ for some }j\leq k-1.\]
As \(\sum\limits_{j=1}^{k-1}\bar{n}_{j}=\sum\limits_{j=1}^{k}n_{j}-i_{k}\), using the arguments recursively, we see that for a \(k\)-tuple \((s_{1},\cdots,s_{k})\),
\[y_{r-1}^{s_{r-1}}y_{r}^{s_{r}}\cdots y_{k-1}^{s_{k}}v_{\mathbf{n}}=0\quad\text { if }(ji_{r-1}+(j+1)i_{r})+2\sum\limits_{l=r+1}^{n}i_{l}>\sum\limits_{p=r-j}^{k}n_{ p},\]
for some \(0<r-1\leq k-1\) and \(1\leq j\leq r-1\). This shows that all monomials of the form \(y_{0}^{s_{1}}\cdots y_{r-1}^{s_{r-1}}y_{r}^{s_{r}}\cdots y_{k-1}^{s_{k}}\in J_ {\mathbf{n}}\), whenever \((ji_{r-1}+(j+1)i_{r})+2\sum\limits_{l=r+1}^{n}i_{l}>\sum\limits_{p=r-j}^{k}n_ {p}\) for some \(0<r-1\leq k-1\) and \(1\leq j\leq r-1\). Consequently, \(U(\mathfrak{n}^{-}[t])/J_{\mathbf{n}}\) is spanned by cosets of the elements \((y_{0})^{i_{1}}(y_{1})^{i_{2}}\cdots(y_{k-1})^{i_{k}}\) with \((i_{1},i_{2},\cdots,i_{k})\in J(\mathbf{n})\cup(0,\cdots,0)\).
On the other hand by Lemma 3.4.1, \(\{v_{\mathbf{n}},(y_{0})^{i_{1}}(y_{1})^{i_{2}}\cdots(y_{k-1})^{i_{k}}v_{ \mathbf{n}}:(i_{1},i_{2},\ldots,i_{k})\in J(\mathbf{n})\}\) is a basis of \(V(\mathbf{n})\). Thus it follows that the map \(U(\mathfrak{n}^{-}[t])\to V(\mathbf{n})\) gives a vector space isomorphism between \(U(\mathfrak{n}^{-}[t])/J_{\mathbf{n}}\) and \(V(\mathbf{n})\).
**Remark 3.5.2**.: Similarly, it can be shown that if \(I_{m}\) is the ideal of \(U(\mathfrak{n}^{+}[t])\) generated by the set \(\{\mathbf{pr}^{+}((y_{1})^{k}(x_{0})^{\ell}):\ell>m\}\), then the map \(U(\mathfrak{n}^{+}[t])\to W_{loc}(m)\) given by \(g\mapsto gy_{0}^{m}w_{m}\) gives a vector space isomorphism between \(U(\mathfrak{n}^{+}[t])/I_{m}\) and \(W_{loc}(m)\). In this case \(U(\mathfrak{n}^{+}[t])/I_{m}\) is spanned by the set
\[\{(x_{0})^{i_{1}}(x_{1})^{i_{2}}\cdots(x_{m-1})^{i_{m}}:(i_{1},i_{2},\cdots,i_{m})\in J(1^{m})\cup(0,\cdots,0)\}.\]
### The module \(W(m,\mathbf{n})\)
Given \(n,m\in\mathbb{N}\) and a partition \(\mathbf{n}=(n_{1}\geq n_{2}\geq\cdots\geq n_{k})\) of \(n\), define \(W(m,\mathbf{n})\) to be the \(\mathfrak{sl}_{2}[t]\)-module generated by an element \(w_{\mathbf{n}}^{m}\) which satisfies the following relations:
\[(h\otimes t^{r})w_{\mathbf{n}}^{m} =(n-m)\delta_{r,0}w_{\mathbf{n}}^{m} \tag{3.10}\] \[x(r,s)w_{\mathbf{n}}^{m} =0,\text{ if }r+s\geq 1+rl+m-l\text{ for some }l\in\mathbb{N}\] (3.11) \[y(r,s)w_{\mathbf{n}}^{m} =0,\text{ if }r+s\geq 1+rl+\sum_{j\geq l+1}n_{j}\text{ for some }l\in\mathbb{N} \tag{3.12}\]
The following is the main result of this section :
**Theorem 3.6.1**.: _Given \(n,m\in\mathbb{N}\) and \(\mathbf{n}=(n_{1}\geq n_{2}\geq\cdots\geq n_{k})\), a partition of \(n\), the \(\mathfrak{sl}_{2}[t]\)-module \(W(m,\mathbf{n})\) is isomorphic to \(W_{loc}(m)\otimes V(\mathbf{n})\)._
Observe that Theorem 3.6.1 gives a presentation of the \(\mathfrak{sl}_{2}[t]\)-module \(W_{loc}(m)\otimes V(\mathbf{n})\). The following lemmas help prove the theorem.
**Lemma 3.6.2**.: _There exists a surjective, \(\mathfrak{sl}_{2}[t]\)-module homomorphism from \(W(m,\mathbf{n})\) to \(W_{loc}(m)\otimes V(\mathbf{n})\)._
Proof.: By Lemma 3.4.2, we know that if \(w_{m}\) and \(v_{\mathbf{n}}\) are the highest weight generators of \(W_{loc}(m)\) and \(V(\mathbf{n})\) respectively, then \(y_{0}^{m}w_{m}\otimes v_{\mathbf{n}}\) generates \(W_{loc}(m)\otimes V(\mathbf{n})\).
Define a map \(\phi:W(m,\mathbf{n})\to W_{loc}(m)\otimes V(\mathbf{n})\), such that, \(\phi(w_{\mathbf{n}}^{m})=y_{0}^{m}w_{m}\otimes v_{\mathbf{n}}\). Since \(y_{0}^{m}w_{m}\otimes v_{\mathbf{n}}\) generates \(W_{loc}(m)\otimes V(\mathbf{n})\) and for \(r\geq 0\), using Lemma 3.3.1 we have,
\[(h\otimes 1)(y_{0}^{m}w_{m}\otimes v_{\mathbf{n}}) =h_{0}y_{0}^{m}w_{m}\otimes v_{\mathbf{n}}+y_{0}^{m}w_{m}\otimes h_{0}v_{\mathbf{n}}=(n-m)(y_{0}^{m}w_{m}\otimes v_{\mathbf{n}})\] \[x(r,s)(y_{0}^{m}w_{m}\otimes v_{\mathbf{n}}) =x(r,s)y_{0}^{m}w_{m}\otimes v_{\mathbf{n}}=0\quad\text{ if }r+s\geq 1+rl+m-l\] \[y(r,s)(y_{0}^{m}w_{m}\otimes v_{\mathbf{n}}) =y_{0}^{m}w_{m}\otimes y(r,s)v_{\mathbf{n}}=0\quad\text{ if }r+s\geq 1+rl+\sum_{j\geq l+1}n_{j}\quad\text{(by (3.2))}\]

it follows that \(\phi\) defines a surjective \(\mathfrak{sl}_{2}[t]\)-module homomorphism from \(W(m,\mathbf{n})\) onto \(W_{loc}(m)\otimes V(\mathbf{n})\).

**Lemma 3.6.3**.: _We have \(\dim W(m,\mathbf{n})\leq 2^{m}\prod\limits_{i=1}^{k}(n_{i}+1)\)._

Proof.: As a consequence of the defining relation (3.12), the same arguments
as those used in Lemma 3.5.1 show that \(Yw_{\mathbf{n}}^{m}=0\) for all \(Y\in J_{\mathbf{n}}\). Hence it follows that if
\[\mathcal{B}_{\mathbf{n}}=\{y_{0}^{i_{1}}y_{1}^{i_{2}}\ldots y_{k-1}^{i_{k}}|(i_{ 1},i_{2},\ldots,i_{k})\in J(\mathbf{n})\cup(0,0,\cdots,0)\},\]
then
\[W(m,\mathbf{n})\subseteq\sum_{Y\in\mathcal{B}_{\mathbf{n}}}U(\mathfrak{n}^{+} [t])Yw_{\mathbf{n}}^{m}.\]
For \(Y\in U(\mathfrak{n}^{-}[t])\), let \(\ell n(Y)\) be the PBW length of \(Y\).
For \(Y\in U(\mathfrak{n}^{-}[t])\), when \(\ell n(Y)=0,\) we have \(U(\mathfrak{n}^{+}[t])Yw_{\mathbf{n}}^{m}=U(\mathfrak{n}^{+}[t])w_{\mathbf{n}}^{m}.\) In this case, as a consequence of the defining relations (3.10) and (3.11), arguments similar to those used in Lemma 3.5.1 show that if \(I_{m}\) is the ideal of \(U(\mathfrak{n}^{+}[t])\) defined in Remark 3.5.2, then, as vector spaces, \(U(\mathfrak{n}^{+}[t])/I_{m}\) is isomorphic to \(U(\mathfrak{n}^{+}[t])w_{\mathbf{n}}^{m}\). Hence \(\dim U(\mathfrak{n}^{+}[t])w_{\mathbf{n}}^{m}\leq 2^{m}.\)
Given \(Y\in\mathcal{B}_{\mathbf{n}},\) by applying induction on \(\ell n(Y),\) we now prove that
\[U(\mathfrak{n}^{+}[t])Yw_{\mathbf{n}}^{m}\subseteq YU(\mathfrak{n}^{+}[t])w_{ \mathbf{n}}^{m}+\sum_{\{Y^{\prime}\in\mathcal{B}_{\mathbf{n}}:\ell n(Y^{ \prime})<\ell nY\}}Y^{\prime}U(\mathfrak{n}^{+}[t])w_{\mathbf{n}}^{m}. \tag{3.13}\]
When \(\ell n(Y)=1,\)\(Y=y_{j}\) with \(0\leq j\leq m-1.\) In this case, given \(x_{0}^{a_{0}}x_{1}^{a_{1}}\ldots x_{p}^{a_{p}}\in U(\mathfrak{n}^{+}[t]),\) we have,
\[x_{0}^{a_{0}}x_{1}^{a_{1}}\ldots x_{p}^{a_{p}}y_{j}w_{\mathbf{n}}^{m}=x_{0}^{a _{0}}x_{1}^{a_{1}}\ldots x_{p}^{a_{p}-1}y_{j}x_{p}w_{\mathbf{n}}^{m}+x_{0}^{a _{0}}x_{1}^{a_{1}}\ldots x_{p}^{a_{p}-1}h_{j+p}w_{\mathbf{n}}^{m}.\]
Since \(x_{r}w_{\mathbf{n}}^{m}=0\) for \(r\geq m,\) we see that \(x_{0}^{a_{0}}x_{1}^{a_{1}}\ldots x_{p}^{a_{p}}y_{j}x_{p}w_{\mathbf{n}}^{m}=0\) whenever \(p\geq m.\) Therefore, assuming \(p\leq m-1,\) we get,
\[x_{0}^{a_{0}}x_{1}^{a_{1}}\ldots x_{p}^{a_{p}}y_{j}w_{\mathbf{n}}^ {m} =x_{0}^{a_{0}}x_{1}^{a_{1}}\ldots x_{p}^{a_{p}-1}y_{j}x_{p}w_{ \mathbf{n}}^{m}+x_{0}^{a_{0}}x_{1}^{a_{1}}\ldots x_{p}^{a_{p}-1}h_{j+p}w_{ \mathbf{n}}^{m},\] \[=x_{0}^{a_{0}}x_{1}^{a_{1}}\ldots x_{p}^{a_{p}-2}y_{j}x_{p}^{2}w_ {\mathbf{n}}^{m}+x_{0}^{a_{0}}x_{1}^{a_{1}}\ldots x_{p}^{a_{p}-2}h_{j+p}x_{p}w_ {\mathbf{n}}^{m}+x_{0}^{a_{0}}x_{1}^{a_{1}}\ldots x_{p}^{a_{p}-1}h_{j+p}w_{ \mathbf{n}}^{m}\] \[\vdots\] \[=y_{j}x_{0}^{a_{0}}x_{1}^{a_{1}}\ldots x_{p}^{a_{p}}w_{\mathbf{n}}^ {m}+u,\quad\text{where }u\in U(\mathfrak{n}^{+}[t])w_{\mathbf{n}}^{m}.\]
Thus (3.13) holds \(\ell n(Y)=1\). Now assume that (3.13) holds for all \(Y\in\mathcal{B}_{\mathbf{n}}\) with \(\ell n(Y)<r\) and consider \(x_{0}^{a_{0}}x_{1}^{a_{1}}\ldots x_{p}^{a_{p}}Yw_{\mathbf{n}}^{m},\) with \(Y=y_{0}^{i_{1}}y_{1}^{i_{2}}\ldots y_{k-1}^{i_{k}}\) with \(\ell n(Y)=r\). Then,
\[x_{0}^{a_{0}}x_{1}^{a_{1}}\ldots x_{p}^{a_{p}}y_{0}^{i_{1}}\cdots y _{k-1}^{i_{k}}w_{\mathbf{n}}^{m}\] \[=(x_{0}^{a_{0}}x_{1}^{a_{1}}\cdots x_{p}^{a_{p}-1}y_{0}x_{p}y_{0 }^{i_{1}-1}y_{1}^{i_{2}}\ldots y_{k-1}^{i_{k}}+x_{0}^{a_{0}}x_{1}^{a_{1}} \ldots x_{p}^{a_{p}-1}h_{p}y_{0}^{i_{1}-1}y_{1}^{i_{2}}\ldots y_{k-1}^{i_{k}})w _{\mathbf{n}}^{m}\] \[=x_{0}^{a_{0}}x_{1}^{a_{1}}\cdots x_{p}^{a_{p}-1}y_{0}x_{p}y_{0 }^{i_{1}-1}y_{1}^{i_{2}}\ldots y_{k-1}^{i_{k}}w_{\mathbf{n}}^{m}+(i_{1}-1)x_{0} ^{a_{0}}x_{1}^{a_{1}}\ldots x_{p}^{a_{p}-1}y_{0}^{i_{1}-2}y_{1}^{i_{2}}\ldots y _{p}^{i_{p}+1}\cdots y_{k-1}^{i_{k}}w_{\mathbf{n}}^{m}\] \[+\sum_{j=2}^{k-p-1}(i_{j})x_{0}^{a_{0}}x_{1}^{a_{1}}\cdots x_{p}^ {a_{p}-1}y_{0}^{i_{1}-1}y_{1}^{i_{2}}\ldots y^{i_{j}-1}\ldots y_{j+p}^{i_{j+p}+1 }\cdots y_{k-1}^{i_{k}}w_{\mathbf{n}}^{m}.\]
Using the vector space isomorphism between \(U(\mathfrak{n}^{-}[t])w_{\mathbf{n}}^{m}\) and \(U(\mathfrak{n}^{-}[t])/J_{\mathbf{n}}\), it follows from Lemma 3.5.1 that,
\[x_{0}^{a_{0}}x_{1}^{a_{1}}\dots x_{p}^{a_{p}}y_{0}^{i_{1}}\cdots y_{k-1}^{i_{k}} w_{\mathbf{n}}^{m}=x_{0}^{a_{0}}x_{1}^{a_{1}}\dots x_{p}^{a_{p}-1}y_{0}x_{p}y_{0}^{ i_{1}-1}y_{1}^{i_{2}}\dots y_{k-1}^{i_{k}}w_{\mathbf{n}}^{m}+u_{r-1},\]
where \(u_{r-1}\in\sum\limits_{\{Y^{\prime}\in\mathcal{B}_{\mathbf{n}}:\ell n(Y^{\prime})=r-1\}}U(\mathfrak{n}^{+}[t])\cdot Y^{\prime}w_{\mathbf{n}}^{m}\). Thus by induction hypothesis,
\[u_{r-1}\in\sum\limits_{Y\in\mathcal{B}_{\mathbf{n}}}Y\cdot U(\mathfrak{n}^{+}[t])\cdot w_{\mathbf{n}}^{m}.\]
Now, repeating the procedure we get,
\[x_{0}^{a_{0}}x_{1}^{a_{1}}\dots x_{p}^{a_{p}}y_{0}^{i_{1}}y_{1}^{i_{2}}\dots y _{k-1}^{i_{k}}w_{\mathbf{n}}^{m}=x_{0}^{a_{0}}x_{1}^{a_{1}}\dots x_{p}^{a_{p}- 2}y_{0}x_{p}^{2}y_{0}^{i_{1}-1}y_{1}^{i_{2}}\dots y_{k-1}^{i_{k}}w_{\mathbf{n} }^{m}+u_{r-2}+u_{r-1},\]
where \(u_{r-2}\in\sum\limits_{\{Y\in\mathcal{B}_{\mathbf{n}}\}}YU(\mathfrak{n}^{+}[t ])w_{\mathbf{n}}^{m}.\) Continuing this way, we get
\[x_{0}^{a_{0}}x_{1}^{a_{1}}\dots x_{p}^{a_{p}}y_{0}^{i_{1}}y_{1}^{i_{2}}\dots y _{k-1}^{i_{k}}w_{\mathbf{n}}^{m}=y_{0}x_{0}^{a_{0}}x_{1}^{a_{1}}\dots x_{p}^{a _{p}}y_{0}^{i_{1}-1}y_{1}^{i_{2}}\dots y_{k-1}^{i_{k}}w_{\mathbf{n}}^{m}+u,\]
where \(u\in\sum\limits_{\{Y^{\prime}\in\mathcal{B}_{\mathbf{n}}\}}Y^{\prime}\cdot U(\mathfrak{n}^{+}[t])w_{\mathbf{n}}^{m}\). This shows that \(W(m,\mathbf{n})\) is spanned by elements of the form \(\{YXw_{\mathbf{n}}^{m}:X\in\mathcal{B}_{\mathbf{1}^{m}},\,Y\in\mathcal{B}_{\mathbf{n}}\}\). Hence it follows that
\[\dim W(m,\mathbf{n})\leq|\mathcal{B}_{\mathbf{1}^{m}}||\mathcal{B}_{\mathbf{n }}|=2^{m}\prod\limits_{j=1}^{k}(n_{j}+1).\qed\]
We conclude this section with the proof of Theorem 3.6.1.
Proof.: From Lemma 3.6.2 and Lemma 3.4.1, it follows that \(\dim W(m,\mathbf{n})\geq 2^{m}\prod\limits_{i=1}^{k}(n_{i}+1)\).
On the other hand by Lemma 3.6.3, \(\dim W(m,\mathbf{n})\leq 2^{m}\prod\limits_{i=1}^{k}(n_{i}+1)\).
Hence it follows that \(\dim W(m,\mathbf{n})=2^{m}\prod\limits_{i=1}^{k}(n_{i}+1)=\dim W_{loc}(m)\otimes V(\mathbf{n})\).
This completes the proof of the theorem.
## 4. \(CV\)-filtration and Graded character of \(W_{loc}(m\omega)\otimes V(n)\)
Given a finite-dimensional graded \(\mathfrak{sl}_{2}[t]\)-module \(U\), we say \(U\) admits a \(CV\)-filtration if there exists a decreasing chain of \(\mathfrak{sl}_{2}[t]\)-submodules
\[U=U_{0}\supset U_{1}\supset U_{2}\supset U_{3}\supset\dots\supset U_{r} \supset(0)\]
such that for each \(i\), \(\frac{U_{i}}{U_{i+1}}\cong V(\xi^{i})\), where \(\xi^{i}\) is a partition. On the other hand, a filtration of a representation whose successive quotients are isomorphic to Demazure modules is called an excellent filtration. It is known that if \(V=V_{1}\supseteq V_{2}\supseteq\dots\supseteq V_{n}\supset 0\) is a chain of graded \(\mathfrak{g}[t]\)-modules whose successive quotients have excellent filtrations, then \(V\) has an excellent filtration. In this context the following result was proved in [7, Theorem 3.3].
**Proposition 4.0.1**.: _Let \(n\in\mathbb{N}\). Given a partition \(\mathbf{n}=(n_{1}\geq n_{2}\geq\cdots\geq n_{k})\) of \(n\), the \(\mathfrak{sl}_{2}[t]\)-module \(V(\mathbf{n})\) has a Demazure flag of level \(l\) if and only if \(n_{1}\leq l\)._
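For instance, since the local Weyl module \(W_{loc}(n)\cong V((1^{n}))\) corresponds to a partition with largest part \(1\), it admits a Demazure flag of every level \(l\geq 1\), whereas \(V((2,1))\) admits a level \(l\) Demazure flag precisely when \(l\geq 2\).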
We use this result to show that the \(\mathfrak{sl}_{2}[t]\)-modules of the form \(W_{loc}(m)\otimes V(n)\) admit a Demazure flag of level \(k\) for \(k\geq n+1\). We also deduce the fermionic version of the character formula for \(W_{loc}(m)\otimes V(n)\).
Given \(k,r\in\mathbb{N}\), with \(k\geq r\), let \(\lceil(k,r):=(k-r+1,1^{r-1})\) denote the hook type partition of \(k\) into \(r\) parts. In this section we prove the following:
**Theorem 4.1.1**.: _Let \(m,n\in\mathbb{N}\). The \(\mathfrak{sl}_{2}[t]\)-module, \(W_{loc}(m\omega)\otimes V(n)\) admits a \(CV\)-filtration whose successive quotients are isomorphic to_
\[V(\lceil(m-n+2i,m-n+i)),\quad 0\leq i\leq n,\quad\text{when }m>n;\] \[V(\lceil(n-m+2i,i),V(n-m),\quad 1\leq i\leq m,\quad\text{when }m\leq n.\]
We need the following lemmas to prove the theorem.
**Lemma 4.1.2**.: _Let \(r,i\in\mathbb{N}\). If \(r\geq i\), then the \(i\times i\) matrix \(A(r,i)\) given by_
\[A(r,i)=\begin{bmatrix}1&1&\cdots&1\\ \binom{r}{1}&\binom{r+1}{1}&\cdots&\binom{r+i-1}{1}\\ \vdots&\vdots&\cdots&\vdots\\ \binom{r}{i-1}&\binom{r+1}{i-1}&\cdots&\binom{r+i-1}{i-1}\end{bmatrix}\]
_is invertible. If \(r<i\), then the \(i\times i\) matrix \(B(r,i)\) given by_
\[B(r,i)=\begin{bmatrix}1&1&1&\cdots&1&1&\cdots&1\\ \binom{r}{1}&\binom{r+1}{1}&\binom{r+2}{1}&\cdots&\binom{i-2}{1}&\binom{i-1}{ 1}&\cdots&\binom{r+i-1}{1}\\ \vdots&\vdots&\vdots&\cdots&\vdots&\vdots&\cdots&\vdots\\ \binom{r}{r}&\binom{r+1}{r}&\binom{r+2}{r}&\cdots&\binom{i-2}{r}&\binom{i-1}{ r}&\cdots&\binom{r+i-1}{r}\\ 0&\binom{r+1}{r+1}&\binom{r+2}{r+1}&\cdots&\binom{i-2}{r+1}&\binom{i-1}{r+1}& \cdots&\binom{r+i-1}{r+1}\\ \vdots&\vdots&\vdots&\cdots&\vdots&\vdots&\cdots&\vdots\\ 0&0&0&\cdots&0&\binom{i-1}{i-1}&\cdots&\binom{r+i-1}{i-1}\end{bmatrix}\]
_is invertible._
Proof.: We prove the lemma by applying induction on \(i\).
When \(i=1\), \(A(r,1)=[1]\) which is invertible.
When \(i=2\), \(A(r,2)=\begin{bmatrix}1&1\\ r&r+1\end{bmatrix}\) and \(\det A(r,2)=1\), implying that \(A(r,2)\) is invertible.
Assume that the result holds for all \(k<i\) and consider the matrix \(A(r,i)\). On applying column transformation to \(A(r,i)\) we can get the matrix
\[\tilde{A}(r)=\left[\begin{array}{cccc}1&0&\cdots&0\\ \binom{r}{1}&1&\cdots&1\\ \binom{r}{2}&\binom{r}{1}&\cdots&\binom{r+i-2}{1}\\ \vdots&\vdots&\cdots&\vdots\\ \binom{r}{i-1}&\binom{r}{i-2}&\cdots&\binom{r+i-2}{i-2}\end{array}\right]\]
Clearly \(\det A(r,i)=\det\tilde{A}(r)=\det A(r,i-1).\) By induction hypothesis the matrix \(A(r,i-1)\) is invertible. Hence, it follows that \(A(r,i)\) is invertible.
Similar arguments show that the matrix \(B(r,i)\) is also invertible when \(r<i\).
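As a concrete illustration of the recursion used in the proof, for \(i=3\) the column operations give
\[\det A(r,3)=\det\begin{bmatrix}1&0&0\\ r&1&1\\ \binom{r}{2}&r&r+1\end{bmatrix}=\det\begin{bmatrix}1&1\\ r&r+1\end{bmatrix}=1,\]
so that, in the setting of the lemma, \(\det A(r,i)=1\) for every \(i\).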
**Lemma 4.1.3**.: _Let \(m,n\in\mathbb{N}\). (i) For \(m>n\), \(\sum\limits_{i=0}^{n}\dim V(\lceil(m-n+2i,m-n+i))=2^{m}(n+1).\) (ii) For \(m\leq n\), \(\sum\limits_{i=1}^{m}\dim V(\lceil(n-m+2i,i))+\dim V(n-m)=2^{m}(n+1)\)._
Proof.: By Lemma 3.4.1,
\[\dim V(\lceil(m+n-2i,m-i))=2^{m-i-1}(n-i+1+1)=2^{m-i-1}(n-i+2).\]
(i) When \(m>n\), we have,
\[\sum\limits_{i=0}^{n}\dim V(\lceil(m-n+2i,m-n+i)=\sum\limits_{i=0}^{n}(i+2)2^ {m-n+i-1}=2^{m-n-1}\sum\limits_{i=0}^{n}(i+2)2^{i}\]
Putting \(s_{n}=\sum\limits_{i=0}^{n}(i+2)2^{i}\), we see that \(s_{n}=\sum\limits_{i=1}^{n}i2^{i}+\sum\limits_{i=1}^{n+1}2^{i}.\)
As \(\sum\limits_{i=1}^{n}i2^{i}+(n+1)2^{n+1}-2(\sum\limits_{i=1}^{n}i2^{i})=2(2^{ n+1}-1),\)
\[s_{n}=(n+1)2^{n+1}-2(2^{n+1}-1)+\sum\limits_{i=1}^{n+1}2^{i}=(n+1)2^{n+1}.\]
Hence, \(\sum\limits_{i=0}^{n}\dim V(\lceil(m+n-2i,m-i)=2^{m-n-1}s_{n}=2^{m}(n+1).\)
(ii) When \(m\leq n\), we have,
\[\sum\limits_{i=1}^{m}\dim V(\lceil(n-m+2i,i))+\dim V(n-m)=\sum \limits_{i=1}^{m}(n-m+2+i)2^{i-1}+(n-m+1)\] \[=\sum\limits_{i=0}^{m-1}2^{i}(n-m+3+i)+(n-m+1)=(n-m+1)\sum \limits_{i=0}^{m-1}2^{i}+\sum\limits_{i=0}^{m-1}(i+2)2^{i}+(n-m+1)\]
Using part(i), we thus get,
\[\sum_{i=0}^{m-1}\dim V(\lceil(n+m-2i,m-i))+\dim V(n-m)\] \[=(n-m+1)(2^{m}-1)+m2^{m}+(n-m+1)=2^{m}(n+1).\]
This completes the proof of the lemma.
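As a quick numerical check of part (i), take \(m=3\), \(n=1\): the summands are \(\dim V(\lceil(2,2))=2\cdot 2=4\) and \(\dim V(\lceil(4,3))=4\cdot 3=12\), and indeed \(4+12=16=2^{3}(1+1)\).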
We now prove Theorem 4.1.1.
Proof.: Let \(w_{m}\) and \(v_{n}\) be the highest weight generators of the modules \(W_{loc}(m)\) and \(V(n)\) respectively. By Lemma 3.4.2, \(W_{loc}(m)\otimes V(n)=U(\mathfrak{sl}_{2}[t]).(w_{m}\otimes y_{0}^{(n)}v_{n})\).
When \(m>n\geq 1\), set
\[U_{i}=W_{n-i}=U(\mathfrak{sl}_{2}[t])(w_{m}\otimes y_{0}^{(n-i)}v_{n}),\qquad 0 \leq i\leq n.\]
As \(x_{r}.w_{m}=0\) for all \(r\in\mathbb{Z}_{+}\), using representation theory of \(\mathfrak{sl}_{2}(\mathbb{C})\)[14, Section 7], we have
\[x_{0}.(w_{m}\otimes y_{0}^{(i)}v_{n})=w_{m}\otimes x_{0}y_{0}^{(i)}v_{n}=(n-i +1)w_{m}\otimes y_{0}^{(i-1)}v_{n}.\]
It follows that \(U_{i+1}\subseteq U_{i}\) for \(0\leq i<n\), while, by Lemma 3.4.2, \(U_{0}=U(\mathfrak{sl}_{2}[t])(w_{m}\otimes y_{0}^{(n)}v_{n})\) is all of \(W_{loc}(m\omega)\otimes V(n)\). Hence,
\[W_{loc}(m\omega)\otimes V(n)=U_{0}\supset U_{1}\supset\cdots\supset U_{n}\supset 0,\]
is a decreasing chain of submodules of \(W_{loc}(m\omega)\otimes V(n)\). To prove the result we show:
\[\frac{U_{i}}{U_{i+1}}\cong V(\lceil(m-n+2i,m-n+i)=V(i+1,1^{m-n+i-1}),\qquad \text{ for }\,0\leq i\leq n,\]
or equivalently, \(\frac{W_{i}}{W_{i-1}}\cong V(\lceil(m+n-2i,m-i)=V(n+1-i,1^{m-i-1})\text{ for }0\leq i\leq n.\)
By definition (Section 2.5),
\[g_{r}.v=0,\qquad\text{ for all }\,v\in V(n),\,g\in\mathfrak{g},\,r\in\mathbb{N}.\]
Hence for \(r,s\in\mathbb{Z}_{+}\),
\[\begin{array}{ll}(-1)^{s}y(r,s)(w_{m}\otimes y_{0}^{(i)}v_{n})&=x_{1}^{(s)}y_{0}^{(r+s)}(w_{m}\otimes y_{0}^{(i)}v_{n})\\ &=\frac{x_{1}^{(s)}}{(r+s)!i!}\left(\sum\limits_{l=0}^{r+s}\binom{r+s}{l}y_{0}^{r+s-l}w_{m}\otimes y_{0}^{l+i}v_{n}\right)\\ &=x_{1}^{(s)}\left(\sum\limits_{l=0}^{r+s}\binom{l+i}{l}\,y_{0}^{(r+s-l)}w_{m}\otimes y_{0}^{(l+i)}v_{n}\right)\\ &=\sum\limits_{l=0}^{r+s}\ \binom{l+i}{l}\ x_{1}^{(s)}y_{0}^{(r+s-l)}w_{m}\otimes y_{0}^{(l+i)}v_{n}\\ &=\sum\limits_{l=0}^{r-1}\ \binom{l+i}{l}\ y(r-l,s)w_{m}\otimes y_{0}^{(l+i)}v_{n}\end{array} \tag{4.1}\]
Further, as \(y(r,s)w_{m}=0\) whenever \(r+s>m\),
\[y(r,s)(w_{m}\otimes y_{0}^{(i)}v_{n})=0\qquad\forall\,s\geq m,\,r\geq 1,\,i\geq 0.\]
Therefore,
\[x_{r}.(w_{m}\otimes v_{n})=0,\qquad h_{r}.(w_{m}\otimes v_{n})=\delta_{r,0}(n+ m)(w_{m}\otimes v_{n}),\qquad\forall\,r\in\mathbb{Z}_{+},\]
\[y(r,m)(w_{m}\otimes v_{n})=0\quad\forall\,1\leq r\leq n+1,\]
implying that \(W_{0}=U_{n}\) is a quotient of \(V(\lceil(m+n,m)\rceil)\).
Now, using standard \(\mathfrak{sl}_{2}\)-representation theory ([14, Section 7]) we see that for \(i>0\),
\[x_{r}.(w_{m}\otimes y_{0}^{(i)}v_{n}) =\delta_{r,0}(n-i+1)\,w_{m}\otimes y_{0}^{(i-1)}v_{n}\in U_{n-i+1}\] \[h_{r}.(w_{m}\otimes y_{0}^{(i)}v_{n}) =\delta_{r,0}(m+n-2i)\,w_{m}\otimes y_{0}^{(i)}v_{n},\]
and putting \(s=m-i\) in (4.1), we get,
\[y(r,m-i)(w_{m}\otimes y_{0}^{(i)}v_{n})=\sum_{l=0}^{r-1}\ \binom{l+i}{l}\ y(r-l,m-i)w_{m}\otimes y_{0}^{(l+i)}v_{n}. \tag{4.2}\]
Once again using the relation \(y(r,s)w_{m}=0\) whenever \(r+s>m\), we see that for \(r\geq i\), (4.2) reduces to:
\[(x\otimes t)^{(m-i)}(y\otimes 1)^{(r+m-i)}(w_{m}\otimes y_{0}^{(i)}v_{n}) =\sum_{l=0}^{r-1}\ \binom{l+i}{l}\ y(r-l,m-i)w_{m}\otimes y_{0}^{(l+i)}v_{n}\] \[=\sum_{l=r-i}^{r-1}\ \binom{l+i}{l}\ y(r-l,m-i)w_{m}\otimes y_{0}^{(l+i) }v_{n}\] \[=\sum_{l=0}^{i-1}\binom{r+l}{i}y(i-l,m-i)w_{m}\otimes y_{0}^{(r+l) }v_{n}); \tag{4.3}\]
and for \(r<i\), putting \(r=i-p\), (4.2) reduces to:
\[\begin{array}{rl}y(r,m-i)(w_{m}\otimes y_{0}^{(i)}v_{n})&=\sum_{l=0}^{r-1}\ \binom{l+i}{l}\ y(i-p-l,m-i)w_{m}\otimes y_{0}^{(l+i)}v_{n}\\ &=\sum_{k=p}^{r+p-1}\ \binom{k-p+i}{i}\ y(i-k,m-i)w_{m}\otimes y_{0}^{(k-p+i) }v_{n}\\ &=\sum_{k=p}^{i-1}\ \binom{k+r}{i}\ y(i-k,m-i)w_{m}\otimes y_{0}^{(k+r) }v_{n}.\end{array} \tag{4.4}\]
Having established (4.3) and (4.4), to prove that \(\dfrac{W_{i}}{W_{i-1}}\) is a quotient of \(V(\lceil(m+n-2i,m-i))\) it suffices to show that
\[\{y(i-l,m-i)w_{m}\otimes y_{0}^{(r+l)}v_{n}:0\leq l\leq i-1\}\subset W_{i-1}, \quad\text{ for }1\leq r\leq n+i+1 \tag{4.5}\]
Since \(w_{m}\otimes y_{0}^{(j)}v_{n}\in W_{i-1}\) for \(0\leq j\leq i-1\), \(0\leq i\leq n\),
\[y(r+i-j,m-i)(w_{m}\otimes y_{0}^{(j)}v_{n})\in W_{i-1},\qquad\forall\ 0\leq j \leq i-1.\]
Further, for \(r\geq i>j\geq 0\) we have,
\[\begin{array}{rl}(-1)^{m-i}x_{1}^{(m-i)}y_{0}^{(m+r-j)}(w_{m}\otimes y_{0}^{( j)}v_{n})&=y(i+r-j,m-i)(w_{m}\otimes y_{0}^{(j)}v_{n}),\\ &=\sum\limits_{l=r-j}^{m+r-j-1}\binom{j+l}{j}y(r-j+i-l,m-i)w_{m}\otimes y_{0}^ {(j+l)}v_{n}\\ &=\sum\limits_{s=0}^{i-1}\binom{r+s}{j}y(i-s,m-i)w_{m}\otimes y_{0}^{(r+s)}v_ {n}.\end{array} \tag{4.6}\]
Setting \(X:=\begin{bmatrix}y(i,m-i)w_{m}\otimes y_{0}^{(r)}v_{n}\\ y(i-1,m-i)w_{m}\otimes y_{0}^{(r+1)}v_{n}\\ \vdots\\ y(1,m-i)w_{m}\otimes y_{0}^{(r+i-1)}v_{n}\end{bmatrix}\) and \(Y:=\begin{bmatrix}(-1)^{m-i}x_{1}^{(m-i)}y_{0}^{(m+r)}(w_{m}\otimes v_{n})\\ (-1)^{m-i}x_{1}^{(m-i)}y_{0}^{(m+r-1)}(w_{m}\otimes y_{0}^{(1)}v_{n})\\ \vdots\\ (-1)^{m-i}x_{1}^{(m-i)}y_{0}^{(m+r-i+1)}(w_{m}\otimes y_{0}^{(i-1)}v_{n})\end{bmatrix}\)
we can write the set of equations (4.6), for \(0\leq j\leq i-1\), as \(Y=A(r,i)X\). As the entries of the column matrix \(Y\) are contained in \(W_{i-1}\) and, by Lemma 4.1.2, the coefficient matrix \(A(r,i)\) is invertible, it follows that the entries of \(X\) lie in \(W_{i-1}\). Hence, (4.5) holds, implying that \(y(r,m-i)(w_{m}\otimes y_{0}^{(i)}v_{n})\in W_{i-1}\) for \(i\leq r\leq n-i+1\).
On the other hand, for \(r=i-p<i\), \(j=r+s\leq i-1\), \(1\leq s\leq p-1\), we have
\[\begin{array}{rl}(-1)^{m-i}x_{1}^{(m-i)}y_{0}^{(m+r-j)}(w_{m}\otimes y_{0}^{ (j)}v_{n})&=y(i+r-j,m-i)(w_{m}\otimes y_{0}^{(j)}v_{n}),\\ &=\sum\limits_{l=0}^{m+r-j-1}\binom{r+s+l}{r+s}y(i-s-l,m-i)w_{m}\otimes y_{0}^ {(r+s+l)}v_{n}\\ &=\sum\limits_{k=j-r}^{i-1}\binom{r+k}{j}y(i-k,m-i)w_{m}\otimes y_{0}^{(r+k)}v _{n}.\end{array} \tag{4.7}\]
Now setting,
\[\bar{Y}:=\begin{bmatrix}(-1)^{m-i}x_{1}^{(m-i)}y_{0}^{(m+r)}(w_{m}\otimes v_{ n})\\ \vdots\\ (-1)^{m-i}x_{1}^{(m-i)}y_{0}^{(m)}(w_{m}\otimes y_{0}^{(r)}v_{n})\\ (-1)^{m-i}x_{1}^{(m-i)}y_{0}^{(m+r-(r+1)}(w_{m}\otimes y_{0}^{(r+1)}v_{n})\\ \vdots\\ (-1)^{m-i}x_{1}^{(m-i)}y_{0}^{(m+r-(i-1)}(w_{m}\otimes y_{0}^{(i-1)}v_{n}) \end{bmatrix}\qquad\text{and}\qquad\bar{X}:=\begin{bmatrix}y(i,m-i)w_{m}\otimes y _{0}^{(r)}v_{n}\\ y(i-1,m-i)w_{m}\otimes y_{0}^{(r+1)}v_{n}\\ \vdots\\ y(1,m-i)w_{m}\otimes y_{0}^{(r+i-1)}v_{n}\end{bmatrix}\]
we can write the equations (4.6) for \(0\leq j\leq r\) and (4.7) for \(r<j\leq i-1\), as \(\bar{Y}=B(r,i)\bar{X}\), where \(B(r,i)\) is the \(i\times i\) matrix defined in Lemma 4.1.2. As in the case above, using
the fact that \(B(r,i)\) is invertible (Lemma 4.1.2), we see that the entries of \(\bar{X}\) lie in the space spanned by the vectors \(\{x_{1}^{(m-i)}y_{0}^{(m+r-j)}(w_{m}\otimes y_{0}^{(j)}v_{n}):0\leq j\leq i-1\}\subset W_{i-1}\).
If \(\overline{w_{m}\otimes y_{0}^{(i)}v_{n}}\) is the image of \(w_{m}\otimes y_{0}^{(i)}v_{n}\) in \(\dfrac{W_{i}}{W_{i-1}}\) and \(\Phi_{i}:V(\lceil(m+n-2i,m-i))\to\dfrac{W_{i}}{W_{i-1}}\) is a map such that \(\Phi_{i}(v_{\lceil(m+n-2i,m-i)})=\overline{w_{m}\otimes y_{0}^{(i)}v_{n}}\), then it follows from above that \(\Phi_{i}\) is a surjective \(\mathfrak{sl}_{2}[t]\)-module homomorphism. Therefore,
\[\dim W_{loc}(m\omega)\otimes V(n)=\sum_{i=0}^{n}\dim\dfrac{W_{i}}{W_{i-1}}\leq\sum_{i=0}^{n}\dim V(\lceil(m+n-2i,m-i)).\]
Setting \(W_{-1}=0\) and using Lemma 3.4.2 and Lemma 4.1.3(i), we thus see that
\[\sum_{i=0}^{n}\dim\dfrac{W_{i}}{W_{i-1}}=2^{m}(n+1).\]
However, this is possible only if the surjection \(\Phi_{i}\) is an isomorphism for \(0\leq i\leq n\). This completes the proof of the theorem in this case.
When \(m\leq n\), considering the generator \(y_{0}^{(m)}w_{m}\otimes v_{n}\) of \(W_{loc}(m)\otimes V(n)\) and setting \(U_{i}=U(\mathfrak{sl}_{2}[t])(y_{0}^{(m-i)}w_{m}\otimes v_{n})\) for \(0\leq i\leq m\), we see that there exists a decreasing chain of submodules
\[W_{loc}(m\omega)\otimes V(n)=U_{0}\supset U_{1}\supset\cdots\supset U_{m} \supset 0.\]
Using similar arguments as above and Lemma 4.1.3(ii), we see that even in this case, \(\dfrac{U_{i}}{U_{i+1}}\) is isomorphic to \(V(\lceil(n-m+2i,i))\) for \(1\leq i\leq m\) and \(\dfrac{U_{0}}{U_{1}}\) is isomorphic to \(V(n-m)\). This completes the proof of the theorem.
The following result is an immediate consequence of Theorem 4.1.1
**Corollary 4.1.4**.: _Given two positive integers \(m,n\), we have,_
\[ch_{gr}\,W_{loc}(m)\otimes V(n)\] \[=\left\{\begin{array}{ll}\sum\limits_{i=0}^{n}ch_{gr}\,V(i+1,1 ^{m-n+i-1})&\text{for $m>n$},\\ \sum\limits_{i=1}^{m}ch_{gr}\,V(n-m+i+1,1^{i-1})+ch_{gr}\,V(n-m,0)&\text{for $m\leq n$}. \end{array}\right.\]
**Remark 4.1.5**.: Combinatorial formulas for determining the multiplicity of level \(\ell\) Demazure modules in CV-modules associated with hook-type partitions have been obtained in [2, Theorem 4] and [7, Theorem 4.3]. Using the latter in combination with Corollary 4.1.4, one can deduce a combinatorial formula for the graded multiplicities of level \(\ell\) Demazure modules in \(W_{loc}(m\omega)\otimes ev_{0}V(n\omega)\) for all positive integers \(m,n\).
In view of Theorem 3.6.1, we shall refer to \(W_{loc}(m)\otimes V(n)\) as \(W(m,n)\) in the rest of this section. As \(W(m,n)\) is a graded \(\mathfrak{sl}_{2}[t]\)-module,
\[ch_{gr}W(m,n)=\sum\limits_{k\in\mathbb{Z}_{+}}ch_{\mathfrak{sl}_{2}}\ W(m,n)[k] \ q^{k},\]
where \(W(m,n)[k]\) is a finite-dimensional \(\mathfrak{sl}_{2}\)-module.
Define \([W(m,n):V(r)]_{q}\) as the polynomial in indeterminate \(q\) given by
\[[W(m,n):V(r)]_{q}=\sum\limits_{k\geq 0}{[W(m,n)[k]:V(r,k)]q^{k}},\]
where \([W(m,n)[k]:V(r,k)]\), denotes the multiplicity of the irreducible \(\mathfrak{sl}_{2}[t]\)-module \(V(r,k)\) in \(W(m,n)[k]\) for \(r\in\mathbb{Z}_{+}\). Using Theorem 4.1.1, we now obtain the polynomials \([W(m,n):V(r)]_{q}\).
**Theorem 4.2.1**.: _Let \(m,n\in\mathbb{N}\), and \(i\in\mathbb{Z}_{+}\) be such that \(m+n-2i\geq 0\). If \(m\leq n\),_
\[[W(m,n):V(m+n-2i)]_{q}=\left[\begin{matrix}m\\ i\end{matrix}\right]_{q},\quad 0\leq i\leq m,\]
_If \(m>n\),_
\[[W(m,n):V(m+n-2i)]_{q}=\left\{\begin{array}{c}\left[\begin{matrix}m\\ i\end{matrix}\right]_{q},\quad 0\leq i\leq n,\\ \left[\begin{matrix}m\\ i\end{matrix}\right]_{q}-\left[\begin{matrix}m\\ i-n-1\end{matrix}\right]_{q},\quad n+1\leq i\leq\lfloor\frac{n+m}{2}\rfloor \end{array}\right.\]
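As a quick illustration of the first case (not needed for the proof), take \(m=1\leq n\): the theorem gives \([W(1,n):V(n+1)]_{q}=\begin{bmatrix}1\\ 0\end{bmatrix}_{q}=1\) and \([W(1,n):V(n-1)]_{q}=\begin{bmatrix}1\\ 1\end{bmatrix}_{q}=1\), which matches the decomposition \(W_{loc}(1)\otimes V(n)\cong V(1)\otimes V(n)\cong V(n+1)\oplus V(n-1)\) as \(\mathfrak{sl}_{2}\)-modules, concentrated in grade zero.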
The proof of the theorem occupies the rest of this section.
At the outset we record three \(q\)-identities that we shall use.
\[q^{r}\left[\begin{matrix}n-1\\ r\end{matrix}\right]_{q}+\left[\begin{matrix}n-1\\ r-1\end{matrix}\right]_{q}=\left[\begin{matrix}n\\ r\end{matrix}\right]_{q},\qquad\qquad\text{for $n\in\mathbb{N}$, $r\in\mathbb{Z}_{+}$}. \tag{4.8}\] \[\left[\begin{matrix}n\\ r\end{matrix}\right]_{q}+q^{n-r+1}\left[\begin{matrix}n\\ r-1\end{matrix}\right]_{q}=\left[\begin{matrix}n+1\\ r\end{matrix}\right]_{q},\qquad\qquad\text{for $n\in\mathbb{N}$, $r\in\mathbb{Z}_{+}$}.\] (4.9) \[\sum\limits_{i=0}^{n}q^{i}\left[\begin{matrix}m+i\\ i\end{matrix}\right]_{q}=\left[\begin{matrix}n+m+1\\ n\end{matrix}\right]_{q}\qquad\qquad\text{for $m\in\mathbb{N}$, $n\in\mathbb{Z}_{+}$}. \tag{4.10}\]
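For instance, with \(n=2\) and \(r=1\), identity (4.8) reads \(q\begin{bmatrix}1\\ 1\end{bmatrix}_{q}+\begin{bmatrix}1\\ 0\end{bmatrix}_{q}=q+1=\begin{bmatrix}2\\ 1\end{bmatrix}_{q}\).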
We need the following results to prove Theorem 4.2.1.
**Lemma 4.2.2**.: _Given \(k\in\mathbb{N}\) and \(r\in\mathbb{Z}_{+}\), we have,_
\[ch_{gr}V(k+r,1^{k}) = ch_{gr}V(k+r+1,1^{k-1})+q^{k}\Big{(}ch_{\mathsf{sl}_{2}}V(r,0)+ \sum\limits_{i=2}^{k}ch_{gr}V(i+r,1^{i-2})\Big{)}\] \[ch_{gr}V(k,1^{k+r}) = ch_{gr}\ V(k+1,1^{k+r-1})+q^{k+r}\Big{(}ch_{gr}W_{loc}(r)+\sum \limits_{i=2}^{k}ch_{gr}\ V(i,1^{i+r-2})\Big{)}\]
Proof.: This is a direct consequence of [2, Theorem 18].
**Lemma 4.2.3**.: _The graded characters of \(V(\xi)\) corresponding to hook type partitions \(\xi\) are as follows:_
\[ch_{gr}V(k+r,1^{k}) = \sum\limits_{p=0}^{k}q^{p}\begin{bmatrix}k\\ p\end{bmatrix}_{q}ch_{\mathsf{sl}_{2}}V(r+2(k-p))\] \[ch_{gr}V(k,1^{k+1}) = \sum\limits_{p=0}^{k}q^{k-p}\begin{bmatrix}k+1\\ p+1\end{bmatrix}_{q}ch_{\mathsf{sl}_{2}}V(1+2p)\] \[ch_{gr}V(k,1^{k+r}) = \sum\limits_{p=0}^{k}q^{p}\begin{bmatrix}k+r\\ p\end{bmatrix}_{q}ch_{\mathsf{sl}_{2}}V(r+2(k-p))\] \[+ \sum\limits_{p=k+1}^{\lfloor\frac{2k+r}{2}\rfloor}q^{p}( \begin{bmatrix}k+r\\ p\end{bmatrix}_{q}-q^{2k+r+1-2p}\begin{bmatrix}k+r\\ p-k-1\end{bmatrix})ch_{\mathsf{sl}_{2}}V(r-2(p-k))\qquad\text{ for }r\geq 2.\]
Proof.: Applying induction on \(k\), we prove the character formula of \(V(k+r,1^{k})\), \(r\geq 0\).
When \(k=1\), by Lemma 3.4.1(b) we have the following short exact sequence :
\[0\to\tau_{1}(V(n-1))\to V(n,1)\xrightarrow{\phi}V(n+1,0)\to 0\]
Therefore,
\[ch_{gr}V(n,1)=ch_{\mathsf{sl}_{2}}V(n+1)+q\ ch_{\mathsf{sl}_{2}}V(n-1)=\sum \limits_{p=0}^{1}q^{p}\begin{bmatrix}1\\ p\end{bmatrix}_{q}ch_{\mathsf{sl}_{2}}\ V(n+1-2p).\]
Assume that the given character formula holds for all \(k<l\) and consider the module \(V(l+r,1^{l})\). Using Lemma 4.2.2 and applying the inductive hypothesis, we get
\[ch_{gr}\ V(l+r,1^{l})=ch_{gr}V(l+r+1,1^{l-1})+q^{l}ch_{\mathfrak{sl}_{2}}V(r)+q^{l}\big{(}\sum_{i=2}^{l}ch_{gr}V(i-2+r+2,1^{i-2})\big{)}\] \[=\sum_{p=0}^{l-1}q^{p}\begin{bmatrix}l-1\\ p\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(r+2(l-p))+q^{l}\Big{(}ch_{\mathfrak{sl}_{2}}V(r)+\sum_{i=0}^{l-2}\big{(}\sum_{s=0}^{i}q^{s}\begin{bmatrix}i\\ s\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}\ V(r+2+2i-2s)\big{)}\Big{)}\] \[=\sum_{p=0}^{l-1}q^{p}\begin{bmatrix}l-1\\ p\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(r+2(l-p))+q^{l}\Big{(}ch_{\mathfrak{sl}_{2}}V(r)+\sum_{j=0}^{l-2}\big{(}\sum_{s=0}^{l-2-j}q^{s}\begin{bmatrix}j+s\\ s\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}\ V(r+2+2j)\big{)}\Big{)}\] \[=\sum_{j=0}^{l-1}q^{l-1-j}\begin{bmatrix}l-1\\ j\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(r+2(j+1))+\sum_{j=0}^{l-2}q^{l}\begin{bmatrix}l-1\\ j+1\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}\ V(r+2(j+1))+q^{l}ch_{\mathfrak{sl}_{2}}V(r),\quad\text{ by }(4.10)\] \[=\sum_{p=0}^{l}q^{p}\begin{bmatrix}l\\ p\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(r+2(l-p))\qquad\quad\text{ by }(4.8).\]
This establishes the character formula for the module \(V(k+r,1^{k})\).
When \(r=1\), using Lemma 4.2.2 and the character formula for \(V(k+1,1^{k})\), we have
\[ch_{gr}(V(k,1^{k+1}))=ch_{gr}V(k+1,1^{k})+q^{k+1}\big{(}\sum_{i= 1}^{k}ch_{gr}V(i,1^{i-1})\big{)}\] \[=\sum_{p=0}^{k}q^{p}\begin{bmatrix}k\\ p\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(1+2(k-p))+q^{k+1}\Big{(}\sum_{j=0}^{k -1}\big{(}\sum_{s=0}^{j}q^{s}\begin{bmatrix}j\\ s\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(1+2(j-s))\big{)}\Big{)}\]
Rearranging the terms and using (4.10) we get
\[=\sum_{p=0}^{k}q^{p}\begin{bmatrix}k\\ p\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(1+2(k-p))+q^{k+1}\Big{(}\sum_{i=0}^{k-1}\begin{bmatrix}k\\ i+1\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(1+2i)\Big{)}\] \[=\sum_{j=0}^{k}q^{k-j}\Big{(}\begin{bmatrix}k\\ j\end{bmatrix}_{q}+q^{j+1}\begin{bmatrix}k\\ j+1\end{bmatrix}_{q}\Big{)}ch_{\mathfrak{sl}_{2}}V(1+2j)=\sum_{j=0}^{k}q^{k-j}\begin{bmatrix}k+1\\ j+1\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(1+2j),\qquad\text{(using (4.8))}.\]
To establish the character formula of \(V(k,1^{k+r})\) for \(r\geq 2\), we apply induction on \(r\).
When \(r=2\), using Lemma 4.2.2 (together with \(W_{loc}(2)\cong V(1,1)\)) and the character formula for \(V(k+r,1^{k})\) we have,
\[ch_{gr}(V(k,1^{k+2}))=ch_{gr}V(k+1,1^{k+1})+q^{k+2}\big{(}\sum_{i=1}^{k}ch_{gr}V(i,1^{i})\big{)}\] \[=\sum_{p=0}^{k+1}q^{p}\begin{bmatrix}k+1\\ p\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(2(k+1-p))+q^{k+2}\Big{(}\sum_{i=1}^{k}\sum_{s=0}^{i}q^{s}\begin{bmatrix}i\\ s\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(2(i-s))\Big{)}\]
Rearranging the terms in the summation and using (4.10) we have,
\[ch_{gr}(V(k,1^{k+2}))=\sum_{p=0}^{k}q^{p}\begin{bmatrix}k+1\\ p\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(2+2(k-p))\] \[+\sum_{j=1}^{k}q^{k+2}\begin{bmatrix}k+1\\ j+1\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(2j)+(q^{k+1}+q^{k+3})\begin{bmatrix} k\\ 1\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(0)\] \[=V(2k+2)+(q^{k+1}+q^{k+3})\begin{bmatrix}k\\ 1\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(0)+\sum_{j=1}^{k}q^{k-j+1}\big{(} \begin{bmatrix}k+1\\ j\end{bmatrix}_{q}+q^{j+1}\begin{bmatrix}k+1\\ j+1\end{bmatrix}_{q}\big{)}ch_{\mathfrak{sl}_{2}}V(2j)\]
Using (4.8) and substituting \(k-j+1=p\) we get,
\[=ch_{\mathfrak{sl}_{2}}V(2k+2)+\sum_{p=1}^{k}q^{p}\begin{bmatrix}k+2\\ k-p+2\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(2+2(k-p))+q^{k+1}\big{(}\begin{bmatrix}k+2\\ 1\end{bmatrix}_{q}-q\big{)}ch_{\mathfrak{sl}_{2}}V(0).\]
This proves that the formula for \(V(k,1^{k+r})\) holds when \(r=2\). Similarly, we can prove the formula for \(r=3\) separately. Now, assume that the character formula for \(V(k,1^{k+r})\) holds for all \(4\leq r<l\). Then
\[ch_{gr}V(k,1^{k+l})=ch_{gr}V(k+1,1^{k+l-1})+q^{k+l}\Big{(}\sum_{i =1}^{k}ch_{gr}V(i,1^{i+l-2})\Big{)}\] \[=\sum_{p=0}^{k+1}q^{p}\begin{bmatrix}k+l-1\\ p\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(2k+l-2p)+q^{k+l}\Big{(}\sum_{i=1}^{k} \big{(}\sum_{p=0}^{i}q^{p}\begin{bmatrix}i+l-2\\ p\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(l+2(i-p-1))\Big{)}\Big{)}\] \[+\sum_{p=k+2}^{\lfloor\frac{2k+l}{2}\rfloor}q^{p}(\begin{bmatrix} k+l-1\\ p\end{bmatrix}_{q}-q^{2k+l+1-2p}\begin{bmatrix}k+l-1\\ p-k\end{bmatrix})ch_{\mathfrak{sl}_{2}}V(2k+l-2p)\] \[+q^{k+l}\Big{(}\sum_{p=i+1}^{\lfloor\frac{2i+l-2}{2}\rfloor}q^{p} (\begin{bmatrix}i+l-2\\ p\end{bmatrix}_{q}-q^{2i+l-1-2p}\begin{bmatrix}i+l-2\\ p-i-1\end{bmatrix})ch_{\mathfrak{sl}_{2}}V(2i+l-2-2p)\big{)}\Big{)}\] \[=\sum_{p=0}^{k}q^{p}\begin{bmatrix}k+l\\ p\end{bmatrix}_{q}ch_{gr}V(2k+l-2p,0)\] \[+\sum_{p=k+1}^{\lfloor\frac{2k+l}{2}\rfloor}q^{p}(\begin{bmatrix} k+l\\ p\end{bmatrix}_{q}-q^{2k+l+1-2p}\begin{bmatrix}k+l\\ p-k-1\end{bmatrix}_{q})ch_{gr}V(2k+l-2p,0),\]
where the last line follows after rearranging the terms and using (4.8) and (4.9).
We now complete the proof of Theorem 4.2.1.
Proof.: We first consider the case when \(m\leq n\). Suppose that \(n=m+r\).
Then from Corollary 4.1.4 and Lemma 4.2.3 it follows that :
\[ch_{gr}\ W(m,m+r) =\sum_{i=1}^{m}ch_{gr}V(r+2+i-1,1^{i-1})+ch_{\mathfrak{sl}_{2}}\ V(r)\] \[=\sum_{k=0}^{m-1}\Big{(}\sum_{p=0}^{k}q^{p}\begin{bmatrix}k\\ p\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}\ V(r+2+2(k-p))\Big{)}+ch_{\mathfrak{sl} _{2}}V(r)\]
Rearranging the terms we thus get,
\[=\sum_{i=1}^{m}\Big{(}\sum_{p=0}^{m-i}q^{p}\begin{bmatrix}p+i-1\\ p\end{bmatrix}_{q}\Big{)}ch_{\mathfrak{sl}_{2}}\ V(r+2i)+ch_{\mathfrak{sl}_{2}}V(r)\] \[=\sum_{i=0}^{m}\begin{bmatrix}m\\ i\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(r+2i),\quad\text{(by (4.10))}.\]
We now consider the case when \(m>n\), i.e., \(m=n+r\). It follows from Corollary 4.1.4 that,
\[ch_{gr}W(n+r,n)=\sum_{i=0}^{n}ch_{gr}V(i+1,1^{r+i-1})=\sum_{k=1}^{n+1}ch_{gr}V (k,1^{r-2+k}).\]
Hence, using Lemma 4.2.3, for \(r>3\), we have,
\[ch_{gr}W(n+r,n)=\sum_{k=1}^{n+1}\big{(}\sum_{p=0}^{k}q^{p}\begin{bmatrix}k+r- 2\\ p\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(r-2+2(k-p))\big{)}\] \[\quad+\sum_{k=1}^{n+1}\Big{(}\sum_{p=k+1}^{\lfloor\frac{2k+r-2}{ 2}\rfloor}q^{p}(\begin{bmatrix}k+r-2\\ p\end{bmatrix}_{q}-q^{r-1-2(p-k)}\begin{bmatrix}k+r-2\\ p-k-1\end{bmatrix})ch_{\mathfrak{sl}_{2}}V(r-2-2(p-k))\Big{)}\] \[=\sum_{i=-1}^{n}\big{(}\sum_{p=0}^{n-i}q^{p}\begin{bmatrix}p+i+r- 1\\ p\end{bmatrix}_{q}\big{)}ch_{\mathfrak{sl}_{2}}V(r+2i)\] \[+\sum_{i=1}^{\lfloor\frac{r}{2}\rfloor-1}\sum_{p=i+1}^{i+n+1}q^{p }(\begin{bmatrix}p-i+r-2\\ p\end{bmatrix}_{q}-q^{r-1-2i}\begin{bmatrix}p-i+r-2\\ i-1\end{bmatrix}_{q})ch_{\mathfrak{sl}_{2}}V(r-2(i+1))\] \[=\sum_{i=-1}^{n}\begin{bmatrix}n+r\\ n-i\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(r+2i)\] \[+\sum_{i=1}^{\lfloor\frac{r}{2}\rfloor-1}\Big{(}\begin{bmatrix}n+r \\ n+i+1\end{bmatrix}_{q}-\begin{bmatrix}r-1\\ i\end{bmatrix}_{q}-\big{(}\sum_{j=0}^{n}q^{r-i+j}\begin{bmatrix}r-1+j\\ i-1\end{bmatrix}_{q}\big{)}\Big{)}ch_{\mathfrak{sl}_{2}}V(r-2(i+1))\quad( \text{ by \eqref{eq:2.1})}\] \[=\sum_{i=-1}^{n}\begin{bmatrix}n+r\\ n-i\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(r+2i)+\sum_{i=1}^{\lfloor\frac{r}{ 2}\rfloor-1}\Big{(}\begin{bmatrix}n+r\\ n+i+1\end{bmatrix}_{q}-\begin{bmatrix}r+n\\ i-1\end{bmatrix}_{q}\Big{)}ch_{\mathfrak{sl}_{2}}V(r-2(i+1))\quad(\text{ by \eqref{eq:2.1})}.\]
The proof of the theorem in the case when \(r=3\), is similar except for the fact that here we need to use the formula for \(ch_{gr}V(k,1^{k+1})\) as given in Lemma 4.2.3.
For \(m=n+1\), using Corollary 4.1.4, Lemma 4.2.3 and identities (4.8)-(4.10), we have:
\[ch_{gr}W(n+1,n)=\sum_{i=0}^{n}ch_{gr}V(i+1,1^{i})=\sum_{i=0}^{n} \sum_{p=0}^{i}q^{p}\begin{bmatrix}i\\ p\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(1+2(i-p))\] \[=\sum_{j=0}^{n}\sum_{p=0}^{n-j}q^{p}\begin{bmatrix}p+j\\ p\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(1+2j)=\sum_{j=0}^{n}\begin{bmatrix} n+1\\ n-j\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(1+2j).\]
Likewise, for \(m=n+2\), we have,
\[ch_{gr}W(n+2,n)=\sum_{i=0}^{n}ch_{gr}V(i+1,1^{i+1})=\sum_{i=0}^{ n}\sum_{p=0}^{i+1}q^{p}\begin{bmatrix}i+1\\ p\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(2+2(i-p))\] \[=\sum_{j=0}^{n}\sum_{p=0}^{n-j}q^{p}\begin{bmatrix}p+j+1\\ p\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(2+2j)+\sum_{p=1}^{n+1}q^{p}\begin{bmatrix} p\\ p\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(0)\] \[=\sum_{j=0}^{n}\begin{bmatrix}n+2\\ n-j\end{bmatrix}_{q}ch_{\mathfrak{sl}_{2}}V(2+2j)+\big{(}\begin{bmatrix}n+2\\ 1\end{bmatrix}_{q}-1\big{)}ch_{\mathfrak{sl}_{2}}V(0).\]
This completes the proof of Theorem 4.2.1.
## 5. The module \(W_{loc}(n\omega)\otimes W_{loc}(m\omega)\) and Demazure flags
In this section we recall the definition of truncated local Weyl modules and show that the tensor product of two local Weyl modules has a filtration by truncated local Weyl modules. Then, using Proposition 4.0.1, we establish that these tensor product modules have an excellent filtration and give a closed formula for the graded multiplicity of level two Demazure modules in the tensor product of level one Demazure modules. In addition, we show that the tensor product of a level one Demazure module with a level two Demazure module admits a filtration by level 3 Demazure modules.
### Truncated local Weyl Modules
We begin by recalling the definition of truncated local Weyl modules.
Given a pair of integers \((m,N)\), the truncated local Weyl module \(W_{loc}([m],N)\) is a quotient of the local Weyl module \(W_{loc}(m)\) generated by an element \(w_{m,N}\) satisfying the following relations:
\[(x\otimes t^{r})w_{m,N}=0, (h\otimes t^{r})w_{m,N}=m\delta_{r,0}w_{m,N},\qquad(y\otimes 1)^{m +1}w_{m,N}=0, \tag{5.1}\] \[y\otimes t^{s}.w_{m,N}=0,\quad\forall\,s\geq N.\]
The truncated local Weyl modules were studied in [18, Theorem 4.3], where it was shown that if \(m=Nk+r\) with \(0\leq r<N\), then the \(\mathfrak{sl}_{2}[t]\)-module \(W_{loc}([m],N)\) is isomorphic to the fusion product module \(V(k+1)^{*r}*V(k)^{*N-r}\). Hence, by [8, Theorem 5(ii)], it is
known that the module \(W_{loc}([m],N)\) is isomorphic to the CV-module \(V(\xi(m,N))\) where \(\xi(m,N)=((k+1)^{r},k^{N-r})\).
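For instance, for \(m=3\) and \(N=2\) we have \(3=2\cdot 1+1\), so \(k=1\), \(r=1\), and \(W_{loc}([3],2)\cong V(2)*V(1)\cong V(\xi(3,2))\) with \(\xi(3,2)=(2,1)\).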
In particular, given non-negative integers \(a\) and \(b\), since \(2a+b=1\cdot(a+b)+a\), the module \(W_{\rm loc}([2a+b],a+b)\) is isomorphic to \(V(2^{a},1^{b})\) and hence has a Demazure flag of level 2, as per Proposition 4.0.1. The following result gives the character formula for such truncated local Weyl modules in terms of level 1 and level 2 Demazure modules.
**Lemma 5.1.1**.: _Given two non-negative integers \(a,b\), we have,_
_i._ \(ch_{gr}V(2^{a},1^{b})=\sum\limits_{k=0}^{a}(-1)^{k}\begin{bmatrix}a\\ k\end{bmatrix}_{q}q^{k(a+b)-\frac{k(k-1)}{2}}\ ch_{gr}\ W_{loc}(b+2a-2k)\)__
_ii. The truncated local Weyl module \(W_{loc}([2a+b],a+b)\) has a level two Demazure flag and_
\[ch_{gr}V(2^{a},1^{b})=\sum\limits_{k=0}^{\lfloor\frac{b}{2}\rfloor}q^{k(a+ \lceil\frac{b}{2}\rceil)}\begin{bmatrix}\lfloor\frac{b}{2}\rfloor\\ k\end{bmatrix}_{q}\ ch_{gr}D(2,2a+b-2k) \tag{5.2}\]
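For instance, for \(a=0\) and \(b=2\), formula (5.2) specializes to \(ch_{gr}W_{loc}(2)=ch_{gr}V(1^{2})=ch_{gr}D(2,2)+q\,ch_{gr}D(2,0)\); since \(D(2,2)\cong V(2^{1})\) has dimension \(3\) and \(D(2,0)\) is trivial, the dimension count \(3+1=4=\dim W_{loc}(2)\) is consistent.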
Proof.: i. We prove part (i) of the lemma by applying induction on \(a\).
For \(a=1\), from the short exact sequence of \(W_{loc}((b+2)\omega)\) given in Lemma 3.4.1(b), we deduce that
\[ch_{gr}\ V(2,1^{b})=ch_{gr}W_{loc}(b+2)-q^{(b+1)}ch_{gr}W_{loc}(b)\]
Thus, the result holds for \(a=1\).
Assume that the result holds when \(a<l\) and consider the case when \(a=l\).
Once again, from Lemma 3.4.1(b) we deduce that
\[ch_{gr}V(2^{l},1^{b}) =ch_{gr}\ V(2^{l-1},1^{b+2})-q^{(l+b)}ch_{gr}\ V(2^{l-1},1^{b})\] Applying induction hypothesis we have, \[=\sum\limits_{k=0}^{l-1}(-1)^{k}\begin{bmatrix}l-1\\ k\end{bmatrix}_{q}q^{k(l+b+1)-\frac{k(k-1)}{2}}ch_{gr}\ W_{loc}(b+2(l-k))\] \[-q^{l+b}\sum\limits_{k=0}^{l-1}(-1)^{k}\begin{bmatrix}l-1\\ k\end{bmatrix}_{q}q^{k(l+b-1)-\frac{k(k-1)}{2}}ch_{gr}W_{loc}(b+2(l-k-1))\] \[=\sum\limits_{k=0}^{l-1}(-1)^{k}\begin{bmatrix}l-1\\ k\end{bmatrix}_{q}q^{k(l+b+1)-\frac{k(k-1)}{2}}ch_{gr}\ W_{loc}(b+2(l-k))\] \[-\sum\limits_{k=0}^{l-1}(-1)^{k}\begin{bmatrix}l-1\\ k\end{bmatrix}_{q}q^{(k+1)(l+b)-\frac{k(k+1)}{2}}ch_{gr}\ W_{loc}(b+2(l-k-1))\]
Reindexing the indices we get,
\[ch_{gr}V(2^{l},1^{b}) =\sum_{k=0}^{l-1}(-1)^{k}\begin{bmatrix}l-1\\ k\end{bmatrix}_{q}q^{k(l+b+1)-\frac{k(k-1)}{2}}ch_{gr}W_{loc}(b+2(l-k))\] \[+\sum_{k=1}^{l}(-1)^{k}\begin{bmatrix}l-1\\ k-1\end{bmatrix}_{q}q^{k(l+b)-\frac{k(k-1)}{2}}ch_{gr}W_{loc}(b+2(l-k))\] \[=\sum_{k=0}^{l}(-1)^{k}\begin{bmatrix}l\\ k\end{bmatrix}_{q}q^{k(l+b)-\frac{k(k-1)}{2}}ch_{gr}W_{loc}(b+2(l-k)).\]
ii. We prove part(ii) of the lemma by applying induction on \(b\).
First note that for any partition \(\mathbf{n}\) of \(n\in\mathbb{N}\), the multiplicity of a Demazure module \(D(m,r)\) with highest weight \(r\) is zero whenever \(n-r\notin 2\mathbb{Z}_{+}\), and
\[V(2^{a})\cong D(2,2a),\qquad V(2^{a},1)\cong D(2,2a+1).\]
Hence there is nothing to prove for \(b=0\) and \(1\).
Now suppose \(2a+b\in 2\mathbb{Z}+1\) and \(b=2d+1\).
For \(d=1\), using (3.4.1), we have the following short exact sequence,
\[0\to\tau_{a+1}V(2^{a},1)\to V(2^{a},1^{3})\overset{\phi}{\to}V(2^{a+1},1)\to 0\]
Hence,
\[ch_{gr}V(2^{a},1^{3}) =ch_{gr}D(2,2(a+1)+1)+q^{a+d+1}ch_{gr}D(2,2a+1)\] \[=\sum_{k=0}^{1}q^{k(a+d+1)}\begin{bmatrix}d\\ k\end{bmatrix}_{q}ch_{gr}D(2,2a+3-2k)\]
Assume that the result holds for all \(d<l\) and consider the module \(V(2^{a},1^{2l+1})\).
Once again, using (3.4.1), we have the following short exact sequence,
\[0\to\tau_{a+2l}V(2^{a},1^{2l-1})\to V(2^{a},1^{2l+1})\overset{\phi}{\to}V(2^{a+1},1^{2l-1})\to 0\]
\[ch_{gr}V(2^{a},1^{2l+1}) =ch_{gr}V(2^{a+1},1^{2l-1})+q^{a+2l}ch_{gr}V(2^{a},1^{2l-1})\] \[=\sum_{k=0}^{l-1}q^{k(a+l+1)}\begin{bmatrix}l-1\\ k\end{bmatrix}_{q}ch_{gr}D(2,1+2(a+l-k))\quad\text{(By induction hypothesis)}\] \[+\sum_{r=0}^{l-1}q^{r(a+l)+a+2l}\begin{bmatrix}l-1\\ r\end{bmatrix}_{q}ch_{gr}D(2,1+2(a+l-r-1))\] \[=\sum_{k=0}^{l}q^{k(a+\lceil\frac{b}{2}\rceil)}\begin{bmatrix}\lfloor\frac{b}{2}\rfloor\\ k\end{bmatrix}_{q}ch_{gr}D(2,2a+b-2k),\]
where we obtain the last equality by using the q-combinatorial identity (4.9). This proves part (ii) when \(2a+b\) is an odd integer. The proof is similar in the case when \(2a+b\) is an even integer.
### Character formula of \(W_{loc}(m)\otimes W_{loc}(n)\)
The following result on the character formula for the tensor product of two local Weyl modules of \(\mathfrak{sl}_{2}[t]\) was given in [5]. Since this result plays an important role in the proof of the main result of this section, Theorem 5.4.1, for the sake of completeness, we include its independent proof here.
**Lemma 5.2.1**.: _Given \(m,n\in\mathbb{Z}_{+}\),_
\[ch_{gr}W_{loc}(n)\otimes W_{loc}(m)=\sum\limits_{i=0}^{\min\{n,m\}}\begin{bmatrix} n\\ i\end{bmatrix}_{q}\begin{bmatrix}m\\ i\end{bmatrix}_{q}(1-q)\dots(1-q^{i})ch_{gr}W_{loc}(n+m-2i). \tag{5.3}\]
Proof.: The result is a consequence of [1, Proposition 3.2, Proposition 5.13].
**Remark 5.2.2**.: In the Appendix we give an alternate proof of the lemma using the Pieri rule.
**Lemma 5.2.3**.: _For \(m,n\in\mathbb{Z}_{+}\) with \(n\geq m\),_
\[ch_{gr}(W_{loc}(m)\otimes W_{loc}(n))=\sum\limits_{k=0}^{m}\begin{bmatrix}m\\ k\end{bmatrix}_{q}ch_{gr}V(2^{m-k},1^{n-m}). \tag{5.4}\]
Proof.: Given (5.3), to prove (5.4) it suffices to show that
\[\sum\limits_{i=0}^{m}\begin{bmatrix}n\\ i\end{bmatrix}_{q}\begin{bmatrix}m\\ i\end{bmatrix}_{q}(1-q)\dots(1-q^{i})\,ch_{gr}W_{loc}(n+m-2i)=\sum\limits_{k=0}^{m}\begin{bmatrix}m\\ k\end{bmatrix}_{q}ch_{gr}V(2^{m-k},1^{n-m}),\]
or equivalently:
\[\begin{array}{l}\sum\limits_{i=0}^{m}\begin{bmatrix}n\\ i\end{bmatrix}_{q}\begin{bmatrix}m\\ i\end{bmatrix}_{q}(1-q)\dots(1-q^{i})\,ch_{gr}W_{loc}(n+m-2i)\\ =\sum\limits_{k=0}^{m}\sum\limits_{i=0}^{k}(-1)^{i}\begin{bmatrix}m\\ k\end{bmatrix}_{q}\begin{bmatrix}k\\ i\end{bmatrix}_{q}q^{i(k+n-m)-\frac{i(i-1)}{2}}\,ch_{gr}W_{loc}(n-m+2k-2i)\end{array} \tag{5.5}\]
the equivalence being a consequence of Lemma 5.1.1(i). Rearranging the terms on the right hand side of (5.5), we have:
\[\sum_{k=0}^{m}\sum_{i=0}^{k}(-1)^{i}\begin{bmatrix}m\\ k\end{bmatrix}_{q}\begin{bmatrix}k\\ i\end{bmatrix}_{q}q^{i(k+n-m)-\frac{i(i-1)}{2}}\,ch_{gr}W_{loc}(n-m+2(k-i))\] \[=\sum_{i=0}^{m}\sum_{k=i}^{m}(-1)^{i}\begin{bmatrix}m\\ k\end{bmatrix}_{q}\begin{bmatrix}k\\ i\end{bmatrix}_{q}q^{i(k+n-m)-\frac{i(i-1)}{2}}\,ch_{gr}W_{loc}(n-m+2(k-i))\] \[=\sum_{i=0}^{m}\sum_{k=0}^{i}(-1)^{k}\begin{bmatrix}m\\ m-i+k\end{bmatrix}_{q}\begin{bmatrix}m-i+k\\ k\end{bmatrix}_{q}q^{k(n-i+k)-\frac{k(k-1)}{2}}\,ch_{gr}W_{loc}(n+m-2i)\] \[=\sum_{i=0}^{m}\begin{bmatrix}m\\ i\end{bmatrix}_{q}(\sum_{k=0}^{i}(-1)^{k}\begin{bmatrix}i\\ k\end{bmatrix}_{q}q^{k(n-i+k)-\frac{k(k-1)}{2}}\,)ch_{gr}W_{loc}(n+m-2i)\]
To complete the proof of the lemma it suffices to show
\[\sum_{k=0}^{i}(-1)^{k}\begin{bmatrix}i\\ k\end{bmatrix}_{q}q^{k(n-i+k)-\frac{k(k-1)}{2}}=\begin{bmatrix}n\\ i\end{bmatrix}_{q}(1-q)\ldots(1-q^{i}). \tag{5.6}\]
We prove (5.6) by using the following combinatorial identity:
\[\sum_{k=0}^{i}(-1)^{i-k}\begin{bmatrix}i\\ k\end{bmatrix}_{q}q^{\binom{i-k}{2}}x^{k}=(x-1)(x-q)\cdots(x-q^{i-1}) \tag{5.7}\]
Putting \(x=q^{n}\) in (5.7), we get
\[\sum_{k=0}^{i}(-1)^{i-k}\begin{bmatrix}i\\ k\end{bmatrix}_{q}q^{\binom{i-k}{2}}q^{nk} =(q^{n}-1)(q^{n}-q)\cdots(q^{n}-q^{i-1})\] \[\sum_{k=0}^{i}(-1)^{i-k}\begin{bmatrix}i\\ k\end{bmatrix}_{q}q^{(i-k)(i-k-1)/2}q^{nk} =(-1)^{i}q^{i(i-1)/2}(1-q^{n})(1-q^{n-1})\cdots(1-q^{n-i+1})\] \[\sum_{k=0}^{i}(-1)^{k}\begin{bmatrix}i\\ k\end{bmatrix}_{q}q^{k(n-i+k)-k(k-1)/2} =(1-q^{n})\cdots(1-q^{n-i+1})=\begin{bmatrix}n\\ i\end{bmatrix}_{q}(1-q)\ldots(1-q^{i}).\]
This establishes (5.6) and hence the lemma.
### Ordered basis of the graded weight spaces of \(W_{loc}(m)\)
We have seen in Lemma 3.4.1(b) that, for \(m\in\mathbb{Z}_{+}\), \(W_{loc}(m)\) has a basis \(\mathbb{B}(1^{m})\) which is indexed by the set \(J(1^{m})\cup\{\emptyset\}\). Define a function \(|.|:J(1^{m})\cup\{\emptyset\}\to\mathbb{Z}\) such that
\[|\emptyset|=0,\qquad\quad|\mathbf{i}|:=\sum_{k=1}^{m}i_{k},\quad\forall\, \mathbf{i}=(i_{1},\cdots,i_{m})\in J(1^{m}).\]
Defining in \(U(\mathfrak{n}^{-}[t])\) the elements
\[y(0,\emptyset)=1,\qquad\quad y(|\mathbf{i}|,\mathbf{i}):=(y_{0})^{i_{1}}(y_{1} )^{i_{2}}\cdots(y_{m-1})^{i_{m}},\quad\forall\,\mathbf{i}=(i_{1},\cdots,i_{m}) \in J(1^{m}),\]
we see that \(\mathbb{B}(1^{m})=\{y(|{\bf i}|,{\bf i})w_{m}:{\bf i}\in J(1^{m})\cup\{\emptyset\}\}.\) We define an ordering on \(J(1^{m})\cup\{\emptyset\}\) as follows. By fiat, \(\emptyset>{\bf i}\) for all \({\bf i}\in J(1^{m})\) and, given \({\bf i},{\bf j}\in J(1^{m})\), we say \({\bf i}>{\bf j}\) if either \(|{\bf i}|<|{\bf j}|\), or \(|{\bf i}|=|{\bf j}|\) and there exists \(1\leq k\leq m\) such that \(i_{k}>j_{k}\) and \(i_{s}=j_{s}\) for \(k+1\leq s\leq m\). This clearly induces an ordering on \(\mathbb{B}(1^{m})\), thereby making \(\mathbb{B}(1^{m})\) an ordered basis of \(W_{loc}(m)\).
On the other hand it was proved in [10] that :
**Proposition 5.3.1**.: _Given a positive integer \(m\), if_
\[F(m)=\{(k,\underline{s}):k\in\mathbb{N},\underline{s}=(s_{1},\cdots,s_{k})\in \mathbb{Z}^{k},0\leq s_{i}\leq m-k\text{ for }1\leq i\leq k\}.\]
_Then the set \(\mathbb{B}(m)=\{y(k,\underline{s})w_{m}:(k,\underline{s})\in F(m)\cup\{(0, \emptyset)\}\}\) is a basis of \(W_{loc}(m)\), where \(y(k,\underline{s})=(y\otimes t^{s_{1}})\cdots(y\otimes t^{s_{k}})\) and \(y(0,\emptyset)w_{m}=w_{m}\)._
Thus, using standard \(q\)-binomial theory, it follows from Proposition 5.3.1 that the number of \(l\)-graded elements of weight \(m\omega-k\alpha\) in \(\mathbb{B}(m)\), and hence in \(\mathbb{B}(1^{m})\), is equal to the coefficient of \(q^{l}\) in the polynomial \(\begin{bmatrix}m\\ k\end{bmatrix}_{q}\).
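For instance, for \(m=3\) and \(k=1\), \(\begin{bmatrix}3\\ 1\end{bmatrix}_{q}=1+q+q^{2}\), corresponding to the three basis vectors \(y_{0}w_{3},\,y_{1}w_{3},\,y_{2}w_{3}\) of weight \(3\omega-\alpha\) in \(\mathbb{B}(3)\), which sit in grades \(0,1,2\) respectively.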
### Filtration of \(W_{loc}(n)\otimes W_{loc}(m)\)
The following is the main result of this section.
**Theorem 5.4.1**.: _Let \(n,m\in\mathbb{N}\) and \(n\geq m\). The \(\mathfrak{sl}_{2}[t]\)-module \(W_{loc}(n)\otimes W_{loc}(m)\) admits a filtration whose successive quotients are isomorphic to truncated local Weyl modules_
\[\tau_{k_{r}}W_{loc}([m+n-2r],n-r),\quad 0\leq r\leq m,\,0\leq k_{r}\leq r(m-r).\]
Proof.: Given \(n,m\in\mathbb{N}\) with \(n\geq m\), let \(w_{n}\) be the generator of \(W_{loc}(n)\) and \(\mathbb{B}(1^{m})\) the ordered basis of \(W_{loc}(m)\) as described in Section 5.3. For each element \({\bf i}\in J(1^{m})\cup\{\emptyset\}\), set
\[v_{n,m}(k,{\bf i})=w_{n}\otimes y(k,{\bf i})w_{m},\qquad\text{ where }k=|{\bf i}|,\]
\[\mathcal{B}(m,n)^{{\bf i}\leq}=\{v_{n,m}(l,{\bf j}):{\bf j}\in J(1^{m})\cup\{ \emptyset\},{\bf j}\geq{\bf i}\}\]
and let \(T(k,{\bf i})\) be the submodule of \(W_{loc}(n)\otimes W_{loc}(m)\) generated by \(\mathcal{B}(m,n)^{{\bf i}\leq}\).
Let \({\bf e}_{i}[m]\) denote the \(m\)-tuple with \(1\) in the \(i^{th}\)-position and zero elsewhere. By Lemma 3.4.2,
\[T(m,m{\bf e}_{1}[m])=W_{loc}(n)\otimes W_{loc}(m),\]
and from the definition of the submodules \(T(|{\bf i}|,{\bf i})\) it is clear that
\[T(m,m{\bf e}_{1}[m])\supset T(m-1,(m-1){\bf e}_{1}[m])\supset\cdots\supset T (1,{\bf e}_{m-1}[m])\supset T(1,{\bf e}_{m}[m])\supset T(0,\emptyset)\supset 0,\]
is a descending chain of submodules of \(W_{loc}(n)\otimes W_{loc}(m)\).
Additionally, since \(x.y(k,\mathbf{i}).w_{m}\) lies in the subspace of \(W_{loc}(m)\) spanned by the vectors \(\{y(k-1,\mathbf{j})w_{m}:\mathbf{j}\in J(1^{m})\}\) and for \(r>0\), \(\mathbf{i}\in J(1^{m})\) with \(|\mathbf{i}|=k\),
\[(h\otimes t^{r})y(k,\mathbf{i}).w_{m}=\sum\limits_{\{1\leq j\leq m-r\,:\,0<i_{ j}<m-r\}}-2y(k,\mathbf{i}+(\mathbf{e}_{i_{j}+r}[m]-\mathbf{e}_{i_{j}}[m])).w_{m},\]
using the fact that \(x.w_{n}=0\), and \(h\otimes t^{r}.w_{n}=0\), for \(r>0\), it follows that
\[\begin{array}{ll}x.v_{n,m}(1,\mathbf{i})=0,&x.v_{n,m}(k,\mathbf{i})\in T(k- 1,(k-1)\mathbf{e}_{1}[m])\quad\text{ when }k>1,\\ \\ &(h\otimes t^{r})v_{n,m}(k,\mathbf{i})\in\sum\limits_{\mathbf{j}>\mathbf{i}}T (k,\mathbf{i}).\end{array} \tag{5.8}\]
As \(J(1^{m})\), and hence \(\mathbb{B}(1^{m})\) is totally ordered, given \(\mathbf{i}\in J(1^{m})\) with \(|\mathbf{i}|=k\) there exists a unique pair \((\hat{k},\hat{\mathbf{i}})\in\mathbb{Z}\times J(1^{m})\) such that \(y(\hat{k},\hat{\mathbf{i}})>y(k,\mathbf{i})\) and if \(y(l,\mathbf{j})\in\mathbb{Z}\times J(1^{m})\) is such that \(y(l,\mathbf{j})>y(k,\mathbf{i})\), then \(y(l,\mathbf{j})\geq y(\hat{k},\hat{\mathbf{i}}).\) In particular, as \(y(k-1,(k-1)\mathbf{e}_{1}[m])>y(k,\mathbf{i})\), in view of (5.8) we see that, \(\dfrac{T(k,\mathbf{i})}{T(\hat{k},\hat{\mathbf{i}})}\) is a highest weight module generated by the image of \(v_{n,m}(k,\mathbf{i})\) under the natural surjection from \(T(k,\mathbf{i})\) onto \(\dfrac{T(k,\mathbf{i})}{T(\hat{k},\hat{\mathbf{i}})}.\)
Since \(n\geq m\), from the defining relations of a local Weyl module we have \(y_{n}v_{n,m}(k,\mathbf{i})=0\). Further,
\[(h\otimes 1)v_{n,m}(k,\mathbf{i})=h.w_{n}\otimes y(k,\mathbf{i})w_{m}+w_{n} \otimes h.y(k,\mathbf{i})w_{m}=(n+m-2k)(w_{n}\otimes y(k,\mathbf{i})w_{m}).\]
Hence, for all \(\mathbf{i}\in J(1^{m})\) with \(|\mathbf{i}|=k\), the module \(\dfrac{T(k,\mathbf{i})}{T(\hat{k},\hat{\mathbf{i}})}\) is a quotient of the truncated Weyl module \(\tau_{g_{\mathbf{i}}}W_{loc}([m+n-2k],n)\), where \(g_{\mathbf{i}}=\sum\limits_{s=1}^{m}(s-1)i_{s}\), \(T(0,\emptyset)\) is a quotient of \(W_{loc}([n+m],n)\) and
\[ch_{gr}W_{loc}(n)\otimes W_{loc}(m)=ch_{gr}T(0,\emptyset)+\sum\limits_{ \mathbf{i}\in J(1^{m})}q^{g_{\mathbf{i}}}\,ch_{gr}\dfrac{T(|\mathbf{i}|, \mathbf{i})}{T(|\hat{\mathbf{i}}|,\hat{\mathbf{i}})}. \tag{5.9}\]
Now, comparing with the character formulas (5.4) and (5.9), we see that \(T(0,\emptyset)\) is isomorphic to \(V(2^{m},1^{n-m})\), which by Section 5.1 is isomorphic to \(W_{loc}([m+n],n)\).
On the other hand, for \(\mathbf{i}\in J(1^{m})\) with \(|\mathbf{i}|=k\) and \(g_{\mathbf{i}}=l\), as \(y(|\mathbf{i}|,\mathbf{i})w_{m}\) is an \(l\)-graded vector of weight \(m\omega-k\alpha\), the quotient module \(\dfrac{T(|\mathbf{i}|,\mathbf{i})}{T(|\hat{\mathbf{i}}|,\hat{\mathbf{i}})}\) is a quotient of \(\tau_{l}W_{loc}([m+n-2k],n)\). By Section 5.3, the number of \(l\)-graded vectors of weight \(m\omega-k\alpha\) in \(\mathbb{B}(1^{m})\) is equal to the coefficient of \(q^{l}\) in the polynomial \(\begin{bmatrix}m\\ k\end{bmatrix}_{q}\). Hence, comparing the character formulas (5.4) and (5.9) and the grades of the highest weight generating vectors of the modules \(\dfrac{T(|\mathbf{i}|,\mathbf{i})}{T(|\hat{\mathbf{i}}|,\hat{\mathbf{i}})}\),
we see that for any \(0\leq k\leq m\), \(\frac{T(k,\mathbf{i})}{T(\hat{k},\hat{\mathbf{i}})}\) is isomorphic to the CV module \(V(2^{m-k},1^{n-m})\), which, by Section 5.1, is a quotient of \(W_{loc}([n+m-2k],n)\) and is isomorphic to the truncated Weyl module \(W_{loc}([n+m-2k],n-k)\). This completes the proof of the theorem.
#### 5.4.1.
For \(m,n,s\in\mathbb{N}\), such that \(m+n-2s\geq 0\), define a polynomial in indeterminate \(q\) by:
\[[W_{loc}(n)\otimes W_{loc}(m):D(2,m+n-2s)]_{q}=\sum_{p\geq 0}[W_{loc}(n)\otimes W _{loc}(m):\tau_{p}D(2,m+n-2s)]q^{p},\]
where \([W_{loc}(n)\otimes W_{loc}(m):\tau_{p}D(2,m+n-2s)]\) is the multiplicity of \(\tau_{p}D(2,m+n-2s)\) in \(W_{loc}(n)\otimes W_{loc}(m)\). The polynomial \([W_{loc}(n)\otimes W_{loc}(m):D(2,m+n-2s)]_{q}\) is called the graded multiplicity of \(D(2,m+n-2s)\) in a level 2 Demazure flag of \(W_{loc}(n)\otimes W_{loc}(m)\).
**Corollary 5.4.2**.: _Given two positive integers \(m,n\), the \(\mathfrak{sl}_{2}[t]\)-module \(W_{loc}(n)\otimes W_{loc}(m)\) has a level 2 Demazure flag and_
\[[W_{loc}(n)\otimes W_{loc}(m):D(2,m+n-2s)]_{q}\] \[=\left\{\begin{array}{ll}\sum\limits_{k=0}^{\min\{s,m\}}q^{(s-k)(m-k+\lceil\frac{n-m}{2}\rceil)}\begin{bmatrix}m\\ k\end{bmatrix}_{q}\begin{bmatrix}\lfloor\frac{n-m}{2}\rfloor\\ s-k\end{bmatrix}_{q},&0\leq s\leq\lfloor\frac{n-m}{2}\rfloor,\\ \sum\limits_{k=0}^{\min\{m-j,\lfloor\frac{n-m}{2}\rfloor\}}q^{(\lfloor\frac{n-m}{2}\rfloor-k)(m-k-j+\lceil\frac{n-m}{2}\rceil)}\begin{bmatrix}m\\ k+j\end{bmatrix}_{q}\begin{bmatrix}\lfloor\frac{n-m}{2}\rfloor\\ \lfloor\frac{n-m}{2}\rfloor-k\end{bmatrix}_{q},&s=j+\lfloor\frac{n-m}{2}\rfloor,\,1\leq j\leq m,\end{array}\right.\]
_for \(m+n-2s\geq 0\)._
Proof.: By Theorem 5.4.1, the tensor product module \(W_{loc}(n)\otimes W_{loc}(m)\) has a filtration by submodules whose successive quotients are isomorphic to translates of modules of the form \(V(2^{a},1^{b})\). Hence, by Proposition 4.0.1, \(W_{loc}(n\omega)\otimes W_{loc}(m\omega)\) has a level 2 Demazure flag. Now using Lemma 5.2.3 and Lemma 5.1.1, we see that
\[ch_{gr}W_{loc}(n)\otimes W_{loc}(m) =\sum\limits_{k=0}^{m}\begin{bmatrix}m\\ k\end{bmatrix}_{q}ch_{gr}\,V(2^{m-k},1^{n-m})\] \[=\sum\limits_{k=0}^{m}\begin{bmatrix}m\\ k\end{bmatrix}_{q}\left(\sum\limits_{r=0}^{\lfloor\frac{n-m}{2}\rfloor}q^{r(m- k+\lceil\frac{n-m}{2}\rceil)}\begin{bmatrix}\lfloor\frac{n-m}{2}\rfloor\\ r\end{bmatrix}_{q}ch_{gr}D(2,n+m-2(k+r))\right).\]
Hence, we have
\[[W_{loc}(n\omega)\otimes W_{loc}(m\omega):D(2,m+n-2s)]_{q} =\sum\limits_{k=0}^{\min\{s,m\}}q^{(s-k)(m-k+\lceil\frac{n-m}{2} \rceil)}\begin{bmatrix}m\\ k\end{bmatrix}\begin{bmatrix}\lfloor\frac{n-m}{2}\rfloor\\ s-k\end{bmatrix}_{q},\]
if \(s\in\mathbb{N}\) is such that \(0\leq s\leq\lfloor\frac{n-m}{2}\rfloor\) and
\[[W_{loc}(n\omega)\otimes W_{loc}(m\omega):D(2,m+n-2s)]_{q}\] \[=\sum_{k=0}^{\min\{m-j,\lfloor\frac{n-m}{2}\rfloor\}}q^{(\lfloor \frac{n-m}{2}\rfloor-k)(m-k-j+\lceil\frac{n-m}{2}\rceil)}\begin{bmatrix}m\\ k+j\end{bmatrix}\begin{bmatrix}\lfloor\frac{n-m}{2}\rfloor\\ \lfloor\frac{n-m}{2}\rfloor-k\end{bmatrix}_{q},\]
when \(s=j+\lfloor\frac{n-m}{2}\rfloor\) for some \(1\leq j\leq m\).
In [5] the graded character formula for \(D(2,2n+r)\otimes W_{loc}(m)\) has been obtained when \(n\geq m\). We can extend the methods employed in proving Theorem 5.4.1 to show that the tensor product of a level \(2\) Demazure module with a local Weyl module has a decreasing chain of submodules, with each successive quotient isomorphic to a CV-module of the form \(V(3^{a},2^{b},1^{c})\). Consequently, it follows from Proposition 4.0.1 that the module \(D(2,2n+r)\otimes W_{\mathrm{loc}}(m)\) admits a level \(3\)-Demazure flag.
## 6. Appendix
In the appendix we give a proof of Lemma 5.2.1 using Pieri formulas ([19, Chapter VI]).
Let \(t,q\) be independent indeterminates and \(\mathbb{C}(q,t)\) be the field of rational functions in \(q\) and \(t\). Let \(\{x_{i}:1\leq i\leq n+1\}\) be a set of indeterminates and \(\mathbb{C}(q,t)[x_{1},\cdots,x_{n+1}]\) be the ring of polynomials in \(\{x_{j}:1\leq j\leq n+1\}\) with coefficients in \(\mathbb{C}(q,t)\). We write \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{n+1})\) and work in the subring of symmetric polynomials in \(\mathbb{C}(q,t)[x_{1},x_{2},\ldots,x_{n+1}]\).
For \(\lambda\in P^{+}\), a family of orthogonal symmetric polynomials \(P_{\lambda}(\mathbf{x};q,t)\) was introduced in [19], and it was shown in [21] that the character of a level one Demazure module of highest weight \(\lambda\) is given by the specialized Macdonald polynomial \(P_{\lambda}(\mathbf{x};q,0)\). On the other hand it was proved in [12] that a local Weyl module for a simply-laced current Lie algebra is isomorphic to a Demazure module of level one. Thus in case of \(\mathfrak{sl}_{2}[t]\), we see that for \(m\in\mathbb{Z}_{+}\), \(ch_{gr}W_{loc}(m)=P_{(m,0)}(\mathbf{x};q,0)\). We use this fact to give an alternate proof of Lemma 5.2.1.
Set
\[(q;q)_{n} =(1-q)(1-q^{2})\ldots(1-q^{n})\] \[(t;q)_{n} =(1-t)(1-tq)\ldots(1-tq^{n-1})\quad\text{for }|q|<1\] \[(t;q)_{\infty} =\prod_{i=0}^{\infty}(1-tq^{i})\]
By [19, Chapter VI, Equation 4.9],
\[P_{(m)}(\mathbf{x};q,t)=\frac{(q;q)_{m}}{(t;q)_{m}}g_{m}(\mathbf{x},q,t) \tag{6.1}\]
where \(g_{m}(\mathbf{x};q,t)\) denotes the coefficient of \(y^{m}\) in the power-series expansion of the infinite product
\[\prod_{i\geq 1}\frac{(tx_{i}y;q)_{\infty}}{(x_{i}y;q)_{\infty}}=\sum_{m\geq 0}g_ {m}(\mathbf{x};q,t)y^{m}\]
Putting \(t=0\) in (6.1), we get
\[P_{(m)}(\mathbf{x};q,0)=(q;q)_{m}g_{m}(\mathbf{x},q,0) \tag{6.2}\]
We need the following result on Pieri formula for our proof of Lemma 5.2.1.
**Lemma 6.0.1**.: _[_19_, Theorem 6.24]_ _Let \(m,n\in\mathbb{Z}_{+}\), with \(n\geq m\). Given a partition \(\lambda\) of \(m+n\) containing the partition \((n)\) and having at most two parts, we have,_
\[P_{n}(\mathbf{x};q,0)g_{m}(\mathbf{x};q,0)=\sum_{\lambda-(n)=(m)}\phi_{\lambda/(n)}P_ {\lambda}(\mathbf{x};q,0)\]
_and the coefficients are given by_
\[\phi_{\lambda/(n)}=\prod_{s\in C_{\lambda/(n)}}\frac{b_{\lambda}(s)}{b_{(n)}( s)}\]
_where \(C_{\lambda/(n)}\) denote the union of columns that intersect \(\lambda-(n)\), and_
\[b_{\lambda}(s)=b_{\lambda}(s;q,t)=\begin{cases}\frac{1-q^{a_{\lambda}(s)}t^{l_{\lambda}(s)+1}}{1-q^{a_{\lambda}(s)+1}t^{l_{\lambda}(s)}}&\quad\text{if $s\in\lambda$},\\ 1&\quad\text{otherwise},\end{cases}\]
_where \(a_{\lambda}(s)\) and \(l_{\lambda}(s)\) denote, respectively, the arm and leg lengths of the box \(s\) in \(\lambda\)._
**Proof of Lemma 5.2.1.** Since for \(m\in\mathbb{Z}_{+}\), \(ch_{gr}W_{loc}(m)=P_{(m,0)}(\mathbf{x};q,0)\), we have
\[ch_{gr}(W_{loc}(n)\otimes W_{loc}(m)) =ch_{gr}W_{loc}(n)ch_{gr}W_{loc}(m)\] \[=P_{(n,0)}(\mathbf{x};q,0)P_{(m,0)}(\mathbf{x};q,0)\] \[=(q;q)_{m}P_{(n,0)}(\mathbf{x};q,0)g_{m}(\mathbf{x};q,0)\quad\text{(Using \ref{eq:P1})}\] \[=(q;q)_{m}\sum_{\lambda-(n)=(m)}\phi_{\lambda/(n)}(q,0)P_{\lambda }(\mathbf{x};q,0)\] \[=(q;q)_{m}\sum_{\lambda-(n)=(m)}\prod_{s\in C_{\lambda/(n)}}\frac {b_{\lambda}(s;q,0)}{b_{(n)}(s;q,0)}P_{\lambda}(\mathbf{x};q,0)\]
\[=(q;q)_{m}(\frac{1}{(1-q)(1-q^{2})\dots(1-q^{m})}P_{(n+m,0)}(\mathbf{x};q,0)\] \[+\frac{1-q^{n}}{(1-q)(1-q^{2})\dots(1-q^{m-1})(1-q)}P_{(n+m-1,1)}( \mathbf{x};q,0)\] \[+\frac{(1-q^{n})(1-q^{n-1})}{(1-q)(1-q^{2})\dots(1-q^{m-2})(1-q)(1- q^{2})}P_{(n+m-2,2)}(\mathbf{x};q,0)\] \[+\dots+\frac{(1-q^{n})(1-q^{n-1})\dots(1-q^{n-m+1})}{(1-q)(1-q^{2 })\dots(1-q^{m})}P_{(n-m,m)}(\mathbf{x};q,0))\] \[=P_{(n+m,0)}(\mathbf{x};q,0)+\frac{(1-q^{n})(1-q^{m})}{(1-q)}P_{(n+m-1,1)}(\mathbf{x};q,0)\] \[+\frac{(1-q^{n})(1-q^{n-1})(1-q^{m})(1-q^{m-1})}{(1-q)(1-q^{2})} P_{(n+m-2,2)}(\mathbf{x};q,0)\] \[+\dots+\frac{(1-q^{n})(1-q^{n-1})\dots(1-q^{n-m+1})(1-q)(1-q^{2}) \dots(1-q^{m})}{(1-q)(1-q^{2})\dots(1-q^{m})}\] \[P_{(n-m,m)}(\mathbf{x};q,0)\]
Hence,
\[ch_{gr}(W_{loc}(n)\otimes W_{loc}(m)) =\sum_{k=0}^{m}\begin{bmatrix}n\\ k\end{bmatrix}_{q}\begin{bmatrix}m\\ k\end{bmatrix}_{q}(1-q)(1-q^{2})\dots(1-q^{k})P_{(n+m-k,k)}(\mathbf{x};q,0)\] \[=\sum_{k=0}^{m}\begin{bmatrix}n\\ k\end{bmatrix}_{q}\begin{bmatrix}m\\ k\end{bmatrix}_{q}(1-q)\dots(1-q^{k})ch_{gr}W_{loc}(n+m-2k).\qed\]
## Acknowledgements
The first author would like to thank Professor S.Vishwanath and Professor Vyjayanthi Chari for useful discussions. The first author is grateful to IISER Mohali for PhD fellowship and support.
|
2309.04357 | SSIG: A Visually-Guided Graph Edit Distance for Floor Plan Similarity | We propose a simple yet effective metric that measures structural similarity
between visual instances of architectural floor plans, without the need for
learning. Qualitatively, our experiments show that the retrieval results are
similar to deeply learned methods. Effectively comparing instances of floor
plan data is paramount to the success of machine understanding of floor plan
data, including the assessment of floor plan generative models and floor plan
recommendation systems. Comparing visual floor plan images goes beyond a sole
pixel-wise visual examination and is crucially about similarities and
differences in the shapes and relations between subdivisions that compose the
layout. Currently, deep metric learning approaches are used to learn a
pair-wise vector representation space that closely mimics the structural
similarity, in which the models are trained on similarity labels that are
obtained by Intersection-over-Union (IoU). To compensate for the lack of
structural awareness in IoU, graph-based approaches such as Graph Matching
Networks (GMNs) are used, which require pairwise inference for comparing data
instances, making GMNs less practical for retrieval applications. In this
paper, an effective evaluation metric for judging the structural similarity of
floor plans, coined SSIG (Structural Similarity by IoU and GED), is proposed
based on both image and graph distances. In addition, an efficient algorithm is
developed that uses SSIG to rank a large-scale floor plan database. Code will
be openly available. | Casper van Engelenburg, Seyran Khademi, Jan van Gemert | 2023-09-08T14:28:28Z | http://arxiv.org/abs/2309.04357v1 | # SSIG: A Visually-Guided Graph Edit Distance for Floor Plan Similarity
###### Abstract
We propose a simple yet effective metric that measures structural similarity between visual instances of architectural floor plans, without the need for learning. Qualitatively, our experiments show that the retrieval results are similar to deeply learned methods. Effectively comparing instances of floor plan data is paramount to the success of machine understanding of floor plan data, including the assessment of floor plan generative models and floor plan recommendation systems. Comparing visual floor plan images goes beyond a sole pixel-wise visual examination and is crucially about similarities and differences in the shapes and relations between subdivisions that compose the layout. Currently, deep metric learning approaches are used to learn a pair-wise vector representation space that closely mimics the structural similarity, in which the models are trained on similarity labels that are obtained by Intersection-over-Union (IoU). To compensate for the lack of structural awareness in IoU, graph-based approaches such as Graph Matching Networks (GMNs) are used, which require pairwise inference for comparing data instances, making GMNs less practical for retrieval applications. In this paper, an effective evaluation metric for judging the structural similarity of floor plans, coined **SSIG** (**S**tructural **S**imilarity by IoU and GED), is proposed based on both image and graph distances. In addition, an efficient algorithm is developed that uses SSIG to rank a large-scale floor plan database. Code will be openly available.
## 1 Introduction
Floor plans, as one of the most celebrated media for communicating and thinking in architectural design, are horizontal orthographic projections of a particular building's floor. Floor plans are simple but powerful representations of space, conveying a richness of information about the compositional structure of buildings - the shapes, proportions, and relations between the building's individual subdivisions. The compositional structure of a building is a key factor in the architectural quality.
The multi-modal nature of floor plans, which can be treated as pictorial (images), geometrical (sets of geometric entities), or topological (graphs) entities, allows different data modeling and machine learning frameworks to be leveraged for digital representation and machine understanding in the domain of architectural design. In particular, floor plan synthesis, such as floor plan generation [9, 10, 18] or style transfer [1], has often been explored. Another major research line is that of architectural element recognition, in which the goal is to automatically learn to extract geometrical features, such as walls or furniture, from floor plan images [7, 3, 14].
Despite the developments in generative models for floor plans, there have been few valuable attempts at developing data-driven methods to _reason_ about floor plans, such as floor plan retrieval, which requires effectively comparing data instances by measuring similarity. As found by [13], structural similarity judgment for floor plans is a challenging task - it is multi-faceted and concerns the shapes and proportions, visual features, and relations between the floor plan's subdivisions. Generally speaking, comparing floor plans is about finding commonalities in the underlying structure, hence referred to as _structural similarity_.
The most effective methods for measuring structural similarity between floor plans rely on deeply learned vector representations, with the goal of making the feature vector space reflect structural similarity: the distance between feature vectors is small for structurally similar floor plans and vice versa. To train such models, [8] propose a weakly supervised deep metric learning setup in which similarity labels are obtained through Intersection-over-Union (IoU). To compensate for the lack of structural awareness in IoU, [13, 4] use Graph Matching Networks (GMNs) [6] that naturally embed topological information in the feature vectors. Besides the fact that GMNs hinder practical usability [22], we believe that simpler frameworks could learn similarly effective vector representations when the objective is better aligned with structural similarity in the first place. In our work, we develop an improved objective and opt for
simplicity. We start by investigating the merits and bottlenecks of well-known image- and graph-based similarity metrics.
The _Intersection-over-Union_ (IoU) is often used as an evaluation metric to measure structural similarity _e.g_. to find similar or matching pairs for deep metric learning models [13, 4], to evaluate the accuracy of structural reasoning algorithms [7], or to evaluate the output diversity of generative models [9]. In this paper, we show how IoU fails to robustly measure structural similarity because 1) IoU is overly sensitive to geometric perturbations in the image representation of a floorplan and 2) IoU is often unable to identify important differences in the connectivity between space subdivisions such as doors, walls, openings.
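As a concrete reference point, the following is a minimal sketch (not the exact evaluation code of the works cited above) of a class-wise IoU between two semantic floor plan images; treating label 0 as background and averaging over the remaining classes are our own assumptions.

```python
import numpy as np

def mean_iou(x1: np.ndarray, x2: np.ndarray, num_classes: int) -> float:
    """Mean class-wise IoU between two semantic images (integer label maps).

    Assumes x1 and x2 have identical spatial size and that label 0 is
    background; classes absent from both images are skipped.
    """
    ious = []
    for c in range(1, num_classes):
        a, b = (x1 == c), (x2 == c)
        union = np.logical_or(a, b).sum()
        if union == 0:  # class not present in either floor plan
            continue
        inter = np.logical_and(a, b).sum()
        ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0
```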
Floor plans have been successfully modelled as graphs [9, 13], which allows for graph matching algorithms to compute a distance (similarity) between a pair of graphs. For example, the _Graph Edit Distance_ (GED) computes the minimum cost of converting a source graph into another (isomorphic to it) target graph [17]. For example, [9] uses GED to check the compatibility between generated layouts. Although GED effectively captures topological similarity and with that addresses some of IoU's limitations, we show that GED has inherent limitations when used for measuring structural similarity.
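For illustration, the sketch below computes an exact (and hence worst-case exponential-time) graph edit distance between two small attributed access graphs with NetworkX; the unit-cost model and the attribute names `room_type` and `connectivity` are assumptions made here, not choices fixed by the cited works.

```python
import networkx as nx

def access_graph_ged(g1: nx.Graph, g2: nx.Graph) -> float:
    """Exact graph edit distance between two attributed access graphs.

    Nodes match when their room types agree and edges match when their
    connectivity types agree; every edit operation has unit cost.
    """
    node_match = lambda a, b: a.get("room_type") == b.get("room_type")
    edge_match = lambda a, b: a.get("connectivity") == b.get("connectivity")
    return nx.graph_edit_distance(g1, g2,
                                  node_match=node_match,
                                  edge_match=edge_match)
```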
Taken independently, image- and graph-based similarity metrics fail to holistically capture the structural similarity between floor plans. Therefore, instead of treating images and graphs in isolation, we propose an evaluation metric that ascertains closeness based on the floor plans' corresponding image _and_ graph, called **SSIG**. In addition, an efficient algorithm based on **SSIG** is used to rank RPLAN [26], a large-scale floor plan database, on structural similarity. Our contributions include:
* A study on the distributions and correlations between IoU and GED on a large-scale floor plan database, with one main finding: IoU and GED independently fail to robustly capture structural similarity.
* A proposed simple yet powerful measure for the structural similarity of floor plans.
* An effective ranking technique to sort a large floor plan database, hence developing a starting point for proper evaluation and training of floor plan retrieval models.
## 2 Related works
Floor plan representations. Floor plans are digitally represented in various ways that each emphasize different components of interest _e.g_. as images allowing to model fine-grained details such as materialization and furniture [21, 5], geometries that explicitly model the shapes, proportions and locations of the shapes of the subdivisions [13], and graphs that model the relations between the subdivisions [9, 13, 18, 12].
Floor plan analysis. Traditional approaches on floor plan analysis - or similar data such as documents [28] or UIs [2] - involve primitive heuristics to approximate a floor plan distance and are, hence, instance-specific [11]. More recently, two-step approaches became the standard to approximate similarity of floor plans [27, 16, 20, 23, 21, 19, 25, 12], in which the first step involves extracting relevant features from an image _e.g_. a graph representation of the room connectivity [12] and the second step computes a distance based on the extracted features _e.g_. a subgraph matching algorithm for graph similarity [16]. Besides the fact that the two-step approaches are prone to error propagation, the two-step approaches assume that the extracted features, such as the access graph, completely and accurately capture all aspects of floor plans. In our work, we challenge this assumption, particularly concerning semantic image and access graph representations of floor plans.
Structural similarity by Graph Neural Networks. Recently, Graph Neural Networks (GNNs) have been proposed to _learn_ floor plan similarity [13, 8, 4], learning floor plan vector representations in an end-to-end fashion. Most successfully, [4, 13] leverage Graph Matching Networks (GMNs) [6] in combination with weakly-supervised labels based on the IoU. Due to the cross-graph information sharing in GMNs, [13] found that GMNs compensate for the lack of structural awareness in IoU and naturally learn to embed the structural commonalities in the vector representations. A critical shortcoming of GMNs is that vector representations cannot be computed in isolation, therefore limiting the practical usability. We believe that the main reason cross-graph information sharing is needed is that essentially the wrong objective is injected - that is, to mimic a distance metric inspired by IoU. We show that our metric, which is a combination of visual- and graph-inspired similarity scores, can be leveraged to rank a floor plan database for which the retrievals share characteristics similar to those of the works in [13].
## 3 Method
In this section, we develop a simple yet effective strategy to numerically approximate structural similarity between floor plans. Formally, we seek a _similarity function_\(s\left(\cdot,\cdot\right)\) between two floor plans \(p_{1}\) and \(p_{2}\):
\[s\left(p_{1},p_{2}\right):\mathcal{P}\times\mathcal{P}\rightarrow\mathbb{R}^{ +}, \tag{1}\]
in which \(\mathcal{P}\) is the (mathematical) space of a floor plan. \(\mathcal{P}\) can be multi-modal _e.g_. a joint space of images and graphs: \(\mathcal{P}=\mathbb{R}^{H\times W\times 3}\times\mathcal{G}\). The goal is to make \(s\) closely mimic
the structural similarity between floor plans: the larger the structural similarity the larger \(s\) and vice versa.
The section is split into three main parts. The first part explores the modeling and attribution of floor plans (Subsec. 3.1). The second part investigates known and frequently used image- and graph-based similarity metrics (Subsec. 3.2, 3.3). The third part describes the new similarity metric (Subsec. 3.4) and provides an algorithm that can efficiently rank a floor plan dataset (Subsec. 3.5).
### Floor plan representations
In this work, we consider three well-known floor plan representations: 1) the pictorial image, 2) the semantic image, and 3) the access graph.
Pictorial image. Conventionally, floor plans are modeled as images. All floor plan images that are either grayscale or RGB and do not contain any categorical information at pixel level about the semantics of the subdivisions are referred to as _pictorial images_, \(I\in\mathcal{I}=\mathbb{R}^{H\times W\left(\times 3\right)}\) (Fig. 1, Left).
Semantic image. Floor plans are _structured arrangements_ consisting of subdivisions - areas such as a living room or bathroom, openings such as doors or windows, and separators such as walls or railings. All floor plans that have categorical labels for subdivision areas and/or semantic information encoded at pixel level are referred to as _semantic images_ in our definition, \(X\in\mathcal{X}=\left\{0,1,\ldots,N_{c}\right\}^{H\times W}\), in which \(N_{c}\) is the number of subdivision categories (Fig. 1, Center). In the semantic image representation, pixel values correspond to subdivision categories, _e.g_. \(1\leftarrow\) "living room", \(2\leftarrow\) "bathroom", \(16\leftarrow\) "internal wall", etc.
Access graph. Floor plans often have clearly identifiable _relations_ between subdivisions, _e.g_. access information between two rooms by means of a door. Even though semantic images efficiently capture the shapes and categories of the subdivisions, semantic image maps do not explicitly define the relations between the subdivisions.
Instead, floor plans can be attributed an _access graph_, directly laying bare the topological structure of the subdivisions (Fig. 1, Right). Formally, a graph is defined as a 4-tuple \(g=(N,E,\mu,\nu)\), in which \(N\) is a finite set of nodes, \(E\) is the set of edges, and \(\mu\) and \(\nu\) are the node and edge labelling functions respectively [15]. The nodes in an access graph correspond to the areas, and the edges to connections between the areas.
Depending on the use-case and feature accessibility, the nodes and edges can have (one or several) attributes. Usually, room-type information is encoded on the nodes, and edges are only present if a door is in between two rooms, hence edges do not have an attribute [9, 18, 24]. Even though we agree that access connectivity is the most important relation between rooms, an important relation is disregarded: room adjacency. Room adjacency is an influential factor, _e.g_. for decisions around privacy, structural integrity, and function. Access graphs, in the remainder of this work, therefore have a connectivity-type attribute, which is either "door" or "adjacent".
It is noteworthy that several approaches, as proposed by [8], _compute_ the edge features. Specifically, edge features between two nodes are computed from pairwise geometric features, such as the ratio between areas or the relative position.
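As a concrete illustration of the representation just described, the following is a minimal sketch (not the implementation used in this work) of an access graph built with `networkx`, with room-type node labels and a connectivity-type edge attribute; the room names and attribute keys are hypothetical.

```python
import networkx as nx

# Minimal sketch of an access graph with room-type node labels and a
# connectivity-type edge attribute ("door" or "adjacent"); names are illustrative.
g = nx.Graph()
g.add_node("living_room", room_type="living room")
g.add_node("bedroom", room_type="bedroom")
g.add_node("bathroom", room_type="bathroom")

g.add_edge("living_room", "bedroom", connectivity="door")    # reachable via a door
g.add_edge("living_room", "bathroom", connectivity="door")
g.add_edge("bedroom", "bathroom", connectivity="adjacent")   # rooms only share a wall

print(nx.get_edge_attributes(g, "connectivity"))
```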
### Intersection-over-Union
The _Intersection-over-Union_ (IoU) is not considered a proper distance metric, but it is a well-adopted evaluation metric for tasks where measuring the overlap between regions or sets is important, such as in object detection or segmentation tasks. In turn, IoU is often used as a measure for estimating structural similarity in various applications related to floor plans _e.g_. for finding similar or matching floor plan pairs for similarity learning [13, 4], evaluating the accuracy of segmentation algorithms [7], or to check the diversity of outputs in floor plan generation [9]. The IoU is commonly defined as the fraction between the amount of overlap and the union of two binary images \(X_{1}\) and \(X_{2}\)\(\in\left\{0,1\right\}^{H\times W}\):
\[\text{IoU}\left(X_{1},X_{2}\right)=\frac{\left|X_{1}\cap X_{2}\right|}{\left|X_{1}\cup X_{2}\right|}. \tag{2}\]
When more classes are present in an image _e.g_. in the case of semantic images (\(X_{1},X_{2}\in\mathcal{X}\)), IoU is computed per class and the results are aggregated to find the final score. For example, the mean IoU (mIoU) computes the average IoU, with respect to the ground truth, across the different classes, therefore treating every class similarly regardless of its proportion to the image size.
\[\text{mIoU}\left(X_{1},X_{2}\right)=\frac{1}{N_{c}}\sum_{c=1}^{N_{c}}\text{ IoU}\left(X_{1}==c,X_{2}==c\right), \tag{3}\]
where \(N_{c}\) is the number of classes.
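The following is a minimal sketch of Eqs. 2-3 for semantic images stored as integer label maps. The exclusion of the background class 0 and the skipping of classes absent from both images are assumptions of this sketch, not prescribed by the equations.

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU between two binary masks (Eq. 2)."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

def miou(x1: np.ndarray, x2: np.ndarray, num_classes: int) -> float:
    """Mean IoU over the subdivision categories of two semantic images (Eq. 3).
    Class 0 (background) is excluded and classes absent from both images are
    skipped; both choices are assumptions of this sketch."""
    scores = [iou(x1 == c, x2 == c) for c in range(1, num_classes + 1)
              if np.logical_or(x1 == c, x2 == c).any()]
    return float(np.mean(scores))

# Toy 4x4 semantic images with two subdivision categories (values illustrative).
x1 = np.array([[1, 1, 2, 2]] * 4)
x2 = np.array([[1, 1, 1, 2]] * 4)
print(miou(x1, x2, num_classes=2))  # (2/3 + 1/2) / 2 ≈ 0.58
```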
Figure 1: **Floor plan representations**. Left: Pictorial image. Center: Semantic image. Right: Access graph.
Even though IoU is a powerful metric that in part is suitable for approximating structural similarity between floor plans, IoU frequently fails on both extremes: 1) false-negative: low IoU - high structural similarity, and 2) false-positive: high IoU - low structural similarity. 1) IoU is over-sensitive to irrelevant geometric perturbations in the image _e.g_. translation, rotations, and scale. For example, when identical floor plans are centered at different locations in an image, the IoU will be low; falsely indicating that the floor plans are _not_ structurally similar. 2) IoU frequently fails to correctly identify the connectivity in floor plans _e.g_. small "air gaps" between rooms are hardly identified by IoU yet are indicative of structural characteristics (see example in Fig. 2).
### Graph Edit Distance
The _Graph Edit Distance_ (GED) [17] is another measure to judge similarity between floor plans, used _e.g_. for compatibility of generated layouts [10] or for floor plan retrieval [12]. GED is a metric that quantifies the similarity (dissimilarity) between two graphs by computing the minimum cost of transforming one graph \(g_{1}\) into (a graph isomorphic to) \(g_{2}\) in terms of a sequence of edit operations \(\left(e_{1},e_{2},\ldots,e_{k}\right)\), referred to as the _edit path_. Edit operations typically include node and edge deletions, insertions, and substitutions, which can have different costs. Given graph edit operations \(e_{j}\) and associated costs \(c_{j}\left(e_{j}\right)\), the GED \(\left(g_{1},g_{2}\right)\) is formally defined as,
\[\text{GED}\left(g_{1},g_{2}\right)=\min_{\left(e_{1},e_{2},\ldots,e_{k}\right) \in\pi\left(g_{1},g_{2}\right)}\sum_{j=1}^{k}c_{j}\left(e_{j}\right), \tag{4}\]
in which \(\pi\left(g_{1},g_{2}\right)\) denotes the set of edit paths that transforms \(g_{1}\) into (a graph isomorphic to) \(g_{2}\). In our work, all edit operations have the same cost, hence Eq. 4 becomes,
\[\text{GED}\left(g_{1},g_{2}\right)=\min\left\{k\mid\left(e_{1},e_{2},\ldots,e_{k}\right)\in\pi\left(g_{1},g_{2}\right)\right\}, \tag{5}\]
_i.e_., GED in this case is equivalent to the smallest number of edit operations. To normalize the GED between 0 and 1, we define the _Normalized Graph Edit Distance_ (nGED) by dividing GED by the product of the orders1 - the number of nodes - of the graphs (\(\left|N_{1}\right|\) and \(\left|N_{2}\right|\)):
Footnote 1: Note that Eq. 6 does not strictly normalize GED. Even so, for floor plans, _e.g_. in [26], nGED will always be below 1. Furthermore, the product of the orders, instead of for example the sum, nicely spreads the nGED distributions.
\[\text{nGED}\left(g_{1},g_{2}\right)=\frac{\text{GED}\left(g_{1},g_{2}\right)} {\left|N_{1}\right|\cdot\left|N_{2}\right|}. \tag{6}\]
nGED can be used as a measure of closeness between the topology of two floor plans, in which a small nGED suggests that two floor plans share topological characteristics, which could be, for example, indicative of commonalities in flow or function between two buildings.
nGED has several limitations. First of all, the number of different floor plans possible for a particular access graph is huge, and many of the corresponding floor plan pairs are not structurally similar (see Fig. 3). Second, it is unclear which edit costs align with structural similarity: Is the removal of a node or an edge more costly? Does the node label matter? etc. Third, nGED is compute-heavy, which limits its use in real-time systems, _e.g_. floor plan retrieval in search engines. We elaborate more on the shortcomings in Subsec. 4.2.
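For illustration, a sketch of Eqs. 5 and 6 using the exact (and, as noted above, slow) unit-cost graph edit distance available in `networkx`; matching nodes on a `room_type` attribute and edges on a `connectivity` attribute is an assumption about how the access graphs are attributed.

```python
import networkx as nx

def nged(g1: nx.Graph, g2: nx.Graph) -> float:
    """Normalized GED (Eqs. 5-6): unit-cost edit distance divided by the product
    of the graph orders. Attribute-based matching is an assumption of this sketch."""
    ged = nx.graph_edit_distance(
        g1, g2,
        node_match=lambda a, b: a.get("room_type") == b.get("room_type"),
        edge_match=lambda a, b: a.get("connectivity") == b.get("connectivity"),
    )
    return ged / (g1.number_of_nodes() * g2.number_of_nodes())

# Two toy access graphs that differ by a single "adjacent" edge.
g1 = nx.Graph([(0, 1, {"connectivity": "door"}), (1, 2, {"connectivity": "door"})])
g2 = nx.Graph([(0, 1, {"connectivity": "door"}), (1, 2, {"connectivity": "door"}),
               (0, 2, {"connectivity": "adjacent"})])
print(nged(g1, g2))  # one edge insertion, orders 3 and 3 -> 1/9
```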
In Subsec. 4.3 we investigate the trends and correlations between nGED and mIoU on RPLAN [26] empirically.
### A Visually-guided Graph Edit Distance
Independently, IoU and GED fail to holistically capture structural similarity between floor plans. Therefore, instead
Figure 2: **Visual- and graph-based similarity metrics for floor plans**. A) IoU can be directly used as a measure for similarity between semantic image representations. mIoU between the semantic images is computed through Eq. 3 (floor plans have 3 semantic labels); in this case being 0.90 which is a relatively high number. B) nGED is computed through Eqs. 5 and 6: GED equals the shortest edit path which is 2; the orders of the graphs are both 3, hence nGED \(=2/9\). When \(\gamma=0.4\) (a typical value), \(1-\text{nGED}\left(g_{1},g_{2}\right)^{\gamma}\approx 0.45\). The visual-based similarity (\(0.90\)) and graph-based (\(0.45\)) similarity scores are significantly different which indicates that in some cases IoU and GED negatively correlate in ”seeing” similarity.
of treating images and graphs separately, we propose a metric that ascertains closeness based on the semantic image _and_ access graph. We define the similarity between a pair of floor plans \((p_{1},p_{2})\), \(p_{i}=(X_{i},g_{i})\in\mathcal{X}\times\mathcal{G}\), as
\[\text{SSIG}\left(p_{1},p_{2}\right)=\frac{\text{mIoU}\left(X_{1},X_{2}\right)+ \left(1-\text{nGED}\left(g_{1},g_{2}\right)^{\gamma}\right)}{2}. \tag{7}\]
Coined as **SSIG** (Structural Similarity by **IoU** and **G**ED), Eq. 7 essentially computes a weighted average between an IoU-based and an nGED-based similarity metric. \(\gamma\in\mathbb{R}^{+}\) is a weight that allows tuning the relevance of nGED w.r.t. mIoU: the higher \(\gamma\), the _less_ influence nGED has, and vice versa.2 For example, for RPLAN [26], the mIoU and nGED distributions are balanced when \(\gamma=0.4\). Specifically, balanced distributions here means that the IoU and nGED probability density functions have as much overlap as possible.
Footnote 2: Note that nGED \(\in(0,1)\), hence increasing the power term decreases nGED.
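Eq. 7 can be transcribed directly; a minimal sketch, assuming the mIoU and nGED values of a pair have already been computed (e.g. with the helpers sketched above), and using \(\gamma=0.4\) as reported for RPLAN:

```python
def ssig(miou_value: float, nged_value: float, gamma: float = 0.4) -> float:
    """SSIG (Eq. 7): average of an IoU-based and an nGED-based similarity score."""
    return (miou_value + (1.0 - nged_value ** gamma)) / 2.0

# Combining the two scores reported in Fig. 2: mIoU = 0.90, nGED = 2/9.
print(ssig(0.90, 2 / 9))  # ≈ 0.68
```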
### Efficient algorithm for ranking
Even for small graphs, GED is a compute-heavy measure, usually taking several seconds per floor plan pair. When **SSIG** is used for evaluation, pair mining, or direct floor plan retrieval on large datasets, this time constraint hugely limits practical usage. We propose an algorithm that drastically speeds up retrieval and finds structurally similar pairs relatively fast.
First, mIoU is computed for every combination of floor plan pairs in the dataset. Second, only the \(n\) (set between 50 and 100) best scoring pairs on mIoU for each floor plan identity are kept. Third, the **SSIG** is computed for each remaining pair. Fourth, the corresponding lists for each floor plan identity are ranked on **SSIG** in descending order.
The algorithm assumes that the best matches (highest **SSIG** scores) are found among the \(n\)-best mIoU pairs. In the next section, we empirically find that the assumption generally holds if \(n\) is large enough. In Subsec. 4.4, we show that for ranking RPLAN, the assumption holds for relatively small \(n\). For a dataset of size \(N\) (usually between 10k and 100k), the algorithm is orders of magnitude faster than brute-forcing through it.
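As a concrete sketch (not the authors' implementation) of the four steps just described, assuming the `miou`, `nged`, and `ssig` helpers sketched earlier and simple in-memory lists of semantic images and access graphs:

```python
import itertools

def rank_database(semantic_images, graphs, num_classes, n=50, gamma=0.4):
    """Sketch of the four-step ranking procedure: (1) mIoU for every pair,
    (2) keep the n best mIoU partners per floor plan, (3) SSIG only for the
    surviving pairs, (4) sort each candidate list by SSIG in descending order.
    Relies on the miou(), nged() and ssig() helpers sketched earlier."""
    num = len(semantic_images)
    candidates = {i: [] for i in range(num)}

    # Steps 1 and 2: coarse filtering with mIoU only.
    for i, j in itertools.combinations(range(num), 2):
        score = miou(semantic_images[i], semantic_images[j], num_classes)
        candidates[i].append((score, j))
        candidates[j].append((score, i))
    for i in candidates:
        candidates[i] = sorted(candidates[i], reverse=True)[:n]

    # Steps 3 and 4: fine ranking with SSIG on the surviving pairs only.
    ranking = {}
    for i, partners in candidates.items():
        scored = [(ssig(m, nged(graphs[i], graphs[j]), gamma), j) for m, j in partners]
        ranking[i] = sorted(scored, reverse=True)
    return ranking
```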
An assessment of the ranking algorithm and relation to floor plan retrieval systems are provided in the following section (Sec. 4).
## 4 Results and Evaluation
### Data
In this paper, RPLAN [26] is used for analysis and evaluation. RPLAN is a large-scale dataset containing floor plan images with semantically segmented areas, consisting of \(>\)80k single-unit apartments across Asia. We further cleaned the dataset: several apartments contained rooms without any doors, _i.e_. rooms that are disconnected in terms of topology. Moreover, apartments that are not fully connected and/or for which our graph extraction algorithm cannot reliably compute the corresponding access graphs are removed. The cleaned dataset contains \(\sim\)56k apartments. Furthermore, we found that RPLAN contains many (near) duplicates. For evaluation of the ranking algorithm (Subsec. 4.4), we remove the (near) duplicates to reduce clutter in the results. Duplicates are removed by a threshold on mIoU, \(\tau_{\text{IoU}}=0.87\). Duplicates are _not_ removed when assessing the distributions and correlations of mIoU and nGED.
### Distributions
To understand the relations between the pairwise similarity measures (mIoU and nGED) and how they relate to structural similarity, we start by investigating the probability density functions of the pairwise similarity measures (\(p_{\text{IoU}}\) and \(p_{\text{GED}}\), respectively). \(p_{\text{IoU}}\) and \(p_{\text{GED}}\) are approximated by computing IoU and GED for over a million randomly sampled floor plan pairs. For brevity, we use "IoU" and "GED" instead of "mIoU" and "nGED".
IoU. IoU is computed according to Eq. 3 and across all subdivision categories except for the "background". The histogram of the distribution is provided in Fig. 4 in the middle left plot in blue. The mean of the distribution is
Figure 3: **Graph to image examples.** Each row represents an access graph (column 1) and several randomly selected corresponding floor plans semantic images (columns 2 - 5).
around 0.25 which means that on average approximately 25% of the pixels are overlapping. We also plot the IoU distribution for the 50 best-scoring examples on IoU per floor plan identity, shown in orange. As expected, compared to the full IoU distribution, the mean is significantly shifted. The spike around 1.0 can be attributed to the many (near) duplicates in the dataset.
To give an idea of the diversity of floor plan samples, Fig. 4 (top row) highlights floor plans with different 'originality' scores. The originality is computed from the average IoU score of a given sample with all remaining samples: the further right (\(\approx 0.1\)), the more original, _i.e_. rarer, and the further left (\(\approx 0.4\)), the less original, _i.e_. more frequent. Generally speaking, originality correlates positively mostly with the complexity of the shapes of both the rooms and boundaries.
GED. nGED is computed according to Eqs. 5 and 6; hence, the costs for node and edge deletions, insertions, and substitutions are kept the same. The histogram of the full distribution is illustrated in Fig. 4 in the middle right plot in blue. We also plot the GED distribution for the 50 best-scoring examples _on IoU_ per floor plan identity, shown in orange. Surprisingly, the full and top-50 distributions are nearly identical, revealing a weak (if at all) correlation between IoU and GED. The weak correlation is further investigated in the next section.
It is worth noting that there are only 50 unique GED values in our population of random pairs. The limited range of GED values can be attributed to:
Firstly, the number of floor plan pairs that share the same underlying _base graph_ - a set of graphs present in the dataset for which each pairwise combination is isomorphic (\(g_{1}\) is isomorphic to \(g_{2}\)) - is relatively large. To make our point, we sort all base graphs based on the number of corresponding floor plans and show the huge imbalance between base graph occurrences (see Fig. 5). For example, more than 25% of the data (\(\approx\)14k samples) correspond to the 100 most prominent base graphs only. Visualized in Fig. 4 (bottom row) and similar to the IoU originality scores, we score floor plans on graph originality as well. Graph originality is based on the number of corresponding floor plans for a given base graph: the more original, the fewer floor plans per base graph. While the main patterns of graph originality are to be investigated more closely, on average the numbers of nodes and edges slightly increase with originality.
Secondly, as revealed empirically by the graph originality distribution, the variety of topologies - the graphs' structures considered on their own - is small because the graphs are limited in size (number of connections) and order (number of subdivisions). Furthermore, the topological _types_ are limited as well. Typically, access graphs are confined to be flower-like and thus centered around a single node. The central node is usually the living room and/or hallway.
Indeed, the majority of floor plans stem from a limited
Figure 4: **IoU and GED distributions and originality.** The two plots in the center illustrate the distributions of IoU (left) and GED (right): blue depicts the distribution over all samples and orange only considers the top-50 _on IoU_. An indication of semantic image originality (based on the average IoU) is given on the top and one for the graph originality (based on the base graph occurrence) is given on the bottom: originality increases going from left to right.
Figure 5: **Distribution of base graphs.** The plot on the top provides the distribution of base graphs (in descending order of occurrence) for base graphs that at least occur 9 times. On the bottom, the base graphs are plotted for several randomly picked floor plans. For some of the base graphs, we indicate the position in the distribution through the colored boxes.
collection of base graphs, and only a few node and edge attributes are considered, leading to the ineffectiveness of measuring structural similarity solely based on graph similarity.
### Correlations
The correlation between IoU and GED is further investigated. The 2D density map of IoU and GED is depicted in Fig. 6 and plotted as a heatmap. We define a positive correlation when increments in visual similarity, _i.e._ _higher_ IoU, result in increments in the graph-based similarity, _i.e._ _lower_ GED. Effectively, we measure the correlation between IoU and _negative_ GED. The heatmap shows, for increasing IoU ranges, that the GED distributions shift towards lower values, hence the correlation between IoU and GED is positive. To be precise, the Pearson correlation coefficient between mIoU and -nGED is 0.503, hence GED and IoU are moderately correlated. However, the overlapping regions between GED distributions for different IoU ranges are often large. Specifically, IoU and GED oppose each other 38% of the time - opposition here means that a higher IoU results in a higher GED or that a lower IoU results in a lower GED. Several examples in which IoU and GED oppose each other are given in Fig. 7.
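Assuming the per-pair mIoU and nGED scores have been precomputed, the reported coefficient can be obtained with a one-line correlation; the arrays below are random placeholders, not the actual RPLAN scores.

```python
import numpy as np

# Placeholders for the precomputed per-pair scores (one entry per sampled pair).
miou_scores = np.random.rand(1_000_000)
nged_scores = np.random.rand(1_000_000)

# Pearson correlation between mIoU and -nGED, as used in the text.
print(np.corrcoef(miou_scores, -nged_scores)[0, 1])
```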
From the triplets shown in Fig. 7, many characteristics of IoU and GED rise to the surface. For an anchor, we define pos@IoU as the positive (IoU high) and neg@IoU as the negative query (IoU low). The same holds for pos@GED (GED low) and neg@GED (GED high). Most prominent is the sensitivity of IoU w.r.t. the overall shape of the building: in all cases, the overall shape between the anchor and the pos@IoU are nearly identical and slightly different for the pos@GED. On the other hand, the number of rooms is often different between the anchor and pos@IoU while (nearly) always equivalent between the anchor and pos@GED: placing or removing rooms is detrimental for GED but in many cases does not hurt IoU so much. Another difference between pos@IoU and pos@GED is the fact that changes in the room shapes - especially to the central space - are _not_ penalized by GED yet detrimental for IoU. The examples reveal the difficulty of measuring the structural similarity of floor plans. Apparently, both measures independently cannot fully grasp structural similarity.
### Floor plan retrieval
We evaluate our ranking algorithm provided in Subsec. 3.5 and qualitatively compare the retrievals to independent IoU- and GED-based rankings, and to retrievals based on a deeply learned metric [13].
We start by finding a proper value for \(n\) - the number of samples (when ranked on IoU) considered for further **SSIG** assessment. In Fig. 9, the distributions for IoU, GED, and **SSIG** are provided for various values of \(n\). In our observation, for relatively large values of **SSIG** (\(>\)0.7), each linear increase in \(n\) adds fewer samples. We found for RPLAN that values of \(n\geq 50\) seldom change the top-10 on **SSIG**. Compared to measuring **SSIG** on the whole dataset, **SSIG** only needs to be computed 50 times per floor plan identity. We believe that IoU, at least in the case of RPLAN, is a proper first step to coarsely find correspondences in structural similarity, while GED helps to measure structural similarity at a finer level. Hence, we refer to our ranking method as a visually-guided graph edit distance.
Our algorithm is compute-efficient w.r.t. GMNs. Depending on the size and originality of the floor plan's corresponding access graphs, our algorithm takes between 1
Figure 6: **2D density map of IoU and GED as heatmap.** A moderate correlation (0.503) between IoU and -GED is found. The distribution is, however, widely spread indicating that IoU and GED often fail to predict one another with confidence.
Figure 7: **Negative examples between IoU and GED on RPLAN.** IoU is sensitive to the scales and shape of the room while GED captures the connections between the rooms.
and 5 s on a single CPU per unseen sample to fully rank it. For LayoutGMN, the same computation takes approximately 1500 s. Hence, our algorithm is compute-efficient w.r.t. GMNs.
Randomly picked retrievals (top 10) of our method are shown in Fig. 8 and compared against IoU- and GED-only ranks. In red and blue are the IoU and GED ranks, respectively. Not surprisingly, the **SSIG**, IoU, and GED ranks are quite different, which can be attributed to the fact that the correlation between IoU and GED is only weakly positive (see Subsec. 4.3). Compared to IoU- and GED-only retrievals, it can be seen that **SSIG** indeed compensates for 'failures' in IoU and GED. For example, in the last row of Fig. 8, the first 4 retrieval results, even though having a low IoU value, are compensated by GED, while the 5th retrieval is compensated by IoU, even though having a low GED-based similarity. Similar compensation for failures can be found in the retrievals of deeply learned metrics, see [13, 4, 8].
## 5 Conclusion and future work
In this work, we showed that image- as well as graph-based similarity metrics alone are likely not enough to robustly measure structural similarity between floor plans. We overcome the shortcomings by combining IoU and GED into one metric, called **SSIG**. We qualitatively showed that a floor plan database ranked on **SSIG** shares similar characteristics with the results of state-of-the-art methods that use GMNs to compute the ranking.
This work is however limited to investigating IoU and GED and does not consider other well-known image- and graph-based similarity metrics. Moreover, we did not conduct any user study to rigorously assess our claims further. In future studies we hope to address both limitations.
The formulation of similarity as defined in Eq. 1 is set up in a task-agnostic manner - the formulation addresses floor plan similarity as a _single_ overall measure. In many cases, however, a more realistic way to set up similarity might be dynamic (_e.g_., based on categories), hence allowing for different similarity scores across multiple use cases, whether used in floor plan tailored search engines, evaluation of floor plan generation, etc. Our aim is to work on such formulations in the future.
We believe that **SSIG** will pave the way towards improved data-driven research on floor plans, such as by providing a more natural objective to train deep metric learning frameworks that do not necessarily require information sharing across branches.
|
2309.04431 | External Codes for Multiple Unicast Networks via Interference Alignment | We introduce a formal framework to study the multiple unicast problem for a
coded network in which the network code is linear over a finite field and
fixed. We show that the problem corresponds to an interference alignment
problem over a finite field. In this context, we establish an outer bound for
the achievable rate region and provide examples of networks where the bound is
sharp. We finally give evidence of the crucial role played by the field
characteristic in the problem. | F. R. Kschischang, F. Manganiello, A. Ravagnani, K. Savary | 2023-09-08T16:50:21Z | http://arxiv.org/abs/2309.04431v1 | # External Codes for Multiple Unicast Networks via Interference Alignment
###### Abstract
We introduce a formal framework to study the multiple unicast problem for a coded network in which the network code is linear over a finite field and fixed. We show that the problem corresponds to an interference alignment problem over a finite field. In this context, we establish an outer bound for the achievable rate region and provide examples of networks where the bound is sharp. We finally give evidence of the crucial role played by the field characteristic in the problem.
## 1 Introduction
The typical scenario considered in the context of _network coding_ consists of one or multiple sources of information attempting to communicate to multiple terminals through a network of intermediate nodes. Various communication paradigms have been studied in this setting, including noisy, adversarial, error-free multicast, and multiple-unicast networks; see [1, 2, 3, 4, 5, 6, 7, 8] among many others.
To the best of our knowledge, in most network coding references users are allowed to freely design both the _network code_ (i.e., how the intermediate vertices process information packets) and the _external codes_ of the sources. In particular, designing the network code is part of the communication problem.
This paper considers the multiple unicast problem when the network code is linear and _fixed_, and only the external network codes can be freely designed by the source-receiver pairs. In this context, source-receiver pairs compete for network resources and interfere with each
other. We argue that, in this regime, the multiple unicast problem corresponds to an _interference alignment problem_ over finite fields. While interference alignment over the complex field has been extensively studied in more classical information theory settings, methods do not extend to finite fields in any obvious way. We refer to [9] and [10] for the "classical" interference alignment problem.
The concept of interference alignment has also been considered over finite fields, in connection with network coding. For example, [6] and [7] propose interference alignment solutions, in combination with network coding, to construct schemes for coded, multiple unicast networks. Related contributions are [9, 11, 12, 13], all of which study a problem that is different from the one we address in this paper.
This work makes three main contributions: (1) it introduces a framework for investigating multi-shot interference alignment problems over finite fields; (2) it establishes an outer bound for the achievable rate regions in the context outlined above and provides examples where the bound is sharp; (3) it shows how the field characteristic plays a crucial role in the solution to this problem. Note that (3) already played an important role in the networks introduced in [4], but is in sharp contrast with what is typically observed in network coding results for unicast networks, where the field _size_, rather than the characteristic, is the main player.
The rest of the paper is organized as follows. In Section 2 we formally introduce the communication model and formulate the problem on which we focus. In Section 3 we formalize the concepts of achievable rate regions, considering the number of channel uses. Section 4 establishes an outer bound for said achievable rate regions, and Section 5 describes the role played by the field characteristic. The paper contains several examples that illustrate concepts and results.
## 2 Communication Model and Problem Formulation
Throughout this paper, \(q\) is a prime power and \(\mathbb{F}_{q}\) denotes the finite field with \(q\) elements. We denote the set of positive integers by \(\mathbb{N}=\{1,2,\ldots\}\). All vectors in this paper are row vectors. For \(\alpha_{1},\ldots,\alpha_{\ell}\in\mathbb{F}_{q}^{n}\), we denote by \(\langle\alpha_{1},\ldots,\alpha_{\ell}\rangle\) the \(\mathbb{F}_{q}\)-span of \(\alpha_{1},\ldots,\alpha_{\ell}\). When \(\mathbb{F}_{p}\subseteq\mathbb{F}_{q}\), we denote by \(\langle\alpha_{1},\ldots,\alpha_{\ell}\rangle_{p}\) the \(\mathbb{F}_{p}\)-span of \(\alpha_{1},\ldots,\alpha_{\ell}\).
We start by informally describing the problem studied in this paper, deferring rigorous definitions for later.
**Problem Formulation.** We consider \(n\) uncoordinated sources and terminals, denoted by \(S_{1},...,S_{n}\) and \(T_{1},...,T_{n}\), respectively. Terminal \(T_{i}\) is interested in decoding only the symbols emitted by source \(S_{i}\) (_multiple unicast_ problem). The sources are connected to the terminals via a network of intermediate nodes, \(\mathcal{N}\) (a directed, acyclic multigraph). The alphabet of the network is \(\mathbb{F}_{q}\) and each edge has a capacity of one symbol. Alphabet symbols combine linearly at the intermediate nodes of the network, i.e., _linear network coding_ is used; see [2, 3].
We are interested in describing the region of achievable rates for a network of the type we just described, assuming that the operations performed by the intermediate nodes are linear and _fixed_. In other words, sources and terminals cannot change how the network's nodes linearly combine symbols, but are free to agree on a codebook. Under these assumptions, source-receiver pairs compete for the network's resources.
The following example illustrates the problem at hand.
**Example 1**.: Figure 1 depicts a network with two sources and terminals. Terminal \(T_{i}\) is interested only in decoding messages from source \(S_{i}\). The operations performed by the gray vertices are fixed. We will return to this network in Example 4.
In this paper, we propose the following formal definition of a communication network, which will facilitate the analysis of the problem we have described above.
**Definition 2**.: A **multiple unicast network** is a 4-tuple
\[\mathcal{N}=(\mathcal{V},\mathcal{E},(S_{1},\dots,S_{n}),(T_{1},\dots,T_{n})),\]
where:
1. \((\mathcal{V},\mathcal{E})\) is a finite, directed, acyclic multigraph, which may include parallel edges;
2. \(n\geq 1\) is an integer;
3. \(S_{1},\dots,S_{n}\in\mathcal{V}\) are distinct vertices called **sources**;
4. \(T_{1},\dots,T_{n}\in\mathcal{V}\) are distinct vertices called **terminals** or **sinks**.
We also assume that the following hold:
1. \(\{S_{1},\dots,S_{n}\}\cap\{T_{1},\dots,T_{n}\}=\emptyset\);
2. for any \(i\in\{1,\dots,n\}\), there exists a directed path in \((\mathcal{V},\mathcal{E})\) connecting \(S_{i}\) to \(T_{i}\);
3. sources do not have incoming edges and terminals do not have outgoing edges;
4. for every vertex \(V\in\mathcal{V}\), there exists a directed path from \(S_{i}\) to \(V\) and from \(V\) to \(T_{j}\), for some \(i,j\in\{1,\dots,n\}\).
For \(V\in\mathcal{V}\) we denote by \(\partial^{-}(V)\), respectively \(\partial^{+}(V)\), the indegree, respectively outdegree, of \(V\), meaning the number of edges incident to, respectively from, \(V\). Moreover, let \(\mathcal{V}^{*}=\mathcal{V}\backslash(\{S_{1},\dots,S_{n}\}\cup\{T_{1},\dots, T_{n}\})\) denote the set of nonsource and nonterminal vertices, meaning the set of intermediate network nodes.
We assume that the intermediate nodes of a network process the alphabet symbols linearly. This is made rigorous by the following concept.
**Definition 3**.: Let \(\mathcal{N}\) be as in Definition 2. A **linear network code**, or simply **network code**, for \(\mathcal{N}\) is a tuple of matrices
\[\mathcal{F}=\left(\mathcal{F}_{V}\in\mathbb{F}_{q}^{\partial^{-}(V)\times \partial^{+}(V)}\mid V\in\mathcal{V}^{*}\right).\]
Figure 1: Network for Example 1.
Given a network code, the network operates as follows. Fix an intermediate vertex \(V\in\mathcal{V}^{*}\) and let \(a=\partial^{-}(V)\), \(b=\partial^{+}(V)\) be the indegree and outdegree of \(V\), respectively. Then \(V\) collects a vector of alphabet symbols \((x_{1},\ldots,x_{a})\) over the incoming edges, and emits the entries of
\[\begin{pmatrix}y_{1}&\ldots&y_{b}\end{pmatrix}=\begin{pmatrix}x_{1}&\ldots&x_{ a}\end{pmatrix}\cdot\mathcal{F}_{V}\in\mathbb{F}_{q}^{b}\]
over the outgoing edges. This fully specifies how \(V\) processes information, provided a linear extension of the partial order of the network edges is fixed. Thus, \(x_{1}\) is the symbol that arrives on the minimum edge (with respect to this linear order) at \(V\), \(x_{2}\) is the symbol that arrives on the minimum of the remaining edges, etc., and similarly for \(y_{1},\ldots,y_{b}\). In this paper, networks are delay-free, and communication is instantaneous.
**Example 4**.: The same network admits multiple network codes. For this example, consider the network \(\mathcal{N}\) from Example 1 with the intermediate nodes labeled as in Figure 2 and let us assume that the alphabet is \(\mathbb{F}_{3}\).
The following are two network codes defined on the network of Figure 2 where the order of the entries in the tuple follows the order of the intermediate nodes, and the order of the entries in the matrices agrees with the order of the outgoing and incoming channels in the nodes respectively.
\[\mathcal{F}_{1}=\left(\begin{pmatrix}1\end{pmatrix},\begin{pmatrix}1\end{pmatrix},\begin{pmatrix}1&1\end{pmatrix},\begin{pmatrix}1&1\end{pmatrix},\begin{pmatrix} 1\\ 1\end{pmatrix},\begin{pmatrix}1\end{pmatrix},\begin{pmatrix}1\end{pmatrix}\right)\]
\[\mathcal{F}_{2}=\left(\begin{pmatrix}1\end{pmatrix},\begin{pmatrix}1\end{pmatrix},\begin{pmatrix}1&1\end{pmatrix},\begin{pmatrix}1&1\end{pmatrix},\begin{pmatrix} 1\\ 1\end{pmatrix},\begin{pmatrix}1\end{pmatrix},\begin{pmatrix}1\end{pmatrix}\right)\]
The choice of a linear network code \(\mathcal{F}\) for a communication network \(\mathcal{N}\) induces end-to-end transfer matrices, one for each source-terminal pair. We denote by \(F_{i,j}\) the transfer matrix from \(S_{i}\) to \(T_{j}\).
**Example 5**.: Using the network \(\mathcal{N}\) from Example 1, the network code \(\mathcal{F}_{1}\) induces the transfer matrices
\[F_{1,1}=F_{2,2}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix},\ F_{1,2}=\begin{pmatrix}0&0\\ 0&0\end{pmatrix},\ \text{and}\ F_{2,1}=\begin{pmatrix}1&1\\ 1&1\end{pmatrix},\]
Figure 2: Network from Example 1 with labeled vertices.
whereas the network code \(\mathcal{F}_{2}\) induces transfer matrices
\[F_{1,1}=F_{2,2}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix},\ F_{1,2}=\begin{pmatrix}0&0\\ 0&0\end{pmatrix},\text{ and }F_{2,1}=\begin{pmatrix}1&1\\ 1&2\end{pmatrix}.\]
In one channel use, assuming that \(S_{i}\) transmits \(x_{i}\in\mathbb{F}_{q}^{\partial^{+}(S_{i})}\), terminal \(T_{j}\) observes the vector
\[y_{j}=\sum_{i=1}^{n}x_{i}F_{i,j}\in\mathbb{F}_{q}^{\partial^{-}(T_{j})}.\]
Since in this paper both the communication network \(\mathcal{N}\) and the linear network code \(\mathcal{F}\) are supposed to be _fixed_, the end-to-end transfer matrices induced by them fully specify the communication channel. We, therefore, propose the following definition.
**Definition 6**.: A \(q\)**-ary linear multiple unicast channel** (in short, \(q\)**-LMUC**) is a \(4\)-tuple \(\mathcal{L}=(n,\boldsymbol{s},\boldsymbol{t},F)\), where \(n\in\mathbb{N}\) is a positive integer, \(\boldsymbol{s}=(s_{1},\ldots,s_{n}),\boldsymbol{t}=(t_{1},\ldots,t_{n})\in \mathbb{N}^{n}\), and \(F\in\mathbb{F}_{q}^{s\times t}\), where \(s=\sum_{i=1}^{n}s_{i}\) and \(t=\sum_{i=1}^{n}t_{i}\). We call \(F\) the **transfer matrix** and regard it as a block matrix
\[F=\begin{pmatrix}F_{1,1}&\cdots&F_{1,n}\\ \vdots&\ddots&\vdots\\ F_{n,1}&\cdots&F_{n,n}\end{pmatrix},\]
where block \(F_{i,j}\) has size \(s_{i}\times t_{j}\).
Here \(n\) represents the number of source-terminal pairs, \(s_{i}=\partial^{+}(S_{i})\), and \(t_{i}=\partial^{-}(T_{i})\) for \(i=1,\ldots,n\). Moreover, \(F\) collects the transfer matrices for each source-terminal pair. Note that we do not need to remember the communication network or the entire network code. The matrix \(F\) fully describes the end-to-end channel laws for a single channel use.
We can extend this definition for multiple uses of the channel as follows. Suppose that the network is used \(m\geq 1\) times. The channel input is an element \(x=(x_{1},\ldots,x_{n})\in\mathbb{F}_{q^{m}}^{s_{1}}\times\cdots\times\mathbb{ F}_{q^{m}}^{s_{n}}=\mathbb{F}_{q^{m}}^{s}\). More precisely, for all \(i\in\{1,\ldots,n\}\), \(x_{i}\in\mathbb{F}_{q^{m}}^{s_{i}}\) is the input that source \(S_{i}\) emits. Then the channel output is \(y=(y_{1},\ldots,y_{n})\in\mathbb{F}_{q^{m}}^{t_{1}}\times\cdots\times\mathbb{ F}_{q^{m}}^{t_{n}}=\mathbb{F}_{q^{m}}^{t}\), where
\[y_{i}=x_{i}F_{i,i}+\sum_{j\neq i}x_{j}F_{j,i} \tag{1}\]
is the vector that terminal \(T_{i}\) receives on its incoming edges. Note that the field extension \(\mathbb{F}_{q^{m}}\) models \(m\) uses of the channel because the network code is assumed to be \(\mathbb{F}_{q}\)-linear.
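For concreteness, a minimal one-shot (\(m=1\)) sketch of the channel law in Equation (1) over a prime field, using plain modular arithmetic; extension fields (\(m>1\)) would require genuine finite-field arithmetic and are not covered by this sketch. The numbers reuse the blocks of the 11-LMUC from Example 8 below, with 0-based indexing.

```python
import numpy as np

def terminal_outputs(x_blocks, F_blocks, q):
    """One-shot (m = 1) channel law of Eq. (1) over the prime field F_q:
    terminal T_i receives y_i = sum_j x_j F_{j,i}, all arithmetic modulo q.
    x_blocks[j] is the row vector emitted by source S_{j+1}; F_blocks[j][i] is
    the block F_{j+1,i+1} of the transfer matrix (0-based indexing here)."""
    n = len(x_blocks)
    return [sum(np.array(x_blocks[j]) @ F_blocks[j][i] for j in range(n)) % q
            for i in range(n)]

# Blocks of the 11-LMUC from Example 8 (q = 11, s = (1, 2), t = (2, 2)).
F_blocks = [[np.array([[1, 0]]), np.array([[2, 3]])],
            [np.array([[0, 4], [6, 7]]), np.array([[5, 0], [0, 0]])]]
x_blocks = [[1], [2, 3]]            # S_1 sends (1), S_2 sends (2, 3)
print(terminal_outputs(x_blocks, F_blocks, 11))  # [array([8, 7]), array([1, 3])]
```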
**Remark 7**.: Given any \(q\)-LMUC as in Definition 6, it is always possible to construct a multiple unicast network \(\mathcal{N}\) and a network code \(\mathcal{F}\) for \(\mathcal{N}\) that induces the given transfer matrix. We illustrate the procedure with an example.
**Example 8**.: The \(11\)-LMUC
\[\mathcal{L}=\left(2,(1,2),(2,2),\begin{pmatrix}1&0&2&3\\ 0&4&5&0\\ 6&7&0&0\end{pmatrix}\right)\]
induces the matrices
\[F_{1,1}=\begin{pmatrix}1&0\end{pmatrix},\quad F_{1,2}=\begin{pmatrix}2&3\end{pmatrix},\quad F_{2,1}=\begin{pmatrix}0&4\\ 6&7\end{pmatrix},\quad F_{2,2}=\begin{pmatrix}5&0\\ 0&0\end{pmatrix}.\]
A network code that induces these matrices is
\[\mathcal{F}=\left(\begin{pmatrix}1&1&1\end{pmatrix},\begin{pmatrix}1&1\end{pmatrix},\begin{pmatrix}1&1\end{pmatrix},\begin{pmatrix}1\\ 6\end{pmatrix},\begin{pmatrix}4\\ 7\end{pmatrix},\begin{pmatrix}2\\ 5\end{pmatrix},\begin{pmatrix}3\end{pmatrix}\right),\]
realized by the network depicted in Figure 3. Note that
\[\mathcal{F}=\left(\begin{pmatrix}1&2&3\end{pmatrix},\begin{pmatrix}4&5\end{pmatrix},\begin{pmatrix}6&7\end{pmatrix},\begin{pmatrix}1\\ 1\end{pmatrix},\begin{pmatrix}1\\ 1\end{pmatrix},\begin{pmatrix}1\\ 1\end{pmatrix}\right),\]
also induces the given transfer matrix.
Given a \(q\)-LMUC \(\mathcal{L}=(n,\boldsymbol{s},\boldsymbol{t},F)\) representing a \(q\)-ary linear multiple unicast channel, we are interested in determining which rates can be achieved by carefully selecting the codebooks of the sources. Recall that each terminal \(T_{i}\) is interested in decoding _only_ its corresponding source \(S_{i}\). Therefore the information emitted by the sources \(S_{j}\) with \(j\neq i\) acts as interference for terminal \(T_{i}\).
## 3 Achievable Rate Regions and Their Properties
In this section, we propose a formal framework to describe the capacity of a \(q\)-LMUC, based on the concept of _fan-out set_ and _unambiguous code_. We start with the concept of code(book).
**Definition 9**.: Let \(\mathcal{L}=(n,\boldsymbol{s},\boldsymbol{t},F)\) be a \(q\)-LMUC and let \(m\geq 1\) be an integer. An \(m\)**-external code**, or simply an \(m\)**-code**, for \(\mathcal{L}\) is a Cartesian product \(C=C_{1}\times\cdots\times C_{n}\), where \(C_{i}\subseteq\mathbb{F}_{q^{m}}^{s_{i}}\) for all \(i\in\{1,\ldots,n\}\). The elements of each \(C_{i}\) are called **codewords**.
Note that we do _not_ require that \(C\) is a linear space. Recall that the parameter \(m\) in Definition 9 represents the number of channel uses, as already mentioned when introducing Equation (1). In the sequel, for a \(q\)-LMUC \(\mathcal{L}=(n,\boldsymbol{s},\boldsymbol{t},F)\) and for \(i\in\{1,\ldots,n\}\), we denote by \(\pi_{i}:\mathbb{F}_{q^{m}}^{t}\rightarrow\mathbb{F}_{q^{m}}^{t_{i}}\) the projection on the \(i\)th block of coordinates (recall that \(\boldsymbol{t}=(t_{1},\ldots,t_{n})\) and \(t=t_{1}+\cdots+t_{n}\)).
Figure 3: Network for Example 8.
**Definition 10**.: Let \(\mathcal{L}=(n,\boldsymbol{s},\boldsymbol{t},F)\) be a \(q\)-LMUC, \(i\in\{1,\ldots,n\}\), \(C\) an \(m\)-code for \(\mathcal{L}\), and \(x\in C_{i}\). We denote by
\[\mathrm{Fan}_{i}(x,C):=\{\pi_{i}((x_{1},\ldots,x_{i-1},x,x_{i+1},\ldots,x_{n})F )\mid x_{j}\in C_{j}\text{ for all }j\neq i\}\]
the \(i\)**-th fan-out set** of \(x\) with respect to terminal \(i\) and the code \(C\). The \(i\)**-th fan-out set** of \(C\) is \(\mathrm{Fan}_{i}(C)=\cup_{x\in C_{i}}\mathrm{Fan}_{i}(x,C)\subseteq\mathbb{F} _{q^{m}}^{t_{i}}\), for all \(i\in\{1,\ldots,n\}\).
Following the notation of Definition 10, \(\mathrm{Fan}_{i}(x,C)\) is the set of words that the \(i\)th terminal can possibly receive when the \(i\)th source emits \(x\) and the other sources emit their own codewords. Fan-out sets relate to the concept of interference as follows.
**Definition 11**.: Let \(\mathcal{L}\), \(m\), and \(C\) be as in Definition 9. We define the **interference set** of \(C\) at terminal \(T_{i}\) as
\[\mathrm{IS}_{i}(C)=\mathrm{Fan}_{i}(0,C)=\left\{\sum_{j\neq i}c_{j}F_{j,i}\mid c _{j}\in C_{j}\right\}.\]
**Remark 12**.: Note that using Definition 11 we can rewrite equation (1) as
\[\mathrm{Fan}_{i}(x,C)=xF_{i,i}+\mathrm{IS}_{i}(C)=\{xF_{i,i}+y\mid y\in \mathrm{IS}_{i}(C)\}. \tag{2}\]
for any \(m\)-code \(C\), any \(i\in\{1,\ldots,n\}\), and any \(x\in C_{i}\). Therefore the \(i\)-th fan-out set of \(x\in C_{i}\) is a **translate** (or a **coset**, if each \(C_{i}\) is linear) of the interference set \(\mathrm{IS}_{i}(C)\).
Communication is considered to be successful when each codeword can be uniquely decoded. The following definition models this concept.
**Definition 13**.: Let \(\mathcal{L}\), \(m\), and \(C\) be as in Definition 9. We say that \(C\) is **unambiguous** for \(\mathcal{L}\) if for all \(i\in\{1,...,n\}\) and for all codewords \(x_{1},x_{2}\in C_{i}\) with \(x_{1}\neq x_{2}\), we have \(\mathrm{Fan}_{i}(x_{1},C)\cap\mathrm{Fan}_{i}(x_{2},C)=\emptyset\).
An unambiguous code, as in the previous definition, uniquely defines decoder maps. More precisely, the \(i\)**-th decoder** is the map \(D_{i}:\mathrm{Fan}_{i}(C)\rightarrow\mathbb{F}_{q^{m}}^{s_{i}}\) defined by \(D_{i}(v_{i})=x_{i}\) for all \(v_{i}\in\mathrm{Fan}_{i}(C)\), where \(x_{i}\in C_{i}\) is the only element with \(v_{i}\in\mathrm{Fan}_{i}(x_{i},C)\).
**Remark 14**.: Following the notation of Definition 13, if \(C\) is an unambiguous \(m\)-code for \(\mathcal{L}\) and \(C^{\prime}\) is an \(m\)-code for \(\mathcal{L}\) with \(C^{\prime}\subseteq C\), then \(C^{\prime}\) is unambiguous as well.
We are now ready to give a rigorous definition of _the achievable rate region_ of a \(q\)-LMUC \(\mathcal{L}\). For convenience, for \(m\geq 1\) we define the set
\[\log_{q^{m}}(\mathbb{N}^{n})=\{(\log_{q^{m}}(u_{1}),...,\log_{q^{m}}(u_{n})) \mid(u_{1},...,u_{n})\in\mathbb{N}^{n}\}.\]
**Definition 15**.: The \(m\)**-shot achievable rate region** of a \(q\)-LMUC \(\mathcal{L}\) as in Definition 6 is the set
\[\mathcal{R}_{m}(\mathcal{L})=\{\alpha\in\log_{q^{m}}(\mathbb{N}^{ n})\mid\exists\,C=C_{1}\times\cdots\times C_{n}\text{ unambiguous $m$-code for $\mathcal{L}$}\\ \text{with }\log_{q^{m}}(|C_{i}|)=\alpha_{i}\ \forall\ 1\leq i\leq n\} \subseteq\mathbb{R}_{\geq 0}^{n}.\]
The **achievable rate region** of \(\mathcal{L}\) is the set
\[\mathcal{R}(\mathcal{L})=\overline{\bigcup_{m\geq 1}\mathcal{R}_{m}(\mathcal{L})},\]
where the overline indicates the closure operator with respect to the Euclidean topology on \(\mathbb{R}^{n}\). The elements of \(\mathcal{R}(\mathcal{L})\) are called **achievable rates**.
The following example illustrates that different LMUCs supported on the same network might have very different achievable rate regions.
**Example 16**.: The two 3-LMUCs induced by Example 4 are
\[\mathcal{L}_{1}=\left(2,(2,2),(2,2),\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 1&1&1&0\\ 1&1&0&1\end{pmatrix}\right),\ \ \mathcal{L}_{2}=\left(2,(2,2),(2,2), \begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 1&1&1&0\\ 1&2&0&1\end{pmatrix}\right).\]
Although the two 3-LMUCs have the same underlying network, they have different 1-shot achievable rate regions. Indeed, for \(\mathcal{L}_{1}\), the code \(\langle(1,2)\rangle_{\mathbb{F}_{3}}\times\mathbb{F}_{3}^{2}\) is unambiguous, meaning that \((1,2)\in\mathcal{R}_{1}(\mathcal{L}_{1})\). On the other hand, \((1,2)\notin\mathcal{R}_{1}(\mathcal{L}_{2})\), as we now briefly explain. In \(\mathcal{L}_{2}\), the matrix \(F_{2,1}\) is invertible. Moreover, a rate of the form \((\alpha_{1},2)\) can be achieved only by a code of the form \(C=C_{1}\times\mathbb{F}_{3}^{2}\). Together with the invertibility of \(F_{2,1}\), this implies that \(\mathrm{IS}_{1}(C)=\mathbb{F}_{3}^{2}\), forcing \(C_{1}\) to be a singleton (e.g., \(C=\{0\}\times\mathbb{F}_{3}^{2}\)) in any unambiguous code with \(C_{2}=\mathbb{F}_{3}^{2}\).
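The discussion in Example 16 can be checked by brute force. The following sketch enumerates fan-out sets (Definition 10) and tests unambiguity (Definition 13) for one channel use over \(\mathbb{F}_{3}\); it relies on plain modular arithmetic and therefore only covers \(m=1\) and prime \(q\).

```python
import itertools
import numpy as np

def fan_out(x, i, code, blocks, q):
    """Fan-out set of codeword x at terminal T_{i+1} (Definition 10), m = 1, q prime.
    blocks[j][i] is the transfer block F_{j+1,i+1} (0-based indexing)."""
    others = [code[j] for j in range(len(code)) if j != i]
    fan = set()
    for choice in itertools.product(*others):
        words = list(choice[:i]) + [x] + list(choice[i:])
        y = sum(np.array(w) @ blocks[j][i] for j, w in enumerate(words)) % q
        fan.add(tuple(int(v) for v in y))
    return fan

def is_unambiguous(code, blocks, q):
    """Definition 13: distinct codewords of C_i must have disjoint fan-out sets."""
    return all(not (fan_out(x1, i, code, blocks, q) & fan_out(x2, i, code, blocks, q))
               for i, C_i in enumerate(code)
               for x1, x2 in itertools.combinations(C_i, 2))

q = 3
I2, Z2 = np.eye(2, dtype=int), np.zeros((2, 2), dtype=int)
blocks_1 = [[I2, Z2], [np.array([[1, 1], [1, 1]]), I2]]  # blocks of L_1
blocks_2 = [[I2, Z2], [np.array([[1, 1], [1, 2]]), I2]]  # blocks of L_2

C1 = [(0, 0), (1, 2), (2, 1)]                            # the F_3-span of (1, 2)
C2 = list(itertools.product(range(q), repeat=2))         # all of F_3^2
print(is_unambiguous([C1, C2], blocks_1, q))  # True:  (1, 2) is achievable for L_1
print(is_unambiguous([C1, C2], blocks_2, q))  # False: F_{2,1} is invertible in L_2
```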
The following result states a quite intuitive property of achievable rate regions.
**Proposition 17**.: Let \(\mathcal{L}\) be a \(q\)-LMUC as in Definition 6 and let \(m\geq 1\). If \((\alpha_{1},\ldots,\alpha_{n})\in\mathcal{R}_{m}(\mathcal{L})\) and \((\beta_{1},\ldots,\beta_{n})\in\log_{q^{m}}(\mathbb{N}^{n})\) satisfies \(\beta_{i}\leq\alpha_{i}\) for all \(i\in\{1,\ldots,n\}\), then we have \((\beta_{1},\ldots,\beta_{n})\in\mathcal{R}_{m}(\mathcal{L})\).
Proof.: Since \((\alpha_{1},\ldots,\alpha_{n})\in\mathcal{R}_{m}(\mathcal{L})\), there exists an \(m\)-code \(C=C_{1}\times\cdots\times C_{n}\) unambiguous for \(\mathcal{L}\) with \(|C_{i}|=q^{m\alpha_{i}}\) for all \(i\). Since \((\beta_{1},\ldots,\beta_{n})\in\log_{q^{m}}(\mathbb{N}^{n})\), for all \(i\) there exists \(D_{i}\subseteq C_{i}\) with \(|D_{i}|=q^{m\beta_{i}}\). By Remark 14, the \(m\)-code \(D=D_{1}\times\cdots\times D_{n}\) is unambiguous for \(\mathcal{L}\). This establishes the desired result.
We now investigate how the various achievable rate regions relate to each other. The following two results are inspired by (but do not immediately follow from) the concept of time sharing.
**Proposition 18**.: Let \(\mathcal{L}\) be a \(q\)-LMUC as in Definition 6 and let \(m,m^{\prime}\geq 1\). We have
\[\mathcal{R}_{m+m^{\prime}}(\mathcal{L})\supseteq\frac{m\,\mathcal{R}_{m}( \mathcal{L})+m^{\prime}\,\mathcal{R}_{m^{\prime}}(\mathcal{L})}{m+m^{\prime}}.\]
Proof.: Let \(C\) and \(C^{\prime}\) be an unambiguous \(m\)-code and an unambiguous \(m^{\prime}\)-code, respectively, for \(\mathcal{L}\). Let \(\{\gamma_{1},...,\gamma_{m}\}\) and \(\{\gamma_{1}^{\prime},...,\gamma_{m^{\prime}}^{\prime}\}\) be ordered bases of \(\mathbb{F}_{q^{m}}\) and \(\mathbb{F}_{q^{m^{\prime}}}\) over \(\mathbb{F}_{q}\), respectively. We denote by \(\varphi_{m}:\mathbb{F}_{q}^{m}\rightarrow\mathbb{F}_{q^{m}}\) the \(\mathbb{F}_{q}\)-isomorphism \((a_{1},\ldots,a_{m})\mapsto\sum_{i=1}^{m}a_{i}\gamma_{i}\). Define \(\varphi_{m^{\prime}}:\mathbb{F}_{q}^{m^{\prime}}\rightarrow\mathbb{F}_{q^{m^{ \prime}}}\) analogously. Fix an ordered basis \(\{\beta_{1},\ldots,\beta_{m+m^{\prime}}\}\) of \(\mathbb{F}_{q^{m+m^{\prime}}}\) over \(\mathbb{F}_{q}\). We extend these three
\(\mathbb{F}_{q}\)-linear maps coordinate-wise. Take
\[D:=\varphi_{m}^{-1}(C)\times\varphi_{m^{\prime}}^{-1}(C^{\prime}).\]
Then the \((m+m^{\prime})\)-code \(\varphi_{m+m^{\prime}}(D)\) is unambiguous and it satisfies
\[\log_{q^{m+m^{\prime}}}\left(\left|\varphi_{m+m^{\prime}}(D)\right|\right)=\frac{m}{m+m^{ \prime}}\log_{q^{m}}(|C|)+\frac{m^{\prime}}{m+m^{\prime}}\log_{q^{m^{\prime}} }(|C^{\prime}|).\]
This establishes the desired result.
The following result shows that \(\mathcal{R}(\mathcal{L})\) contains the convex hull of \(\mathcal{R}_{1}(\mathcal{L})\).
**Theorem 19**.: Let \(\mathcal{L}\) be a \(q\)-LMUC as in Definition 6. Denote by \(\operatorname{conv}(\mathcal{R}_{1}(\mathcal{L}))\) the convex hull of \(\mathcal{R}_{1}(\mathcal{L})\), i.e., the set of convex combinations of the points of \(\mathcal{R}_{1}(\mathcal{L})\). Then
\[\mathcal{R}(\mathcal{L})\supseteq\operatorname{conv}(\mathcal{R}_{1}( \mathcal{L})).\]
Proof.: Define \(Z=|\mathcal{R}_{1}(\mathcal{L})|\) and let \(\mathcal{R}_{1}(\mathcal{L})=\{\alpha^{(\ell)}\ |\ 1\leq\ell\leq Z\}\). Let \(A:=\{(a_{1},...,a_{Z})\in[0,1]^{Z}\ |\ a_{1}+\cdots+a_{Z}=1\}\), and let \(a\in A\). We will show that for any real number \(\varepsilon>0\), there exists \(m\geq 1\) and \(\beta\in\mathcal{R}_{m}(\mathcal{L})\) with
\[\left\|\sum_{\ell=1}^{Z}a_{\ell}\alpha^{(\ell)}-\beta\right\|\leq\varepsilon.\]
This will imply the desired theorem by the definition of closure in the Euclidean topology.
Since the result is clear if \(\mathcal{R}_{1}(\mathcal{L})=\{(0,\ldots,0)\}\), we shall assume \(\nu:=\max_{\ell}\left\|\alpha^{(\ell)}\right\|>0\). For all \(\ell\in\{1,\ldots,Z\}\), fix an unambiguous \(1\)-code \(C^{(\ell)}=C_{1}^{(\ell)}\times\cdots\times C_{n}^{(\ell)}\) that achieves rate \(\alpha_{\ell}\), i.e., such that
\[\log_{q}\left(\left|C_{i}^{(\ell)}\right|\right)=\alpha_{i}^{(\ell)}\text{ for all }i.\]
Since \(A\cap\mathbb{Q}^{Z}\) is dense in \(A\), there exists a sequence \((s^{k})_{k\in\mathbb{N}}\subseteq A\cap\mathbb{Q}^{Z}\) with the property that \(\lim_{k\to\infty}s^{k}=a\). There exists \(k_{\varepsilon}\in\mathbb{N}\) such that \(\left\|s^{k}-a\right\|\leq\varepsilon/(\nu\sqrt{Z})\) for all \(k\geq k_{\varepsilon}\). Let \(s=s^{k_{\varepsilon}}\) and \(\beta=\sum_{\ell=1}^{Z}s_{\ell}\alpha^{(\ell)}\) and observe that
\[\left\|\sum_{\ell=1}^{Z}a_{\ell}\alpha^{(\ell)}-\beta\right\|=\left\|\sum_{\ell=1}^{Z}(a_{\ell}-s_{\ell})\alpha^{(\ell)}\right\|\leq\nu\sum_{\ell=1}^{Z}|s_{\ell}-a_{\ell}|\leq\nu\sqrt{Z}\left\|s-a\right\|\leq\varepsilon.\]
Therefore, it suffices to show that \(\beta\in\mathcal{R}_{m}(\mathcal{L})\) for some \(m\geq 1\). Write \(s=(s_{1},...,s_{Z})=(b_{1}/c_{1},...,b_{Z}/c_{Z})\), where the \(b_{\ell}\)'s and the \(c_{\ell}\)'s are integers, and let \(m=c_{1}\cdots c_{Z}\). Then \(\beta\) can be achieved in \(m\) rounds by using code \(C^{(\ell)}\) for \(mb_{\ell}/c_{\ell}\) rounds, \(\ell\in\{1,...,Z\}\), in any order. A more precise formulation can be obtained using field extension maps as in the proof of Proposition 18 (we do not go into the details).
## 4 An Outer Bound for the Achievable Rate Region
In this section, we establish an outer bound for the achievable rate region of a \(q\)-LMUC. Our proof technique derives a lower bound for the size of the fan-out sets introduced in Definition 10, which we obtain by estimating the "amount" of interference that the users cause to each other. The outer bound is stated in Theorem 24, which relies on two preliminary results.
**Remark 20**.: Let \(M\in\mathbb{F}_{q}^{a\times b}\) be any matrix. Embed \(\mathbb{F}_{q}\) into \(\mathbb{F}_{q^{m}}\), where \(m\geq 1\). It is well-known that the \(\mathbb{F}_{q}\)-rank of \(M\) is the same as its \(\mathbb{F}_{q^{m}}\)-rank.
We start with the following simple observation.
**Proposition 21**.: Let \(\mathcal{L}=(n,\boldsymbol{s},\boldsymbol{t},F)\) be a \(q\)-LMUC and let \(m\geq 1\) be an integer. If \(n=1\), then \(\mathcal{R}_{m}(\mathcal{L})=\{\alpha\in\log_{q^{m}}(\mathbb{N})\mid 0\leq\alpha \leq\operatorname{rank}F\}\).
Proof.: Let \(C\subseteq\mathbb{F}_{q^{m}}^{s_{1}}\) be an \(m\)-code. Then \(C\) is unambiguous if and only if \(F(x)\neq F(y)\) for all \(x,y\in C\) with \(x\neq y\), i.e., if and only if the elements of \(C\) belong to distinct equivalence classes of \(\mathbb{F}_{q^{m}}^{s_{1}}/\operatorname{ker}(F)\). This shows that if \(C\) is unambiguous, then \(|C|\leq q^{m\cdot\operatorname{rank}(F)}\). Vice versa, taking one representative for each class of \(\mathbb{F}_{q^{m}}^{s_{1}}/\operatorname{ker}(F)\) produces an unambiguous \(m\)-code.
In the remainder of the section, we show how the argument in the previous proposition extends to an arbitrary number of users. We will need the following preliminary result.
**Lemma 22**.: Let \(V,W\) be linear spaces over \(\mathbb{F}_{q^{m}}\), \(m\geq 1\), and let \(L:V\to W\) be an \(\mathbb{F}_{q^{m}}\)-linear map. For all non-empty sets \(A\subseteq V\), we have \(|L(A)|\cdot|\operatorname{ker}(L)|\geq|A|\).
Proof.: Define an equivalence relation on \(A\) by setting \(a\sim b\) if and only if \(L(a)=L(b)\), i.e., if and only if \(a-b\in\operatorname{ker}(L)\). Then we know from elementary set theory that \(|A/_{\sim}|=|L(A)|\). The equivalence class of \(a\in A\) is \((a+\operatorname{ker}(L))\cap A\) and, therefore, it has size at most \(|\operatorname{ker}(L)|\). Since the equivalence classes partition \(A\), we have \(|A|\leq|A/_{\sim}|\cdot|\operatorname{ker}(L)|\). This concludes the proof.
We are now ready to state the main result of this section, providing an outer bound for the achievable rate region of a \(q\)-LMUC.
**Notation 23**.: In the sequel, for a non-empty subset \(I\subseteq\{1,...,n\}\) and \(j\in I\), we denote by \(F_{I,j}\) the submatrix of \(F\) formed by the blocks indexed by \((i,j)\), as \(i\) ranges over \(I\).
The main result of this section is the following outer bound.
**Theorem 24**.: Let \(\mathcal{L}=(n,\boldsymbol{s},\boldsymbol{t},F)\) be a \(q\)-LMUC, and let \(m\geq 1\) be an integer. Let \((\alpha_{1},...,\alpha_{n})\in\mathcal{R}_{m}(\mathcal{L})\). Then for all non-empty \(I\subseteq\{1,\ldots,n\}\) and for all \(j\in I\), we have
\[\sum_{i\in I}\alpha_{i}\leq\operatorname{rank}(F_{I,j})-\operatorname{rank}(F_ {I\setminus\{j\},j})+\sum_{\begin{subarray}{c}k\in I\\ k\neq j\end{subarray}}s_{k}.\]
Therefore, for all non-empty \(I\subseteq\{1,\ldots,n\}\), we have
\[\sum_{i\in I}\alpha_{i}\leq\min_{j\in I}\left\{\operatorname{rank}(F_{I,j})- \operatorname{rank}(F_{I\setminus\{j\},j})+\sum_{\begin{subarray}{c}k\in I\\ k\neq j\end{subarray}}s_{k}\right\}.\]
Proof.: The result easily follows from the definitions if \(|I|=1\). Now fix an index set with \(|I|\geq 2\), a tuple \((\alpha_{1},...,\alpha_{n})\in\mathcal{R}_{m}(\mathcal{L})\), and an unambiguous \(m\)-code \(C=C_{1}\times\cdots\times C_{n}\) with \(\alpha_{i}=\log_{q^{m}}(|C_{i}|)\) for all \(i\). For \(i\notin I\), we replace \(C_{i}\) with an arbitrary subset of cardinality one. The resulting code is unambiguous by Remark 14, and it has
\[\log_{q^{m}}(|C_{i}|)=\begin{cases}\alpha_{i}&\text{ if }i\in I,\\ 0&\text{ if }i\notin I.\end{cases}\]
The remainder of the proof uses the following lower bound, which we will establish later.
**Claim A**.: For all \(j\in I\) and \(x\in C_{j}\), we have
\[|\text{Fan}_{j}(x,C)|\geq\frac{\prod_{\begin{subarray}{c}k\in I\\ k\neq j\end{subarray}}|C_{k}|}{|\ker F_{I\setminus\{j\},j}|}. \tag{3}\]
Note that by fixing \(j\in I\) and summing the inequality in Claim A over all \(x\in C_{j}\) one obtains
\[\sum_{x\in C_{j}}|\text{Fan}_{j}(x,C)|\geq\frac{\prod_{k\in I}|C_{k}|}{|\ker F_{I\setminus\{j\},j}|}\quad\text{for all }j\in I. \tag{4}\]
Since \(C\) is unambiguous, we have
\[\sum_{x\in C_{j}}|\text{Fan}_{j}(x,C)|=\left|\bigcup_{x\in C_{j}}\text{Fan}_{ j}(x,C)\right|\leq q^{m\cdot\text{rank}(F_{I,j})},\]
where the latter inequality follows from the fact that \(\bigcup_{x\in C_{j}}\text{Fan}_{j}(x,C)\) is contained in the image of \(F_{I,j}\). Therefore, the inequality in (4) implies
\[q^{m\cdot\text{rank}(F_{I,j})}\geq\frac{\prod_{k\in I}|C_{k}|}{|\ker(F_{I\setminus\{j\},j})|}\quad\text{for all }j\in I. \tag{5}\]
We also have
\[\dim_{\mathbb{F}_{q^{m}}}\ker F_{I\setminus\{j\},j}=\sum_{\begin{subarray} {c}k\in I\\ k\neq j\end{subarray}}s_{k}-\text{rank}(F_{I\setminus\{j\},j}).\]
Therefore, taking the logarithm with base \(q^{m}\) in (5) yields
\[\text{rank}(F_{I,j})\geq\sum_{i\in I}\alpha_{i}-\sum_{\begin{subarray}{c }k\in I\\ k\neq j\end{subarray}}s_{k}+\text{rank}(F_{I\setminus\{j\},j})\quad\text{for all }j\in I, \tag{6}\]
which is the desired bound.
It remains to show that Claim A holds. We only prove it for \(I=\{1,...,n\}\) and \(j=1\), as the proof for all other cases is the same (but more cumbersome notation-wise). Fix an arbitrary \(x\in C_{1}\) and view \(F_{\{1,...,n\},1}\) as an \(\mathbb{F}_{q^{m}}\)-linear map \(\mathbb{F}_{q^{m}}^{s_{1}+\cdots+s_{n}}\to\mathbb{F}_{q^{m}}^{t_{1}}\). Then \(\text{Fan}_{1}(x,C)\) is the image of \(C_{2}\times\cdots\times C_{n}\) under the map
\[f:\mathbb{F}_{q^{m}}^{s_{2}+\cdots+s_{n}}\to\mathbb{F}_{q^{m}}^{t_{1}},\qquad(x_{ 2},...,x_{n})\mapsto(x,x_{2},...,x_{n})F_{\{1,...,n\},1}.\]
We have \(f=xF_{11}+g\) as functions, where
\[g:\mathbb{F}_{q^{m}}^{s_{2}+\cdots+s_{n}}\to\mathbb{F}_{q^{m}}^{t_{1}},\qquad( x_{2},...,x_{n})\mapsto(x_{2},...,x_{n})F_{\{2,...,n\},1}.\]
Therefore, the images of \(C_{2}\times\cdots\times C_{n}\) under \(f\) and under \(g\) have the same cardinality, namely \(|\mathrm{Fan}_{1}(x,C)|\). Finally, by applying Lemma 22 to the \(\mathbb{F}_{q^{m}}\)-linear function \(g\), we obtain
\[|\mathrm{Fan}_{1}(x,C)|\cdot|\ker(g)|\geq|C_{2}\times\cdots\times C_{n}|,\]
which is the inequality in Claim A.
We illustrate Theorem 24 with two examples.
**Example 25**.: Let \(q\) be arbitrary and consider the \(q\)-LMUC \(\mathcal{L}=(2,(2,2),(2,2),F)\), where
\[F=\begin{pmatrix}1&0&0&0\\ 0&1&1&0\\ 0&1&1&0\\ 0&0&0&1\end{pmatrix}.\]
Note that \(\mathcal{L}\) is induced, for example, by the network in Figure 4.
By applying Theorem 24, we obtain that for all \(m\geq 1\) and all \((\alpha_{1},\alpha_{2})\in\mathcal{R}_{m}(\mathcal{L})\) we have \(\alpha_{1}+\alpha_{2}\leq 3\), since
\[\mathrm{rank}\begin{pmatrix}F_{11}\\ F_{21}\end{pmatrix}-\mathrm{rank}\,F_{21}+s_{2} =2-1+2=3,\] \[\mathrm{rank}\begin{pmatrix}F_{12}\\ F_{22}\end{pmatrix}-\mathrm{rank}\,F_{12}+s_{1} =2-1+2=3.\]
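The block ranks appearing in this bound are easy to verify mechanically. The sketch below is an illustrative check added here (it is not part of the original argument): it computes the relevant block ranks of \(F\) by Gaussian elimination modulo a prime; since \(F\) has \(0/1\) entries, its block ranks over \(\mathbb{F}_{q}\) coincide with the ranks over the prime subfield.

```python
import numpy as np

def rank_mod_p(mat, p):
    """Rank of an integer matrix over the prime field F_p (Gaussian elimination mod p)."""
    m = np.array(mat, dtype=np.int64) % p
    rows, cols = m.shape
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if m[i, c] % p != 0), None)
        if pivot is None:
            continue
        m[[r, pivot]] = m[[pivot, r]]
        m[r] = (m[r] * pow(int(m[r, c]), p - 2, p)) % p   # normalize the pivot row
        for i in range(rows):
            if i != r and m[i, c] % p != 0:
                m[i] = (m[i] - m[i, c] * m[r]) % p
        r += 1
    return r

# Transfer matrix of Example 25, with s = t = (2, 2); sender blocks index rows, receiver blocks index columns.
F = np.array([[1, 0, 0, 0],
              [0, 1, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 1]])
p, s = 2, (2, 2)   # for this 0/1 matrix every prime p gives the same block ranks

F_I1, F_21 = F[:, 0:2], F[2:4, 0:2]   # F_{I,1} and F_{I\{1},1} for I = {1, 2}
F_I2, F_12 = F[:, 2:4], F[0:2, 2:4]   # F_{I,2} and F_{I\{2},2}

print(rank_mod_p(F_I1, p) - rank_mod_p(F_21, p) + s[1])   # 2 - 1 + 2 = 3
print(rank_mod_p(F_I2, p) - rank_mod_p(F_12, p) + s[0])   # 2 - 1 + 2 = 3
```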
For \(m=1\), the \(1\)-codes \(C=\left(\mathbb{F}_{q}^{2},\langle(0,1)\rangle\right)\) and \(C=\left(\langle(1,0)\rangle,\mathbb{F}_{q}^{2}\right)\) are both unambiguous, meaning that the rates \((2,1)\) and \((1,2)\) are achievable in one shot. This implies that the upper bound of Theorem 24 is tight in this case.
Figure 4: Network for Example 25.
**Example 26**.: Let \(q\) be arbitrary and consider the \(q\)-LMUC \(\mathcal{L}=(2,(1,2),(1,2),F)\), where
\[F=\begin{pmatrix}1&1&1\\ 1&1&0\\ 1&0&1\end{pmatrix}.\]
Note that \(\mathcal{L}\) is induced, for example, by the network in Figure 5.
By applying Theorem 24 we obtain that for all \(m\geq 1\) and for all \((\alpha_{1},\alpha_{2})\in\mathcal{R}_{m}(\mathcal{L})\) we have \(\alpha_{1}+\alpha_{2}\leq 2\). Indeed,
\[\operatorname{rank}\begin{pmatrix}F_{11}\\ F_{21}\end{pmatrix}-\operatorname{rank}F_{21}+s_{2} =1-1+2=2,\] \[\operatorname{rank}\begin{pmatrix}F_{12}\\ F_{22}\end{pmatrix}-\operatorname{rank}F_{12}+s_{1} =2-1+1=2.\]
In the next section, we will show that whether or not this bound is sharp depends on the _characteristic_ of the finite field \(\mathbb{F}_{q}\) and not, for example, on \(m\).
## 5 The Role of the Field Characteristic
The goal of this section is to illustrate the role that the field characteristic plays in the problem we are considering. Note that the problem we are studying in this section is an extension of the problem studied until now. When defining a \(q\)-LMUC, since \(q\) is a given parameter of the network, the characteristic of the field is, in general, fixed. The problem we consider in this section is based on the remark that any field contains the neutral elements for addition and multiplication and that in general those are denoted by \(0\) and \(1\). This implies that any matrix with entries only in \(\{0,1\}\) can be the transfer matrix of a \(q\)-LMUC for any prime power \(q\). It is natural to look into the achievable rates regions across different fields for these types of \(q\)-LMUCs.
It is well known that the size of the field plays an important role in achieving the network capacity for multicast networks when the network code design is part of the problem, see [2, 3]. We show that in sharp contrast with this scenario, the _characteristic_ of the underlying field plays an important role in our model. Note that it is not the first time the characteristic of the field has played an important role in network coding. In [4], the authors show networks for which capacities are achievable in either even or odd characteristic fields. It is worth repeating that our problem differs from the one studied in [4] since the authors focus on constructing a network code, whereas in our case, the network code is frozen.

Figure 5: Network for Example 26.
We show that given the \(q\)-LMUC from Example 26, the achievable rates region of the network over an odd characteristic field strictly contains the achievable rates region over an even characteristic field.
**Theorem 27**.: Let \(\mathcal{L}=(2,(1,2),(1,2),F)\) be the \(q\)-LMUC from Example 26. Recall that
\[F=\begin{pmatrix}1&1&1\\ 1&1&0\\ 1&0&1\end{pmatrix}.\]
If \(q\) is odd, then \((1,1)\in\mathcal{R}_{1}(\mathcal{L})\subseteq\mathcal{R}(\mathcal{L})\). If \(q\) is even, then for any \(m\geq 1\) and any \((\alpha_{1},\alpha_{2})\in\mathcal{R}_{m}(\mathcal{L})\) we have
\[2\alpha_{1}+\alpha_{2}\leq 2. \tag{7}\]
In particular, \((1,1)\notin\mathcal{R}(\mathcal{L})\).
**Remark 28**.: Note that this result not only proves that the achievability regions of the network when using even or odd characteristics are different, but it also shows that the achievability region when communicating using an even characteristic field is strictly contained in the one obtained using an odd characteristic field. More specifically, an interested reader will be able to see that, when using even characteristic fields, the rate \((1,1)\) not only is not achievable but is bounded away, meaning that it cannot be achieved even with infinitely many uses of the network.
Proof.: Observe that if \(q\) is odd, then the \(1\)-code \(C=\mathbb{F}_{q}\times\langle(1,-1)\rangle\) is unambiguous and therefore \((1,1)\in\mathcal{R}_{1}(\mathcal{L})\), as claimed.
To prove the second part of the theorem, denote by \(f_{i,j}\) the multiplication by \(F_{i,j}\) on the right, for \(i,j\in\{1,2\}\). Let \((\alpha_{1},\alpha_{2})\in\mathcal{R}_{m}(\mathcal{L})\) and let \(C=C_{1}\times C_{2}\) be an unambiguous \(m\)-code with \(\log_{q^{m}}(|C_{1}|)=\alpha_{1}\) and \(\log_{q^{m}}(|C_{2}|)=\alpha_{2}\). Recall that by Equation (2), we have that \(|\mathrm{Fan}_{1}(x,C)|=|\mathrm{IS}_{1}(C)|\) for all \(x\in C_{1}\). Since \(C\) is unambiguous, we have \(|C_{1}|\cdot|\mathrm{IS}_{1}(C)|\leq|\mathbb{F}_{q^{m}}|=q^{m}\), or equivalently
\[|\mathrm{IS}_{1}(C)|\leq\frac{q^{m}}{|C_{1}|}. \tag{8}\]
Recall that, by definition, \(\mathrm{IS}_{1}(C)=f_{2,1}(C_{2})\). Observe that for all \(x\in\mathbb{F}_{q^{m}}\) we have \(f_{2,1}^{-1}(x)=\{(y,x-y)\mid y\in\mathbb{F}_{q^{m}}\}\) and therefore it holds that
\[C_{2}\subseteq f_{2,1}^{-1}(f_{2,1}(C_{2}))=f_{2,1}^{-1}(\mathrm{IS}_{1}(C)) =\bigcup_{x\in\mathrm{IS}_{1}(C)}f_{2,1}^{-1}(x)=\bigcup_{x\in\mathrm{IS}_{1} (C)}\{(y,x-y)\mid y\in\mathbb{F}_{q^{m}}\}.\]
In particular, all the elements of \(C_{2}\) are of the form \((y,x-y)\) for some \(x\in\mathrm{IS}_{1}(C)\) and \(y\in\mathbb{F}_{q^{m}}\). Now fix any \(y\in\mathbb{F}_{q^{m}}\) and \(x\in\mathrm{IS}_{1}(C)\) with \((y,x-y)\in C_{2}\), and observe that
\[\mathrm{Fan}_{2}((y,x-y),C)=f_{2,2}(y,x-y)+\mathrm{IS}_{2}(C)=(y,x-y)+\langle( 1,1)\rangle. \tag{9}\]
Here is where the characteristic of the field starts to play a crucial role. If \(q\) is even, then Equation (9) can be rewritten as
\[\text{Fan}_{2}((y,x+y),C)=(y,x+y)+\langle(1,1)\rangle=\{(z,x+z)\mid z\in\mathbb{F} _{q^{m}}\}=f_{2,1}^{-1}(x).\]
It follows that
\[\bigcup_{(y,x+y)\in C_{2}}\text{Fan}_{2}((y,x+y),C)\subseteq\bigcup_{x\in \text{IS}_{1}(C)}f_{2,1}^{-1}(x)=f_{2,1}^{-1}(\text{IS}_{1}(C)). \tag{10}\]
Again by Equation (2) and the fact that \(f_{1,2}\) is injective, we have \(|\text{Fan}_{2}((y,x+y),C)|=|\text{IS}_{2}(C)|=|C_{1}|\). Combining this fact with Equation (10) and with the unambiguity of \(C\) we obtain
\[|C_{2}|\leq\frac{|f_{2,1}^{-1}(\text{IS}_{1}(C))|}{|\text{Fan}_{2}((y,x+y),C)|} =\frac{|\text{IS}_{1}(C)|\cdot|\mathbb{F}_{q^{m}}|}{|C_{1}|}\leq\left(\frac{q^ {m}}{|C_{1}|}\right)^{2},\]
where the last inequality is a consequence of Equation (8). Taking the logarithm with base \(q^{m}\) of the inequality above, one gets \(\alpha_{2}\leq 2(1-\alpha_{1})\), as desired.
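For small parameters, Theorem 27 can also be confirmed by exhaustive search. The sketch below is an illustrative check only; it encodes the unambiguity condition used above (for each receiver \(j\), the fans of distinct codewords in \(C_{j}\) must be pairwise disjoint) for the network of Example 26, confirms that \(\mathbb{F}_{3}\times\langle(1,-1)\rangle\) is unambiguous, and verifies that over \(\mathbb{F}_{2}\) no pair of two-element codes is unambiguous, so that \((1,1)\notin\mathcal{R}_{1}(\mathcal{L})\) (the stronger statement, that the bound (7) holds for every \(m\), is what the proof establishes).

```python
import itertools

# Transfer matrix and block sizes of Example 26: s = (1, 2), t = (1, 2).
F = [[1, 1, 1],
     [1, 1, 0],
     [1, 0, 1]]

def outputs(q, x1, x2):
    """Receiver outputs (y_1, y_2) when user 1 sends x1 in F_q and user 2 sends x2 in F_q^2."""
    x = (x1,) + tuple(x2)
    y = [sum(x[i] * F[i][j] for i in range(3)) % q for j in range(3)]
    return tuple(y[:1]), tuple(y[1:])

def fan(q, j, c, C1, C2):
    """Fan_j(c, C): outputs at receiver j while the other user ranges over its code."""
    if j == 1:
        return {outputs(q, c, x2)[0] for x2 in C2}
    return {outputs(q, x1, c)[1] for x1 in C1}

def unambiguous(q, C1, C2):
    """Pairwise-disjoint fans at both receivers, the condition used in the proofs above."""
    for j, Cj in ((1, C1), (2, C2)):
        fans = [fan(q, j, c, C1, C2) for c in Cj]
        if len(set().union(*fans)) != sum(len(f) for f in fans):
            return False
    return True

# q = 3 (odd): the code F_3 x <(1, -1)> from the first part of the proof is unambiguous.
print(unambiguous(3, list(range(3)), [(z % 3, (-z) % 3) for z in range(3)]))        # True

# q = 2 (even): no pair of two-element codes is unambiguous, so (1, 1) is not in R_1(L).
print(any(unambiguous(2, list(D1), list(D2))
          for D1 in itertools.combinations(range(2), 2)
          for D2 in itertools.combinations(itertools.product(range(2), repeat=2), 2)))  # False
```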
The following proposition shows that some of the rates satisfying Equation (7) are achievable.
**Proposition 29**.: Let \(\mathcal{L}=(2,(1,2),(1,2),F)\) be the \(2^{m}\)-LMUC from Example 26. Then for any \(n\leq m\) we have \(\left(\frac{n}{m},2\left(1-\frac{n}{m}\right)\right)\in\mathcal{R}(\mathcal{L})\).
Proof.: Let \(\{x_{1},\ldots,x_{m}\}\) be an ordered basis of \(\mathbb{F}_{2^{m}}\) over \(\mathbb{F}_{2}\). Define \(C_{1}=\langle x_{1},\ldots,x_{n}\rangle_{\mathbb{F}_{2}}\) and \(C_{2}=\langle(x_{i},0),(0,x_{i})\mid i=n+1,\ldots,m\rangle_{\mathbb{F}_{2}}\). We can compute the interference sets \(\text{IS}_{1}(C)=f_{2,1}(C_{2})=\langle x_{n+1},\ldots,x_{m}\rangle_{\mathbb{F }_{2}}\) and \(\text{IS}_{2}(C)=f_{1,2}(C_{1})=\langle(x_{1},x_{1}),\ldots,(x_{n},x_{n}) \rangle_{\mathbb{F}_{2}}\). Thus \(C_{1}\times C_{2}\) is unambiguous because \(C_{1}\cap\text{IS}_{1}(C)=C_{2}\cap\text{IS}_{2}(C)=\{0\}\).
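The construction in this proof can be checked directly for small parameters. In the sketch below (an illustrative check, not part of the original proof), elements of \(\mathbb{F}_{2^{m}}\) are represented by their \(\mathbb{F}_{2}\)-coordinate vectors with respect to the basis \(\{x_{1},\ldots,x_{m}\}\), stored as \(m\)-bit integers; only additions occur in the relevant maps, so field addition (bitwise XOR) suffices, and the check verifies the unambiguity criterion used in the proof, \(C_{1}\cap\text{IS}_{1}(C)=\{0\}\) and \(C_{2}\cap\text{IS}_{2}(C)=\{(0,0)\}\).

```python
def span_f2(generators):
    """All F_2-linear combinations of the given bit-vector generators."""
    vecs = {0}
    for g in generators:
        vecs |= {v ^ g for v in vecs}
    return vecs

m, n = 4, 2
basis = [1 << i for i in range(m)]                       # coordinates of x_1, ..., x_m

C1 = span_f2(basis[:n])                                  # <x_1, ..., x_n>_{F_2}
high = span_f2(basis[n:])                                # <x_{n+1}, ..., x_m>_{F_2}
C2 = {(a, b) for a in high for b in high}                # <(x_i, 0), (0, x_i) | i = n+1, ..., m>_{F_2}

IS1 = {a ^ b for (a, b) in C2}                           # IS_1(C) = f_{2,1}(C_2)
IS2 = {(c, c) for c in C1}                               # IS_2(C) = f_{1,2}(C_1)

print(C1 & IS1 == {0})                                   # True
print(C2 & IS2 == {(0, 0)})                              # True
print(len(C1) == 2 ** n, len(C2) == 2 ** (2 * (m - n)))  # rates n/m and 2(1 - n/m)
```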
## 6 Conclusions
We considered the multiple unicast problem over a coded network where the network code is linear over a finite field and fixed. We introduced a framework to define and investigate the achievable rate region in this context. We established an outer bound on the achievable rate region and provided examples where the outer bound is sharp. Finally, we illustrated the role played by the field characteristic in this problem, which is different from what is generally observed in the context of linear network coding.
|
2301.00075 | Optimal Motion Generation of the Bipedal Under-Actuated Planar Robot for
Stair Climbing | The importance of humanoid robots in today's world is undeniable, one of the
most important features of humanoid robots is the ability to maneuver in
environments such as stairs that other robots can not easily cross. A suitable
algorithm to generate the path for the bipedal robot to climb is very
important. In this paper, an optimization-based method to generate an optimal
stairway for under-actuated bipedal robots without an ankle actuator is
presented. The generated paths are based on zero and non-zero dynamics of the
problem, and according to the satisfaction of the zero dynamics constraint in
the problem, tracking the path is possible, in other words, the problem can be
dynamically feasible. The optimization method used in the problem is a
gradient-based method that has a suitable number of function evaluations for
computational processing. This method can also be utilized to go down the
stairs. | Aref Amiri, Hassan Salarieh | 2022-12-31T00:01:57Z | http://arxiv.org/abs/2301.00075v1 | # Optimal Motion Generation of the Bipedal Under-Actuated Planar Robot for Stair Climbing
###### Abstract
The importance of humanoid robots in today's world is undeniable; one of their most important features is the ability to maneuver in environments, such as stairs, that other robots cannot easily cross. A suitable algorithm to generate the climbing path for a bipedal robot is therefore essential. In this paper, an optimization-based method to generate an optimal stair-climbing trajectory for under-actuated bipedal robots without an ankle actuator is presented. The generated paths are based on the zero and non-zero dynamics of the problem, and since the zero-dynamics constraint is satisfied, tracking the path is possible; in other words, the problem is dynamically feasible. The optimization method used is a gradient-based method that requires a suitable number of function evaluations for computational processing. This method can also be utilized to go down the stairs.
Bipedal robot, under-actuated, optimization, motion planning
**Introduction**
Inspired by human body physics, bipedal robots have many degrees of freedom and can perform various actions through their joint movements. Bipedal robots can adapt to environments that wheeled robots are unable to traverse. The study of path (trajectory) generation methods, which provide the reference output for the control problem of bipedal robots, is therefore essential. For a bipedal robot to climb stairs, it is necessary to analyze its motion while ascending, to plan how the robot should move, and to determine the foot positions for walking on the stairs [1].
So far, research has been carried out on how to go up and down stairs and on finding suitable or optimal paths for bipedal robots. Several papers have used optimization algorithms, with the robot joint angles represented as polynomial functions, to design an optimal path for a six-degree-of-freedom bipedal robot [2]. Some articles have even planned paths for multi-legged robots to cross stairs [3]. Others have used stability criteria such as ZMP in designing their paths [4-7]. However, this approach is only applicable to robots that have feet (soles) with ankle joint actuators; such robots often maneuver much more slowly than under-actuated robots without feet and, because of the relatively large feet, waste more energy. Some articles derive their initial path from motion-capture data and then refine the results with optimization methods [8]. Nevertheless, according to the existing literature, few articles have attempted to design a holonomic path for under-actuated bipedal robots without feet. Due to the importance of optimal motion planning, a great deal of work has been done in this area in recent years.
In this paper, the problem of motion planning is investigated to find optimal paths for under-actuated bipedal robots to step onto stairs; the resulting paths, used as control references, allow the robot to move properly and optimally. The remainder of this article is organized as follows. In the first part, the dynamic model of the bipedal robot is derived. In the second part, the constraints of the optimization problem are examined. In the third part, the cost function, the optimization method, and the search for a suitable movement gait are discussed. In the fourth section, the results are presented and discussed, and the article ends with the conclusions.
## Dynamics equation
The dynamic model of the robot is shown in Figure 1. The robot has 7 degrees of freedom and 5 links; each leg has two joints (one at the knee and the other at the hip) and 3 degrees of freedom. We assume that the contact between the tip of the leg and the ground is a point contact.
Figure 1: Planar bipedal robot
The robot's motion is planar, and the robot has 4 actuators: two at the knees and two at the junction of the hip and the trunk, so that there is one actuator between each leg and the trunk. It is assumed that when the tip of the swing leg hits the ground, the other leg leaves the ground; in other words, the robot has no double-support phase. Thus, when moving on the stairs, no time is spent with both feet on the ground. The hybrid dynamic model of the robot is therefore a combination of a single-support phase and a collision phase. The equations of the hybrid model are as follows:
\[\Sigma:\begin{cases}\dot{x}=f(x)+g(x)u&x^{-}\notin\Gamma\\ x^{+}=\Delta(x^{-})&x^{-}\in\Gamma\end{cases} \tag{1}\]
The vector \(x\!=\!(q^{T},\dot{q}^{T})^{T}\) consists of the generalized coordinates and their derivatives. \(\Delta\) is a map that gives the states of the system immediately after the collision, and the superscripts \(-\) and \(+\) indicate the states of the system just before and just after the collision, respectively. The switch condition is as follows:
\[\Gamma=\left\{(q,\dot{q})\in x\,|\,P_{z}(q)=0,P_{x}(q)>0\right\} \tag{2}\]
In equation (2), \(P_{x}\) represents the horizontal position of the tip of the swing leg and \(P_{z}\) represents its vertical position.
The dynamic equations of the robot before and after the collision and in the single support phase can be written as follows:
\[M(q)\ddot{q}+C(q,\dot{q})\dot{q}+G(q)=B(q)u \tag{3}\]
The matrix \(B\) maps the input torque vector into the generalized forces and is not square due to the under-actuation of the system.
In Equation (1), there is an expression called the zero dynamics, and it is easy to separate this term if the generalized coordinates of the system are written in relative form (as has been done in this paper). Satisfying this constraint is important: if it is not satisfied, the problem of optimizing the input torques is practically meaningless, because the resulting torques cannot actually be applied to the system. Although the solution may be kinematically feasible, it is not feasible in terms of (open-loop) control, i.e., it is not dynamically possible.
## Optimization problem
The most important constraint of the problem, the zero-dynamics constraint, was introduced in the previous section. The other constraints are also important for planning the robot's movement in the best way; the constraints of the optimization problem are generally classified into two groups: constraints based on dynamics and constraints based on kinematics.
### Dynamic constraints
Torque limit: the actuators can only produce torques up to a certain limit (inequality constraint).
Zero dynamics: its importance was discussed earlier (equality constraint).
Friction coefficient limit: for the robot to move in real environments, the ratio of the horizontal to the vertical contact force must stay within a certain limit. In other words, the friction coefficient required for stepping should not exceed a value that cannot be realized in real environments (inequality constraint).
### Kinematic constraints

The joint trajectories are parameterized by polynomial functions of time, since such functions are uniform and smooth, and they are also simple to differentiate.
\[q_{k}\left(t\right)=\sum_{i=0}^{4}\alpha_{k,i}t^{i} \tag{5}\]
The degree of this polynomial must be chosen so that the number of optimization parameters, which equals the number of polynomial coefficients, is appropriate: the minimum value that still allows a smooth motion satisfying the mentioned constraints. In this article, we choose polynomials of order 4, which leave enough freedom in the optimization problem without making the number of optimization parameters unreasonably large.
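As an illustration of the parameterization in Eq. (5), the following sketch (not from the paper; the coefficient values shown are arbitrary) evaluates the five joint angles of Table 2 together with their velocities and accelerations, which are the quantities entering the dynamic constraints. In the optimization, the coefficients \(\alpha_{k,i}\) are the decision variables.

```python
import numpy as np

STEP_TIME = 0.5            # time to cross one step, from the Results section (s)
N_JOINTS, ORDER = 5, 4     # five joint angles q_1..q_5, order-4 polynomials as in Eq. (5)

def joint_trajectories(alpha, t):
    """Return q(t), dq(t), ddq(t) for the polynomials q_k(t) = sum_i alpha[k, i] * t**i."""
    q, dq, ddq = [], [], []
    for coeffs in alpha:                       # one row of coefficients per joint
        poly = np.polynomial.Polynomial(coeffs)
        q.append(poly(t))
        dq.append(poly.deriv(1)(t))
        ddq.append(poly.deriv(2)(t))
    return np.array(q), np.array(dq), np.array(ddq)

# Arbitrary coefficients just to exercise the function; the optimizer would choose these.
alpha = np.zeros((N_JOINTS, ORDER + 1))
alpha[:, 0] = [0.2618, -1.3140, -1.2267, -0.0219, 0.0]    # start at the initial configuration
t_grid = np.linspace(0.0, STEP_TIME, 50)
q, dq, ddq = joint_trajectories(alpha, t_grid)
print(q.shape, dq.shape, ddq.shape)                        # (5, 50) each
```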
Method of solving the optimization problem: the optimization problem is solved with a variable metric method for constrained optimization. This is a gradient-based method that provides a good solution quickly. Another advantage of this method is that it does not easily leave the feasible region [9].
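A minimal sketch of how such a constrained, gradient-based search could be set up is given below. It is not the paper's implementation: SciPy's SLSQP solver is used only as a readily available stand-in for the variable metric method, the cost function is a placeholder smoothness measure rather than the cost used in the paper, and only the boundary-configuration equality constraints from Table 2 are shown; the torque, friction and zero-dynamics constraints would be appended in the same way, evaluated through the robot dynamics of Eq. (3).

```python
import numpy as np
from scipy.optimize import minimize

STEP_TIME = 0.5
N_JOINTS, ORDER = 5, 4
Q_INIT  = np.array([0.2618, -1.3140, -1.2267, -0.0219, 0.0])     # Table 2, initial angles (rad)
Q_FINAL = np.array([0.1964,  0.0,     0.0219,  1.2267, 1.3140])  # Table 2, final angles (rad)

def unpack(x):
    return x.reshape(N_JOINTS, ORDER + 1)

def q_of_t(alpha, t):
    return alpha @ (t ** np.arange(ORDER + 1))

def cost(x):
    """Placeholder cost: mean squared joint acceleration over the step (not the paper's cost)."""
    alpha, t = unpack(x), np.linspace(0.0, STEP_TIME, 50)
    ddq = np.array([np.polynomial.Polynomial(a).deriv(2)(t) for a in alpha])
    return float(np.mean(ddq ** 2))

constraints = [
    {"type": "eq", "fun": lambda x: q_of_t(unpack(x), 0.0) - Q_INIT},
    {"type": "eq", "fun": lambda x: q_of_t(unpack(x), STEP_TIME) - Q_FINAL},
    # torque-limit, friction and zero-dynamics constraints would be added here,
    # each evaluated through the dynamics of Eq. (3) along the trajectory
]

res = minimize(cost, np.zeros(N_JOINTS * (ORDER + 1)), method="SLSQP", constraints=constraints)
print(res.success, res.fun)
```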
**Results and Discussion**
Following the model and algorithm presented above, a bipedal robot has been simulated to climb stairs. The height of each stair is 20 cm and its width is 40 cm. The robot model specifications are given in Table 1. The initial and final joint angles of the bipedal robot are given in Table 2; these configurations are obtained intuitively from the human configuration. The time to cross each step is 0.5 seconds. The torque limit applied to the system is 150 N.m and the maximum angular velocity of the motors is 10 rad/sec.
According to Figure 3, the torques have a good margin from saturation and, compared to the articles and research reviewed in the introduction, more optimal results have been obtained; the zero-dynamics constraint (\(\tau_{v}\)) is also satisfied very well.
According to Figure 4, the generated path requires a friction coefficient of at most 0.69 to avoid slipping, so the robot is able to move on all surfaces with a friction coefficient higher than 0.69.
According to Figure 5, the generated paths, due to the nature of the polynomial functions, have a smooth and non-breaking behavior, and the angular velocities are far from their saturation limit.
\begin{table}
\begin{tabular}{|l|l|} \hline
**Symbol** & **Value** \\ \hline \(\mathbf{m_{i},m_{s}}\) & 3.2 kg \\ \hline \(\mathbf{m_{i},m_{s}}\) & 6.8 kg \\ \hline \(\mathbf{m_{i},m_{s}}\) & 20 kg \\ \hline \(\mathbf{I_{i},I_{s}}\) & 0.93 kg-m\({}^{2}\) \\ \hline \(\mathbf{I_{i},I_{s}}\) & 1.08 kg-m\({}^{2}\) \\ \hline \(\mathbf{I_{s}}\) & 2.22 kg-m\({}^{2}\) \\ \hline \(\mathbf{I_{i},I_{s}}\) & 0.4 m \\ \hline \(\mathbf{I_{i},I_{i}}\) & 0.4 m \\ \hline \(\mathbf{I_{s}}\) & 0.625 m \\ \hline \(\mathbf{d_{i},d_{s}}\) & 0.128 m \\ \hline \(\mathbf{d_{i},d_{s}}\) & 0.163 m \\ \hline \(\mathbf{d_{i}}\) & 0.2 m \\ \hline \end{tabular}
\end{table}
Table 1: Rabbit robot properties [10]
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Parameters** & **Initial value(rad)** & **Final value(rad)** \\ \hline \(\mathbf{q_{i}}\) & 0.2618 & 0.1964 \\ \hline \(\mathbf{q_{i}}\) & -1.3140 & 0 \\ \hline \(\mathbf{q_{i}}\) & -1.2267 & 0.0219 \\ \hline \(\mathbf{q_{i}}\) & -0.0219 & 1.2267 \\ \hline \(\mathbf{q_{i}}\) & 0 & 1.3140 \\ \hline \end{tabular}
\end{table}
Table 2: The initial and final configuration
Figure 4: Friction coefficient
Figure 5: Angles vs. angular velocities
Figure 3: Input torques
As can be seen in Figure 6, the robot's movement is quite natural and very similar to human movement. The trunk is kept in a good posture, and the tips of the feet and the other links do not touch the surfaces except at the beginning and the end of the movement. Taken together, the presented results show that the generated path is an optimal path for a proper gait of the under-actuated bipedal robot.
## Conclusions
In this article, we presented a method to generate optimal motions for a bipedal robot and used it to find paths that the 'Rabbit' robot can track to climb stairs optimally. The process consists of 3 parts: derivation of the robot dynamics (since the optimization is model-based), design of the constraints based on dynamics and kinematics, and the optimization itself. As a result, a series of virtual holonomic paths were extracted in which the hybrid zero dynamics of the problem is also satisfied, so tracking the paths is possible for under-actuated robots.
In the future, we plan to use a new method called impact invariance to design the above path, which guarantees the periodicity of the proposed paths.
|
2303.18043 | Online dynamic flat-field correction for MHz Microscopy data at European
XFEL | The X-ray microscopy technique at the European X-ray free-electron laser
(EuXFEL), operating at a MHz repetition rate, provides superior contrast and
spatial-temporal resolution compared to typical microscopy techniques at other
X-ray sources. In both online visualization and offline data analysis for
microscopy experiments, baseline normalization is essential for further
processing steps such as phase retrieval and modal decomposition. In addition,
access to normalized projections during data acquisition can play an important
role in decision-making and improve the quality of the data. However, the
stochastic nature of XFEL sources hinders the use of existing flat-flied
normalization methods during MHz X-ray microscopy experiments. Here, we present
an online dynamic flat-field correction method based on principal component
analysis of dynamically evolving flat-field images. The method is used for the
normalization of individual X-ray projections and has been implemented as an
online analysis tool at the Single Particles, Clusters, and Biomolecules and
Serial Femtosecond Crystallography (SPB/SFX) instrument of EuXFEL. | Sarlota Birnsteinova, Danilo E. Ferreira de Lima, Egor Sobolev, Henry J. Kirkwood, Valerio Bellucci, Richard J. Bean, Chan Kim, Jayanath C. P. Koliyadu, Tokushi Sato, Fabio Dall'Antonia, Eleni Myrto Asimakopoulou, Zisheng Yao, Khachiwan Buakor, Yuhe Zhang, Alke Meents, Henry N. Chapman, Adrian P. Mancuso, Pablo Villanueva-Perez, Patrik Vagovic | 2023-03-31T13:27:37Z | http://arxiv.org/abs/2303.18043v1 | # Online dynamic flat-field correction for MHz Microscopy data at European XFEL
###### Abstract
The X-ray microscopy technique at the European X-ray free-electron laser (EuXFEL), operating at a MHz repetition rate, provides superior contrast and spatio-temporal resolution compared to typical microscopy techniques at other X-ray sources. In both online visualization and offline data analysis for microscopy experiments, baseline normalization is essential for further processing steps such as phase retrieval and modal decomposition. In addition, access to normalized projections during data acquisition can play an important role in decision-making and improve the quality of the data. However, the stochastic nature of XFEL sources hinders the use of existing flat-field normalization methods during MHz X-ray microscopy experiments. Here, we present an online dynamic flat-field correction method based on principal component analysis of dynamically evolving flat-field images. The method is used for the normalization of individual X-ray projections and has been implemented as an online analysis tool at the Single Particles, Clusters, and Biomolecules and Serial Femtosecond Crystallography (SPB/SFX) instrument of EuXFEL.
## 1 Introduction
In the past few decades, X-ray microscopy has been established as an important tool for the study of a large range of applications, from biology to materials research, at third- and fourth-generation X-ray synchrotron sources, where the achievable spatial resolution and sensitivity have been pushed to the limits [1]. Recently, fast megahertz (MHz)-rate X-ray microscopy synchronized to individual pulses has been pioneered at the Advanced Photon Source (APS) [2] and at the ESRF - the European Synchrotron [3, 4], which made it possible to image fast stochastic phenomena. MHz X-ray microscopy with frame acquisition synchronized to individual X-ray pulses has recently been demonstrated at the European XFEL (EuXFEL) [5]. This development was enabled by the unique properties of X-ray Free-Electron Lasers (XFELs). In particular, EuXFEL is currently the only XFEL source providing intense pulses at MHz rates [6], which opens up the potential for novel methods and applications. EuXFEL provides two to four orders of magnitude more photons per pulse (\(\sim 10^{12}-10^{13}\)) than the most brilliant synchrotron, i.e., the ESRF. EuXFEL also provides high spatial coherence, with a beam path of 1 km and a maximum of 4 \(\mu\)rad divergence, with photon energies up to 30 keV. The unique properties of EuXFEL come with unique issues, which complicate the acquisition of high-quality MHz microscopy data. XFEL X-ray pulses are generated through the SASE process [7], which leads to strongly stochastic spatial and temporal fluctuations. Another type of aberration appearing in microscopy data is often referred to as fixed-pattern noise. It originates from the different responses of the detector's pixels and from imperfections in the optical system, such as scintillator screens, lenses, etc. Ideally, image aberrations should be corrected so that only the signal variations originating from the X-rays' interaction with a sample are observed. Methods aiming to achieve this goal are often referred to as "flat-field correction" methods.
Flat-field correction methods rely on using reference images averaged over many realizations of acquired images. Reference images acquired with beam illumination are referred to as "flat-field images", while images acquired without beam illumination are referred to as "dark-field images". However, as a stationary method, traditional flat-field correction, which assumes a flat-field that is stable in time, is unable to sufficiently normalize stochastic effects, such as those stemming from the SASE process, which cause variations in the beam field from XFEL pulse to pulse.
There are several methods used for correcting both these effects [8, 9, 10, 11]. The issue of normalization of a dynamically changing flat-field has been tackled by a method [8], in which the reference flat-fields are represented in a latent lower-dimensional vector space and an "effective" flat-field is chosen per input image in the latent space. This method, called dynamic flat-field correction, was introduced in Ref. [8] for synchrotron data. The application of a similar method was demonstrated on EuXFEL's full-field imaging data [9] of high magnification and in combination with deep-learning approaches [10].
In this paper, we present an _online_ method used at EuXFEL to perform dynamic flat-field normalization for MHz microscopy data. EuXFEL users often require fast feedback when performing an experiment, such that they may adapt their configurations to improve the quality of the data taken. In such conditions, obtaining corrected images during the experiment may be the deciding factor between a successful experiment, or a failure to obtain reliable data. The online flat-field correction method described in this paper is optimized to provide output to the user as data arrives, minimising the delay between data-acquisition and analysis. The method proposed here also provides an indication of whether the flat-field images collected as references are sufficient, or whether a new dataset must be collected due to changes in the experimental conditions. This helps to enhance the method's performance, as a dataset of flat-fields that sufficiently describes the illumination at a given moment of measurement has a crucial impact on the resulting quality of the normalization when affected by the fluctuations of SASE illumination. Moreover, the normalized data provided during experiments can be further re-used as input to other processing tools, for which pre-processed data are essential. Our main objective is to ensure high quality image data at high speeds. We are using a software tool provided by EuXFEL, enabling us to use online data and follow with the normalization procedure in near real-time. The resulting online normalization tool for MHz microscopy was developed and implemented at the Single Particles, Clusters, and Biomolecules and Serial Femtosecond Crystallography (SPB/SFX) instrument of EuXFEL. Our online implementation based on the dynamic method led to a major improvement in the quality of corrected images compared to results using conventional flat-field correction approach.
The manuscript is organised as follows: in Section 2, we describe flat-field correction methods. The normalization algorithm and implementation are described in Section 3. We discuss the results and the performance of implementation using test data in Section 4, followed
by conclusion and discussion that are given in Section 5.
## 2 Flat-field correction
In this section, we briefly review the conventional flat-field correction method. We then expand on the dynamic flat-field correction method, which is implemented in this work. While we closely follow [8, 10], differences in the methods are pointed out as needed.
A stationary, conventional flat-field correction removes the effects of spatially uneven illumination functions. Such effects may be caused by variations in the detector response in different pixels or modules, or effects arising from the scintillator and optics configurations. One assumes that there has been a previous data acquisition of \(N_{f}\) flat-field images \(f_{j}\) and \(N_{d}\) dark-fields \(d_{k}\), where \(j\in[1,N_{f}]\) and \(k\in[1,N_{d}]\) refer to the reference images' index. In a stationary flat-field correction, one takes advantage only of the average flat-field and dark-field images, \(\bar{f}=\frac{1}{N_{f}}\sum_{j}f_{j}\) and \(\bar{d}=\frac{1}{N_{d}}\sum_{k}d_{k}\).
A simple procedure to correct the uneven detector response would be to correct each raw input image \(p_{i}\) by calculating:
\[p_{i}^{c}=\frac{p_{i}-\bar{d}}{\bar{f}-\bar{d}}. \tag{1}\]
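As a point of reference for the comparisons later in the paper, Eq. (1) amounts to a few array operations. The snippet below is a minimal illustration with synthetic frames (the small constant added to the denominator is only a numerical safeguard and is not part of Eq. (1)).

```python
import numpy as np

def conventional_correction(p, flats, darks, eps=1e-6):
    """Stationary flat-field correction of Eq. (1): (p - mean dark) / (mean flat - mean dark)."""
    f_bar, d_bar = flats.mean(axis=0), darks.mean(axis=0)
    return (p - d_bar) / np.maximum(f_bar - d_bar, eps)

rng = np.random.default_rng(0)
flats = 1.0 + 0.05 * rng.standard_normal((128, 250, 400))   # synthetic flat-field frames
darks = 0.1 + 0.01 * rng.standard_normal((128, 250, 400))   # synthetic dark-field frames
sample = flats[0] + 0.1                                      # stand-in for a projection
print(conventional_correction(sample, flats, darks).shape)   # (250, 400)
```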
As mentioned previously, due to the character of SASE processes, noise is shot-to-shot-dependent, introducing a change in the intensity profile on the scintillator screen between shots. This behaviour cannot be captured solely by the average flat-field image, causing the conventional method to fail for such conditions. To correct for the stochastic effects observed in SASE sources such as EuXFEL, a dynamic flat-field correction substitutes the average flat-field with an effective flat-field \(f_{i}^{\prime}\), which depends on the collected sample image \(p_{i}\) itself. With this alternative procedure, the corrected image is
\[p_{i}^{c}=\frac{p_{i}-\bar{d}}{f_{i}^{\prime}-\bar{d}}. \tag{2}\]
In principle, the true illumination function may be any image. However, some simplifying assumptions allow for an approximate estimate of \(f_{i}^{\prime}\). Firstly, it is natural to assume that such flat-field could be expressed as a function of the previously collected flat-fields \(f_{j}\). While such a function may be extremely complex, a simple first-order approximation would be that \(f_{i}^{\prime}\) is a linear combination of the collected flat fields. Namely, \(f_{i}^{\prime}=\sum_{j}w_{j}f_{j}\). For the purposes of an online flat-field correction, this initial assumption leads to a fast and reliable implementation, which may be supplemented by further corrections offline.
While one may approach the issue of the approximation of an effective flat-field with as many weights as there are collected flat-fields, one may further simplify such a description by assuming that only a few collected flat-fields are the main contributors to the sum. A data-reduction technique could, therefore, be of use to reduce the number of coefficients required. Assuming the flat-fields collected are samples of a Gaussian distribution, one may use Principal Components Analysis (PCA) [12, 13] to rewrite the linear flat-field combination as a linear combination of the principal components and keep only the components with the largest variance. Discarded components would then contribute little to the linear combination. In general, it is feasible to use \(M\) components, where \(M\ll N_{f}\). In such a setting, we expand the effective flat-field \(f_{i}^{\prime}\) as:
\[f_{i}^{\prime}=\bar{f}+\sum_{m}^{M}u_{m}w_{i\,m}, \tag{3}\]
where \(u_{m}\) is the \(m\)-th principal component of the mean-subtracted flat-fields, \(\{f_{j}-\bar{f}\}\), and \(w_{i\,m}\) are free parameters to be identified online.
While a functional form is available to parameterize the effective flat-field \(f^{\prime}_{i}\), one must determine a procedure to select the weights \(w_{i\,m}\), such that the effective flat-field is uniquely chosen for a given sample image \(p_{i}\). A constraint must be imposed on the corrected image for a meaningful choice of weights.
A key constraint is to preserve details originating from the interaction of X-rays with the sample. Such details appear as sharp transitions or edges in the image, separating the signal from the background content. The free parameters \(w_{i\,m}\) must, therefore, preserve such edge effects, while removing spurious variations caused by the uneven detector response and by an uneven flat-field. The regularization technique of Total Variation (TV) denoising is often used in image processing to achieve exactly these goals, by minimizing the TV of the corrected image while keeping it as similar as possible to the uncorrected one. Details concerning the calculation of the weights \(w_{i\,m}\) can be found in [8, 14].
The method described here consists of two separate steps. Initially, reference flat-fields and dark-fields are acquired and PCA is used to obtain the most relevant principal components of the flat-field dataset. During data-acquisition with a sample, for each individual frame, weights appropriate for the effective flat-field are found. After such calculation, the final corrected frame is obtained by applying the weights in the linear combination in Eq. 2.
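A compact sketch of these two steps is given below. It is only an illustration of the scheme in Eqs. (2)-(3): the principal components are obtained from an SVD of the mean-subtracted flat-fields, and for each frame the weights are chosen by minimizing the total variation of the corrected image with a quasi-Newton solver (as done in the implementation described in the next section). The down-sampling, the tuned solver precision and the EuXFEL-specific integration are omitted here.

```python
import numpy as np
from scipy.optimize import minimize

def pca_flat_fields(flats, n_components=20):
    """Mean flat-field and leading principal components of the mean-subtracted flats (Eq. (3))."""
    f_bar = flats.mean(axis=0)
    X = (flats - f_bar).reshape(len(flats), -1)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return f_bar, vt[:n_components].reshape(n_components, *flats.shape[1:])

def total_variation(img):
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def dynamic_correction(p, f_bar, d_bar, components, eps=1e-6):
    """Dynamic flat-field correction of Eq. (2) with TV-minimized weights for Eq. (3)."""
    def corrected(w):
        f_eff = f_bar + np.tensordot(w, components, axes=1)     # effective flat-field
        return (p - d_bar) / np.maximum(f_eff - d_bar, eps)
    res = minimize(lambda w: total_variation(corrected(w)),
                   x0=np.zeros(len(components)), method="L-BFGS-B")
    return corrected(res.x)

# Synthetic demonstration; in practice flats/darks come from the reference acquisitions.
rng = np.random.default_rng(1)
flats = 1.0 + 0.05 * rng.standard_normal((200, 64, 100))
darks = 0.1 + 0.01 * rng.standard_normal((64, 64, 100))
f_bar, components = pca_flat_fields(flats, n_components=5)
d_bar = darks.mean(axis=0)
print(dynamic_correction(flats[0] + 0.2, f_bar, d_bar, components).shape)   # (64, 100)
```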
Such a procedure must be implemented in a way that it can be run online at adequately fast rates to keep up with the data-acquisition rate of EuXFEL. We expand on the technical implementation in the next section.
## 3 Implementation
This section provides a schematic overview of the two steps of the main online normalization algorithm outlined in the previous section. Selected details of the implementation of the algorithm and specific features for visualization applied at EuXFEL are detailed here.
The schematic overview of the first part of the algorithm, employing PCA, can be found in Fig. 1 (a), whereas Fig. 1 (b) summarises the second part which estimates an individual flat-field for every sample image and visualizes the resulting normalized data.
Next, we list some of the specific functions and details used in the implementation of the algorithm. For the total variation minimization of the objective function a quasi-Newton algorithm is applied [15]. The precision parameter was tuned to maximise processing rates of the correction algorithm. Adjustments have been made to preserve the quality of the corrected images while increasing the processing speed.
The speed of the algorithm is further increased by down-sampling all the images entering the minimization procedure [8, 10], which also reduces the noise in the corrected images. Here, in order not to introduce more time-consuming steps, we omitted any additional algorithms aimed exclusively at noise reduction; these may be included at later stages for offline processing.
This work demonstrates a novel implementation of the dynamic flat-field correction algorithm to be used at EuXFEL's infrastructure, which can be used as a normalization technique during experiments. Moreover, considering the goal of user-friendliness of the online normalization tool, EuXFEL's software framework Karabo [16] is used here for online visualization. One of the two implementations uses the _metropc_ framework and the _extra-metro_ package [17], which is integrated within the Karabo framework and its Graphical User Interface (GUI). The _metropc_ framework allows for flexible adjusting of a script during an experiment. It allows the visualization of the corrected incoming data, in addition to raw sample images. The second implementation of the online dynamic flat-field correction algorithm is done using Karabo bridge [17] to access online data in the Karabo pipeline and perform parallelized analysis and further visualization using the Qt toolkit [18].
## 4 Results
Here we describe the results obtained with our dynamic flat-field correction implementation at EuXFEL. We demonstrate the algorithm on a Venturi tube dataset [19]. Data were acquired in March 2021 at the SPB/SFX instrument [20] of EuXFEL with a photon energy of 9.3 keV. A fast-frame-rate Shimadzu HPV-X2 camera, coupled to an \(8\mu m\) thick LYSO:Ce scintillator, was used as the imaging system with a \(10\times\) magnification [5]. Datasets consisting of 68 dark-field and 67 flat-field trains were taken to test our method. Each train consisted of 128 images with a size of \(250\times 400\) pixels. The effective pixel size was \(3.2\mu m\). The exposure time of an image was ultra-fast, being given by the X-ray pulse duration translated into the latent image at the scintillator. The duration of the signal emitted by the scintillator was given by the scintillator emission time \(\tau\) (LYSO:Ce \(\tau\sim 40\) ns). The frequency of acquired frames within a train was 1.128 MHz, and trains were collected at frequencies of \(0.08-0.1\) Hz, limited by the camera idle time. The performance of the dynamic flat-field correction algorithm on this dataset is demonstrated in this section.
The first step toward the normalization of online data is the processing of flat-field and dark-field data. Our aim was to acquire hundreds to thousands of flat-field images, which, at the current acquisition speed, requires up to 10 minutes of measurements. After the first step of the algorithm, as described in Fig. 1 (a), a preview of the cumulative sum of explained variance ratio of principal components can be viewed. This allows monitoring of the rate at which the selected number of principal components explain variance in the flat-field dataset. In addition, it enables the increase or decrease of the number of components before proceeding with the normalization of sample images.
The flat-field dataset consists of approximately 10 000 flat-field images, with a few instances shown in Fig. 2 (a); part (b) shows the mean flat-field image calculated over the whole dataset. In Fig. 3, selected principal components are depicted for the index of principal component \(m=\{1,2,3,16\}\). The first principal component, labeled \(m=1\), is similar to the mean flat-field image in Fig. 2 (b) and explains the largest fraction of the variance in the dataset. The following two components, \(m\in\{2,3\}\), capture the intensity variation in horizontal and vertical directions. Components \(m\geq 4\) show mostly horizontal changes in intensity with pronounced stripe-like features, as depicted for component \(m=16\).
Figure 1: Algorithm overviews of the first part with principal component analysis of flat-field dataset (a) and the second part performing the online dynamic flat-field correction and visualization (b).
The cumulative sum of explained variance ratio of the flat-field dataset is shown in Fig. 3 for the first 25 principal components. Figure 3 (b) shows that the first three components explain approximately 83% of the variance. Adding more components does not increase this significantly, and after component 3 the increases in the cumulative sum of explained variance are minor. However, as will be discussed in more detail later in this section, in order to correct for the majority of vertical features in sample images, one also needs to include those components with seemingly low values of explained variance, as they capture variations needed for a high-quality correction.
After identifying the most important components describing the flat-field illumination, one can continue with the normalization procedure for sample images described in Section 3. To better visualize the performance of the dynamic flat-field correction on the test data, in Fig. 4 we plot a sample image (a) and its counterparts corrected by the conventional (b) and the dynamic (c) method. The default number of principal components taken into consideration here is 20. Fig. 4 illustrates that the dynamic method (c) removes vertical features more successfully than the conventional correction method (b). Vertical features along the whole width of an image change position from frame to frame, as can be seen in the examples in Fig. 2 (a). Moreover, the absence of those features in the average flat-field image of Fig. 2 (b) makes the conventional method unable to successfully remove such artefacts from the acquired images. Even the fine stripes appearing on the left side of the average flat-field in Fig. 2 (b) are not removed by the conventional normalisation method in Fig. 4 (b), due to the slight change of their position during the measurement. Another challenging detail to eliminate is the circular spot in the middle of the sample image (a). Similar to the fine stripes, a better, although not perfect, reduction of the spot is reached only with the dynamic method in Fig. 4 (c).
The quality of flat-field corrected images was mostly assessed visually. To estimate the method's performance quantitatively, we calculated pixel value spread (pvs) as a deviation of an image from its average pixel value
\[\text{pvs}=\sqrt{\frac{\sum_{l}^{N^{\prime}}(p_{l}-\mu)^{2}}{N^{\prime}}},\qquad\mu=\frac{1}{N^{\prime}}\sum_{l}^{N^{\prime}}p_{l}, \tag{4}\]
where the index \(l\) iterates over the \(N^{\prime}\) pixels of image \(p\). We used only "flat" parts of the images, without any sample, in order to estimate the amount of noise present in an image. Higher pvs values are expected for uncorrected images with an uneven intensity profile, and lower values for images corrected in a way that removes most of the uneven structures and noisy pixels.
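For completeness, the two image-quality measures used in this section can be written in a few lines; the total variation shown here is the common anisotropic form, since the exact TV variant used for Figs. 5 and 6 is not spelled out in the text.

```python
import numpy as np

def pixel_value_spread(region):
    """pvs of Eq. (4): root-mean-square deviation of a sample-free region from its mean."""
    return float(np.sqrt(np.mean((region - region.mean()) ** 2)))

def total_variation(region):
    """Anisotropic total variation: sum of absolute differences between neighbouring pixels."""
    return float(np.abs(np.diff(region, axis=0)).sum() + np.abs(np.diff(region, axis=1)).sum())

rng = np.random.default_rng(2)
flat_strip = 1.0 + 0.1 * rng.standard_normal((50, 400))    # a synthetic sample-free region
print(pixel_value_spread(flat_strip), total_variation(flat_strip))
```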
In order to assess the quality of the correction for images at the same position within a train, we calculated TV and pvs for each of the 128 images acquired in one train and averaged them over 15 trains. The average values for both quantities and their standard deviations are shown in Fig. 5. While the pvs and TV values vary from frame to frame, one may see in Fig. 5 that the dynamically flat-field corrected images have a consistently lower TV (a) and spread (b), suggesting a lower noise level in the chosen sample-free part of the images. In contrast, the conventional flat-field correction shows very little benefit compared to no correction at all, as no statistically significant difference in TV and pvs values was found between uncorrected images and images corrected using the conventional method. This result was expected based on the visual comparison in Fig. 4, in which no visible removal of vertical features was observed. The standard deviation, expressed as a colored area for uncorrected images and both normalization methods in Fig. 5, describes train-to-train changes in the TV and pvs values for a given frame in the dataset. On average, the pvs value for a dynamically corrected image was found to be 0.069, while the spread for conventionally corrected images was 0.098. Uncorrected images have a pvs of 0.102.
Additionally, we studied the quality of the normalized images, as estimated by TV values, as a function of the number of principal components. The resulting corrected images were also assessed visually. Fig. 6 describes the changes in TV value due to a varying number of principal components used in the normalization procedure. One can notice a decrease in the average TV value for 15 and 20 principal components, followed by an increase after the 20-component mark. For a visual comparison, Fig. 7 provides examples of flat-field corrected images using a varying number of principal components in the normalization algorithm. Fig. 7 illustrates that including 2-10 principal components in the normalization procedure leads to non-optimal flat-field correction results. Fine line structures are still visible on the left part of the images for the 2- and 4-component cases. Vertical features along the whole width are not fully eliminated using 10 principal components. Increasing the number of principal components up to 10 results in decreased visibility of the vertical features, which also supports the observation of decreasing TV values in Fig. 6. The circular spot in the middle becomes less noticeable with a higher number of principal components, although it can be detected on all corrected images up to the case using 100 components. Including around twenty principal components seems to remove the majority of the unwanted features, and the results for 40 and 100 components in Fig. 6 do not lead to visibly better results. The TV values in Fig. 6 grow after 20 components, which may be caused by overfitting. The results listed in Figs. 6 and 7 indicate that the optimal number of principal components is around 20, where most of the features appearing on the flat-fields are removed.
Considering the idle time of the camera, our goal was to complete the correction before the next acquired train arrives. The idle time of the camera is around 15 s, meaning the camera is able to acquire one in 150 of EuXFEL's trains. This consideration does not take into account the setup time spent on the analysis of the flat- and dark-field datasets, as this process usually takes a few minutes and needs to be repeated on an hourly basis, whenever new flat- and dark-fields are taken. The proposed algorithm and both its implementations mentioned in Sec. 3 were able to keep up with the volume of incoming data. A parallelized version reached a correction time of approximately 1 s for a dataset consisting of one train with 128 images. Tests were performed on the online cluster (Intel Gold-6140 CPU @ 2.30 GHz, 768 GB). Moreover, the processing speeds should be able to sustain the load with the anticipated updates, which aim to shorten the idle time of the camera. The current performance of the algorithm is thus suitable for its use during experiments as a near real-time analysis tool.
Figure 2: Example instances (a) of the test flat-field dataset consisting of approximately 10 000 images and the average flat-field image (b) calculated over the whole test dataset.
Figure 4: Comparison of a sample image (a) without any normalization, conventionally flat-field corrected image (b) and dynamic flat-field corrected data (c). Only the bottom section of images (\((50,400)\) pixels) is shown here from originally sized images (\((250,400)\) pixels).
Figure 3: An example of principal components \(u_{m}\) for \(m=\{1,2,3,16\}\) (a), obtained from the test flat-field data shown in Fig. 2. The cumulative sum of explained variance ratio of the first 25 principal components is depicted in subfigure (b).
Figure 5: Comparison of total variation (a) and pixel value spread (b) for uncorrected images and images corrected by conventional and dynamical method. Values of TV and pvs are calculated depending on their position within a train, where the maximum frame number is 128. Both TV and pvs for each frame number are averaged over 15 trains and train-to-train changes are captured by their standard deviation expressed as a colored area.
Figure 6: Dependence of TV of images corrected by the dynamical method using a different number of principal components. Values are calculated for one train and are averaged over 128 images with gray colored area given by standard deviation of frame-to-frame changes within a train.
Figure 7: Comparison of uncorrected sample image and flat-field corrected images using the dynamical method with \([2,4,10,20,40,100]\) principal components.
## 5 Conclusion and Discussion
We have demonstrated a method for dynamic flat-field correction on MHz repetition rate EuXFEL data. It has been shown that, for X-ray MHz microscopy data [10], the dynamic method results in an improved correction compared to the conventional flat-field correction method, as it takes into account the temporal variations in the flat-field dataset that are characteristic of XFEL facilities. To compare the performance of the conventional and dynamic methods applied to the test data, we calculated the total variation and the pixel value spread for image areas without any sample. Both the quantitative comparison and the visual assessment have shown improved results with the dynamic normalization method compared to the conventional method. The normalisation algorithm has been implemented online to visualize raw and flat-field corrected data during experiments at the SPB/SFX instrument of European XFEL. The implementation was able to correct and visualize incoming data in near real-time, before the next data from the MHz-frame-rate camera arrived. In the future, we plan to extend its use by implementing new data analysis tools, such as data decomposition methods, and adding more features to the GUI.
## Acknowledgments
This work was performed within R&D project "MHz microscopy at EuXFEL: From demonstration to method", 2020 - 2022.
|
2309.12253 | SALSA-CLRS: A Sparse and Scalable Benchmark for Algorithmic Reasoning | We introduce an extension to the CLRS algorithmic learning benchmark,
prioritizing scalability and the utilization of sparse representations. Many
algorithms in CLRS require global memory or information exchange, mirrored in
its execution model, which constructs fully connected (not sparse) graphs based
on the underlying problem. Despite CLRS's aim of assessing how effectively
learned algorithms can generalize to larger instances, the existing execution
model becomes a significant constraint due to its demanding memory requirements
and runtime (hard to scale). However, many important algorithms do not demand a
fully connected graph; these algorithms, primarily distributed in nature, align
closely with the message-passing paradigm employed by Graph Neural Networks.
Hence, we propose SALSA-CLRS, an extension of the current CLRS benchmark
specifically with scalability and sparseness in mind. Our approach includes
adapted algorithms from the original CLRS benchmark and introduces new problems
from distributed and randomized algorithms. Moreover, we perform a thorough
empirical evaluation of our benchmark. Code is publicly available at
https://github.com/jkminder/SALSA-CLRS. | Julian Minder, Florian Grötschla, Joël Mathys, Roger Wattenhofer | 2023-09-21T16:57:09Z | http://arxiv.org/abs/2309.12253v2 | # SALSA-CLRS: A Sparse and Scalable Benchmark for Algorithmic Reasoning
###### Abstract
We introduce an extension to the CLRS algorithmic learning benchmark, prioritizing scalability and the utilization of sparse representations. Many algorithms in CLRS require global memory or information exchange, mirrored in its execution model, which constructs fully connected (_not sparse_) graphs based on the underlying problem. Despite CLRS's aim of assessing how effectively learned algorithms can generalize to larger instances, the existing execution model becomes a significant constraint due to its demanding memory requirements and runtime (_hard to scale_). However, many important algorithms do not demand a fully connected graph; these algorithms, primarily distributed in nature, align closely with the message-passing paradigm employed by Graph Neural Networks. Hence, we propose SALSA-CLRS, an extension of the current CLRS benchmark specifically with _scalability_ and _sparseness_ in mind. Our approach includes adapted algorithms from the original CLRS benchmark and introduces new problems from distributed and randomized algorithms. Moreover, we perform a thorough empirical evaluation of our benchmark. Code is publicly available at [https://github.com/jkminder/salsa-clrs](https://github.com/jkminder/salsa-clrs).
## 1 Introduction
Neural algorithmic reasoning combines the learning power of neural networks with the principles of algorithmic thinking. This fusion aims to promote logical reasoning and the ability to extrapolate. This is widely considered a weak spot for neural methods. Algorithms take various shapes and deal with sets, strings, images, or geometry. Several prominent and beautiful algorithms are concerned with graphs and networks. Graph algorithms usually take up a significant portion of algorithmic textbooks such as the CLRS textbook [1] and the CLRS benchmark [2], which is based on that textbook. Interestingly, the CLRS benchmark translates _every_ algorithmic problem into a common graph-based format. This approach yields the significant advantage of utilizing a single architecture across various scenarios. However, the emphasis on algorithmic diversity and unification in CLRS introduces significant constraints that hinder scalability.
The CLRS-30 dataset contains 30 algorithms operating within a centralized execution model that facilitates global information exchange, which is essential for numerous algorithms. This global information exchange is enabled by enforcing all problems to operate on a complete graph - each node can communicate with every other node, resulting in quadratic communication costs. To maintain information on the original topology, the CLRS framework augments these complete graphs with flags on each edge to indicate whether the edge exists in the input. This strategy has several limitations. While CLRS highlights its proficiency in assessing _out-of-distribution_ (OOD) capabilities, the reliance on a fully connected graph execution model imposes significant memory and computation constraints. This challenge is particularly pronounced as _graph_ algorithms are often designed with sparse graphs in mind [1].
Furthermore, when learning algorithms that guarantee correctness for any input size, evaluating models across a diverse range of large-scale inputs is crucial, as many studies have highlighted [3, 4, 5, 6, 7, 8]. Apart from considering large-scale test graphs, relying solely on a single graph generation mechanism can yield false conclusions about OOD performances [7]. The CLRS library in principle allows more flexibility and a custom generation. However, the default CLRS-30 dataset used for benchmarking provides OOD test graphs, limited to only four times the size of the training graphs, and both training and test graphs stem from the same graph generation mechanism. While under the CLRS execution model, moderately larger graphs (\(10\)x) might still be feasible on modern hardware, much larger graphs - in the order of \(100\)-fold scaling - become impossible to run due to their demanding memory requirements (Figure 1).
To address these challenges, we propose a more concise strategy. We focus solely on _graph_ algorithms, which can follow a distributed execution model, thus reducing reliance on global memory and information flow. This allows a transition to a sparse execution model. Furthermore, building upon the findings presented in [6], which underscore the superior learning and OOD performance of parallelized algorithms compared to their sequential counterparts, we also emphasize the importance of encompassing problems from the realm of distributed and randomized algorithms. Towards this end, we introduce SALSA-CLRS, a Sparse **A**lgorithmic **L**earning benchmark for **S**calable Architectures. Extending CLRS, our benchmark _i)_ leverages a sparse execution model to enable OOD test sets that cover graphs \(100\) times the size of training sets, _ii)_ adds new graph generators for sparse and diverse graphs, thus enabling a more thorough OOD evaluation, and _iii)_ incorporates distributed and randomized algorithms that align more closely with the execution models used by Graph Neural Networks (GNNs).
## 2 SALSA-CLRS Benchmark
The SALSA-CLRS Benchmark follows the structure of CLRS [2]. Each data point comprises a graph with \(n\) nodes and an algorithm trajectory. Each trajectory comprises a set of input, intermediate, and output features. Specifically, the input features capture the input state of the algorithm, along with positional identifiers for nodes to resolve tie-breaking scenarios. The intermediate features, referred to as hints, correspond to interim values of algorithm variables. These hints provide insight into the algorithm's inner workings and act as a means to encourage models to adhere closely to the algorithm's execution. It is worth noting that execution without hints is possible and may even be beneficial, as demonstrated in Section 3. Lastly, the output features directly relate to the solution of the given problem. Moreover, each data point contains a trajectory length, defining the number of steps required to solve the algorithm. Every feature is associated with a location - either a _node_, an _edge_, or the entire _graph_ - and possesses a corresponding type. SALSA-CLRS provides both pre-defined train-validation-test splits, facilitating model comparison and the capability to generate new data tailored to individual requirements. Beyond what CLRS-30 offers, SALSA-CLRS comes
with diverse graph types to explore OOD capabilities further. For comprehensive information, see Appendix A.3.2. The benchmark is implemented in PyG [9] and built with extendability in mind.
Figure 1: A visualization of the difference between the graph representation in SALSA-CLRS and CLRS. When dealing with large complete graphs, the memory demands become exceedingly impractical, leading to occurrences of _Out-Of-Memory_ (OOM) errors. SALSA-CLRS enables the evaluation of scalable architectures on graphs up to 100 times the size of the training graphs.
### Algorithms
SALSA-CLRS encompasses a set of six algorithms, adapting four from the original CLRS paper and introducing two novel additions from the field of distributed and randomized algorithms. The four CLRS algorithms were selected to ensure the representation of input, hint, and output features on the sparse graph: Breadth-first search (BFS), Depth-first search (DFS), Dijkstra, and Maximum Spanning Tree (MST). Please refer to Appendix A.2 for more details. While the algorithms introduced by CLRS-30 are inspired by sequential algorithms in the CLRS textbook, although in some cases heavily parallelized, the message-passing paradigm - essentially the driving mechanism behind GNNs - aligns closely with distributed computing principles. To encompass this perspective, we extend our benchmark by introducing two new distributed algorithms, drawn from _Mastering Distributed Algorithms_[10]. Numerous distributed algorithms incorporate randomness as a crucial component of their computation. In light of this, we enhance the CLRS framework by including the concept of randomness. In cases where an algorithm necessitates randomness, we precompute random values and treat them as regular input to the algorithm. We introduce two new algorithms: Distributed Maximal Independent Set (MIS) and Distributed Eccentricity. A description of both can be found in Appendix A.2.
### Graph Types
Building upon investigations [7] of CLRS and different graph types, we enrich the diversity of graph types compared to CLRS-30. While CLRS-30 works exclusively on Erdos-Renyi (ER) random graphs, the study by Mahdavi et al. [7] underscores the limitation of relying solely on ER graphs to assess the OOD capabilities of architectures. Recognizing this, we propose that broadening the spectrum of graph types is pivotal for a more comprehensive OOD evaluation. SALSA-CLRS comes with three distinct graph generation mechanisms: Erdos-Renyi graphs (ER) [11], Watts-Strogatz graphs (WS) [12], and Delaunay graphs. In contrast to CLRS-30, we reduce the ER edge probability to just above the minimum to maintain graph connectivity. WS graphs belong to the category of small-world graphs and exhibit a low clustering coefficient [13]. While still sparse, WS graphs show a very different structure. Delaunay graphs are planar and hence inherently sparse. We refer to Appendix A.3.1 for associated graph parameters.
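To make the three generators concrete, the following Python sketch builds ER, WS, and Delaunay graphs with `networkx` and `scipy`. The edge-probability margin, the neighbourhood size \(k\), and the rewiring probability are illustrative assumptions; the exact parameters used by the benchmark are those listed in Appendix A.3.1.

```python
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

def er_graph(n, rng, margin=1.2):
    # edge probability slightly above the ~ln(n)/n connectivity threshold (assumed margin)
    p = min(1.0, margin * np.log(n) / n)
    return nx.erdos_renyi_graph(n, p, seed=int(rng.integers(2**31)))

def ws_graph(n, rng, k=4, beta=0.1):
    # k nearest neighbours on a ring and rewiring probability beta (assumed values)
    return nx.connected_watts_strogatz_graph(n, k, beta, seed=int(rng.integers(2**31)))

def delaunay_graph(n, rng):
    # planar graph obtained from the Delaunay triangulation of n random points in the unit square
    points = rng.random((n, 2))
    graph = nx.Graph()
    graph.add_nodes_from(range(n))
    for tri in Delaunay(points).simplices:  # each simplex is a triangle (i, j, k)
        for a in range(3):
            for b in range(a + 1, 3):
                graph.add_edge(int(tri[a]), int(tri[b]))
    return graph

rng = np.random.default_rng(0)
for make in (er_graph, ws_graph, delaunay_graph):
    g = make(128, rng)
    print(make.__name__, g.number_of_nodes(), g.number_of_edges())
```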
## 3 Empirical Evaluation
In this section, we undertake an empirical evaluation by comparing three baseline models. Our analysis involves a comparison of training scenarios with and without hints, followed by comprehensive testing across all SALSA-CLRS test sets. This evaluation sheds light on deficiencies in the models on OOD test sets and therefore affirms the importance of the SALSA-CLRS benchmark.
Architectures. We use the same Encode-Process-Decode framework [14] as CLRS, but propose a slight simplification: we omit the re-encoding of decoded hints to update the node hidden states. This simplifies the computational graph, making the architectures more scalable. We compare three baseline processors: a GRU [15] adapted GIN module [17] (see the footnote below), RecGNN [3], a recurrent message-passing GNN for algorithmic tasks, and PGN [18], which has shown promising performance on the original CLRS benchmark. All architectures incorporate skip connections, implemented by forwarding the encoded input and the two most recent hidden states to the processor. This mechanism helps mitigate vanishing-gradient issues [19]. For a comprehensive overview of the Encode-Process-Decode architecture, our proposed changes, and the baselines, please see Appendix A.4.1.
Footnote 2: Dijkstra and MST require edge weights, so we use GINE [16].
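As an illustration of this recurrent step, the sketch below implements a stripped-down Encode-Process-Decode module in PyTorch Geometric: the encoded input and the two most recent hidden states are concatenated and fed to a single processor, and decoded hints are never re-encoded. The hidden size, the plain GIN processor (without the GRU gating of the GIN baseline), and the fixed number of steps are simplifying assumptions, not the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GINConv

class EncodeProcessDecode(nn.Module):
    def __init__(self, in_dim, hidden=64, out_dim=1):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden)
        mlp = nn.Sequential(nn.Linear(3 * hidden, hidden), nn.ReLU(),
                            nn.Linear(hidden, hidden))
        self.processor = GINConv(mlp)  # plain GIN; the paper's baseline adds GRU gating
        self.decoder = nn.Linear(hidden, out_dim)

    def forward(self, x, edge_index, steps):
        z = self.encoder(x)                  # encoded input, reused at every step
        h_prev = torch.zeros_like(z)
        h = torch.zeros_like(z)
        for _ in range(steps):
            # skip connection: encoded input plus the two most recent hidden states
            h_prev, h = h, torch.relu(
                self.processor(torch.cat([z, h, h_prev], dim=-1), edge_index))
        return self.decoder(h)               # output head; hints are not re-encoded

model = EncodeProcessDecode(in_dim=2)
x = torch.randn(10, 2)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
print(model(x, edge_index, steps=5).shape)   # torch.Size([10, 1])
```

In practice the number of processing steps would follow the trajectory length of each instance, and separate decoder heads would be used for hint and output predictions.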
Experiments.Each baseline model is trained for each algorithm with and without the inclusion of hints. Every run is confined to 100 epochs with early stopping. The batch size is eight graphs. All reported values are means over five runs. For more details on the metrics and the experiments, see Appendix A.4.
### Evaluation
In Table 1, we showcase the performance of two baseline models on all SALSA-CLRS algorithms. Note, as we increase the graph size, all models show a clear decline in performance. Furthermore, we observe significant performance disparities among different graph types. Remarkably, different algorithms show varying degrees of sensitivity to different graph types. For example, BFS shows stability when applied to larger ER graphs, but its performance drops on large Delaunay graphs. DFS shows the opposite behaviour. Similarly, the architectures show sensitivity to algorithms. For example, RecGNN shows the best extrapolation performance on BFS, while PGN is clearly the best on MIS. In general, the PGN model is often the best performer, in particular for DFS and Eccentricity, and for MIS we even see a very strong performance up to the largest graph sizes. It is worth mentioning that, consistent with previous findings [7], the incorporation of hints does not lead to performance improvements across the board (see Tables 2 and 3). More details can be found in Appendix A.4. It is important to emphasize the pivotal role of metrics selection. An example: Despite seemingly excellent Node Accuracy scores of both baselines on BFS, the graph accuracy shows a completely different picture (see Table 4). For larger graph instances, almost all graphs are predicted incorrectly, despite achieving a near-perfect Node Accuracy. These findings underscore SALSA-CLRS's effectiveness in comprehensively evaluating architectural vulnerabilities in terms of both scalability and graph diversity. |
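The gap between the two metrics is easy to reproduce. The helper below is a minimal sketch, assuming PyG-style batching in which `batch` maps every node to its graph; a graph counts as solved only if all of its nodes are predicted correctly.

```python
import torch

def node_and_graph_accuracy(pred, target, batch):
    correct = pred == target                           # [num_nodes] boolean
    node_acc = correct.float().mean().item()
    num_graphs = int(batch.max().item()) + 1
    errors = torch.zeros(num_graphs, dtype=torch.long)
    errors.index_add_(0, batch, (~correct).long())     # number of wrong nodes per graph
    graph_acc = (errors == 0).float().mean().item()
    return node_acc, graph_acc

pred   = torch.tensor([1, 0, 1, 1, 0, 0])
target = torch.tensor([1, 0, 0, 1, 0, 0])
batch  = torch.tensor([0, 0, 0, 1, 1, 1])              # two graphs of three nodes each
print(node_and_graph_accuracy(pred, target, batch))    # high node accuracy, only half the graphs solved
```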
2305.00582 | On APN functions and their derivatives | We determine a connection between the weight of a Boolean function and the
total weight of its first-order derivatives. The relationship established is
used to study some cryptographic properties of Boolean functions. We establish
a characterization of APN permutations in terms of the weight of the
first-order derivatives of their components. We also characterize APN functions
by the total weight of the second-order derivatives of their components. The
total weight of the first-order and second-order derivatives for functions such
as permutations, bent, partially-bent, quadratic, plateaued and balanced
functions is determined. | Augustine Musukwa | 2023-04-30T21:22:36Z | http://arxiv.org/abs/2305.00582v1 | # On APN functions and their derivatives
###### Abstract
We determine a connection between the weight of a Boolean function and the total weight of its first-order derivatives. The relationship established is used to study some cryptographic properties of Boolean functions. We establish a characterization of APN permutations in terms of the weight of the first-order derivatives of their components. We also characterize APN functions by the total weight of the second-order derivatives of their components. The total weight of the first-order and second-order derivatives for functions such as permutations, bent, partially-bent, quadratic, plateaued and balanced functions is determined.
Mzuzu University, P/Bag 201, Mzuzu 2, Malawi
augustinemusukwa@gmail.com
**Keywords:** Boolean functions; first-order derivatives; second-order derivatives; APN functions; APN permutations
**MSC 2010:** 06E30, 94A60, 14G50
## 1 Introduction
The nonlinearity and differential uniformity of a vectorial Boolean function from \(\mathbb{F}_{2}^{n}\) to itself are properties which are used to measure the resistance of a function towards linear and differential attacks, respectively. It is well-known that APN and AB functions provide optimal resistance against the said attacks. Many studies are conducted to learn more about the properties of these functions. There are several approaches to the study and characterization of these functions (for example see [1, 5, 7, 8]). Today, many APN functions are known as several studies have been conducted on their constructions. However, not so much is known about APN permutations in even dimensions. It is, therefore, important to study more about the properties and characterizations of APN permutations. One of the long-standing open problems is to find an APN permutation in dimension 8.
In this paper, we study about characterizations of APN functions (permutations). Can the total weight of the first-order and second-order derivatives of (vectorial) Boolean functions give us any information about the original functions? We first determine a relationship
between the weight of a Boolean function and the total weight of its first-order derivatives. The relationship established plays an important role in obtaining most of the results in this paper. We establish characterizations of APN functions (permutations) in terms of the weight of the first-order and second-order derivatives of their components. The total weight of the first-order and second-order derivatives of the functions such as permutations, plateaued, bent, partially-bent, quadratic and balanced functions is determined.
The paper is organised as follows. In Section 2, we report some definitions and known results. In Section 3, we study the weight of the first-order derivatives of Boolean functions and the components of vectorial Boolean functions. We establish characterization of APN permutation in terms of the weight of the first-order derivatives their components. In Section 4, we use the total weight of second-order derivatives of Boolean functions and the components of vectorial Boolean functions to characterize bent and APN functions.
## 2 Preliminaries
In this section, we report some definitions and well-known results. The reader is referred to [2, 4, 5, 9, 10, 11, 12] for more details.
In this paper, the field of two elements, \(0\) and \(1\), is denoted by \(\mathbb{F}_{2}\). A vector in the vector space \(\mathbb{F}_{2}^{n}\) is denoted by small letters such as \(v\). We use ordinary addition \(+\) instead of XOR \(\oplus\). Given any set \(A\), we denote its size by \(|A|\).
A _Boolean function_ is any function \(f\) from \(\mathbb{F}_{2}^{n}\) to \(\mathbb{F}_{2}\) and a _vectorial Boolean function_ is any function \(F\) from \(\mathbb{F}_{2}^{n}\) to \(\mathbb{F}_{2}^{m}\) for some positive integers \(n\) and \(m\). In this paper we study vectorial Boolean functions from \(\mathbb{F}_{2}^{n}\) to \(\mathbb{F}_{2}^{n}\). A Boolean function in algebraic normal form, which is the \(n\)-variable polynomial representation over \(\mathbb{F}_{2}\), is given by:
\[f(x_{1},...,x_{n})=\sum_{I\subseteq\mathcal{P}}a_{I}\left(\prod_{i\in I}x_{i} \right),\]
where \(\mathcal{P}=\{1,...,n\}\) and \(a_{I}\in\mathbb{F}_{2}\). The _algebraic degree_ (or simply _degree_) of \(f\), denoted by \(\deg(f)\), is \(\max_{I\subseteq\mathcal{P}}\{|I|\mid a_{I}\neq 0\}\).
A Boolean function \(f\) is called _affine_ if \(\deg(f)\leq 1\) and _quadratic_ if \(\deg(f)=2\). We denote the set of all affine functions by \(A_{n}\). The _weight_ of a Boolean function \(f\) is defined as \(\operatorname{wt}(f)=|\{x\in\mathbb{F}_{2}^{n}\mid f(x)=1\}|\) and \(f\) is _balanced_ if \(\operatorname{wt}(f)=2^{n-1}\). The _distance_ between two Boolean functions \(f\) and \(g\) is defined as \(d(f,g)=\operatorname{wt}(f+g)\) and the _nonlinearity_ of \(f\) is defined as \(\mathcal{N}(f)=\min_{\alpha\in A_{n}}d(f,\alpha)\).
A vectorial Boolean function \(F\) from \(\mathbb{F}_{2}^{n}\) to itself is denoted by \(F=(f_{1},...,f_{n})\) where \(f_{1},...,f_{n}\) are Boolean functions called _coordinate functions_. The functions \(\lambda\cdot F\), with \(\lambda\in\mathbb{F}_{2}^{n}\setminus\{0\}\) and "\(\cdot\)" denoting dot product, are called _components_ of \(F\) and they are
denoted by \(F_{\lambda}\). A vectorial Boolean function \(F\) is a _permutation_ if and only if all its components are balanced. The degree of \(F\) is given by \(\deg(F)=\max_{\lambda\in\mathbb{F}_{2}^{n}\setminus\{0\}}\deg(F_{\lambda})\).
The _Walsh transform_ of a Boolean function \(f\) is defined as the function \(W_{f}\) from \(\mathbb{F}^{n}\) to \(\mathbb{Z}\) given by:
\[W_{f}(a)=\sum_{x\in\mathbb{F}_{2}^{n}}(-1)^{f(x)+a\cdot x}\,,\]
for all \(a\in\mathbb{F}_{2}^{n}\). A set of \(W_{f}(a)\) for all \(a\in\mathbb{F}_{2}^{n}\) is called _Walsh spectrum_. We define \(\mathcal{F}(f)\) as
\[\mathcal{F}(f)=W_{f}(0)=\sum_{x\in\mathbb{F}_{2}^{n}}(-1)^{f(x)}=2^{n}-2\text{ wt}(f).\]
Notice that \(f\) is balanced if and only if \(\mathcal{F}(f)=0\).
The nonlinearity of a function \(f\) can also be defined as follows:
\[\mathcal{N}(f)=2^{n-1}-\frac{1}{2}\max_{a\in\mathbb{F}_{2}^{n}}|W_{f}(a)|. \tag{1}\]
The nonlinearity of a vectorial Boolean function \(F\) is defined as follows:
\[\mathcal{N}(F)=\min_{\lambda\in\mathbb{F}_{2}^{n}\setminus\{0\}}\mathcal{N}( F_{\lambda}).\]
For every Boolean function \(f\), with \(n\) even, \(\mathcal{N}(f)\leq 2^{n-1}-2^{\frac{n}{2}-1}\) and \(f\) is said to be _bent_ if and only if equality holds. The lowest possible value for \(\max_{a\in\mathbb{F}_{2}^{n}}|W_{f}(a)|\) is \(2^{\frac{n}{2}}\) and this bound is achieved only for bent functions.
For \(n\) odd, a Boolean function \(f\) is called _semi-bent_ if \(\mathcal{N}(f)=2^{n-1}-2^{\frac{n-1}{2}}\). Semi-bent functions can also be defined in even dimension. For \(n\) even, a function \(f\) is semi-bent if \(\mathcal{N}(f)=2^{n-1}-2^{\frac{n}{2}}\). A vectorial Boolean function \(F\) in odd dimension is called _almost-bent (AB)_ if all its components are semi-bent.
A Boolean function \(f\) on \(n\) variables is called _plateaued_ if its Walsh spectrum is either \(\{\pm 2^{n/2}\}\) (this happens only when \(n\) is even and in this case \(f\) is bent) or \(\{0,\pm\mu\}\) for some integer \(\mu\) (see [13]). In [9, 13], the _order_ of a plateaued function \(f\) on \(n\) variables is defined as the even integer \(r\), \(0\leq r\leq n\), such that all non-zero values of \(W_{f}^{2}(a)\) are \(2^{2n-r}\).
We define the _first-order derivative_ of \(f\) at \(a\in\mathbb{F}_{2}^{n}\) by \(D_{a}f(x)=f(x+a)+f(x)\) and the _second-order derivative_ of \(f\) at \(a\) and \(b\) is defined as:
\[D_{b}D_{a}f(x)=f(x)+f(x+a)+f(x+b)+f(x+a+b).\]
An element \(a\in\mathbb{F}_{2}^{n}\) is called a _linear structure_ of \(f\) if \(D_{a}f\) is constant. The set of all linear structures of \(f\) is denoted by \(V(f)\) and we call it the _linear space_ of \(f\). It is well-known that \(V(f)\) is a vector space.
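These derivatives are straightforward to compute when \(f\) is given by its truth table. The short Python sketch below does so for an arbitrarily chosen quadratic function on three variables; the function and the points \(a,b\) are only illustrative.

```python
def derivative(f, a):
    # first-order derivative D_a f of a Boolean function given as a truth table of length 2^n
    return [f[x] ^ f[x ^ a] for x in range(len(f))]

def weight(f):
    return sum(f)

n = 3
# example: f(x1, x2, x3) = x1*x2 + x3, with x encoded as a bit mask (x1 = most significant bit)
f = [((x >> 2) & 1) * ((x >> 1) & 1) ^ (x & 1) for x in range(1 << n)]

a, b = 0b011, 0b101
D_a_f = derivative(f, a)
D_b_D_a_f = derivative(D_a_f, b)   # second-order derivative D_b D_a f
print(weight(D_a_f), weight(D_b_D_a_f))
```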
**Theorem 1**.: _A Boolean function \(f\) on \(n\) variables is bent if and only if \(D_{a}f\) is balanced for any nonzero \(a\in\mathbb{F}_{2}^{n}\)._
The two Boolean functions \(f,g:\mathbb{F}_{2}^{n}\rightarrow\mathbb{F}_{2}\) are said to be _affine equivalent_ if there exists an affinity \(\varphi:\mathbb{F}_{2}^{n}\rightarrow\mathbb{F}_{2}^{n}\) such that \(f=g\circ\varphi\). This relation is denoted by \(\sim_{A}\) and written as \(f\sim_{A}g\).
**Theorem 2**.: _Let \(f\) be a quadratic Boolean function on \(n\) variables. Then_
* \(f\sim_{A}x_{1}x_{2}+\cdots+x_{2k-1}x_{2k}+x_{2k+1}\)_, with_ \(k\leq\lfloor\frac{n-1}{2}\rfloor\)_, if_ \(f\) _is balanced,_
* \(f\sim_{A}x_{1}x_{2}+\cdots+x_{2k-1}x_{2k}+c\)_, with_ \(k\leq\lfloor\frac{n}{2}\rfloor\) _and_ \(c\in\mathbb{F}_{2}\)_, if_ \(f\) _is unbalanced._
**Definition 3**.: _For a vectorial Boolean function \(F\) and \(a,b\in\mathbb{F}_{2}^{n}\), define_
\[\delta_{F}(a,b)=|\{x\in\mathbb{F}_{2}^{n}\mid D_{a}F(x)=b\}|.\]
_The differential uniformity of \(F\) is defined as:_
\[\delta(F)=\max_{a\neq 0,b\in\mathbb{F}_{2}^{n}}\delta_{F}(a,b)\]
_and always satisfies \(\delta(F)\geq 2\). A function with \(\delta(F)=2\) is called Almost Perfect Nonlinear (APN)._
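For illustration, \(\delta(F)\) can be computed directly from a lookup table of \(F\). The sketch below does this for the Gold function \(x^{3}\) over \(\mathbb{F}_{2^{3}}\); the irreducible polynomial \(x^{3}+x+1\) is an assumed choice. The script prints \(2\), confirming that this function is APN.

```python
def gf_mult(a, b, poly=0b1011, n=3):
    # multiplication in GF(2^3) modulo the (assumed) irreducible polynomial x^3 + x + 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << n):
            a ^= poly
    return r

def differential_uniformity(F):
    # delta(F) for a function given as a lookup table over F_2^n
    size = len(F)
    best = 0
    for a in range(1, size):
        counts = [0] * size
        for x in range(size):
            counts[F[x ^ a] ^ F[x]] += 1
        best = max(best, max(counts))
    return best

F = [gf_mult(gf_mult(x, x), x) for x in range(8)]  # the Gold function x^3 over GF(2^3)
print(differential_uniformity(F))                  # prints 2, i.e. this function is APN
```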
There is another representation of vectorial Boolean functions which is known as _univariate polynomial representation_. The finite field \(\mathbb{F}_{2^{n}}\) has \(2^{n}\) elements and we write \(\mathbb{F}_{2^{n}}^{*}\) to denote a set of all nonzero elements of \(\mathbb{F}_{2^{n}}\). The vector space \(\mathbb{F}_{2}^{n}\) can be endowed with the structure of the finite field \(\mathbb{F}_{2^{n}}\) (see [4]). Any function \(F\) from \(\mathbb{F}_{2^{n}}\) into \(\mathbb{F}_{2^{n}}\) admits a unique univariate polynomial representation over \(\mathbb{F}_{2^{n}}\) given as:
\[F(x)=\sum_{i=0}^{2^{n}-1}\delta_{i}x^{i}, \tag{2}\]
where \(\delta_{i}\in\mathbb{F}_{2^{n}}\) and the degree of \(F\) is at most \(2^{n}-1\). Given the binary expansion \(i=\sum_{s=0}^{n-1}i_{s}2^{s}\), we define \(\mathrm{w}_{2}(i)=\sum_{s=0}^{n-1}i_{s}\). Therefore, we say that \(F\) is a vectorial Boolean function whose algebraic degree is given by \(\max\{\mathrm{w}_{2}(i)\mid 0\leq i\leq 2^{n}-1,\delta_{i}\neq 0\}\) (see [4]).
The _(absolute) trace function_\(Tr:\mathbb{F}_{2^{n}}\rightarrow\mathbb{F}_{2}\) is defined as:
\[Tr(z)=\sum_{i=0}^{n-1}z^{2^{i}},\]
where \(z\in\mathbb{F}_{2^{n}}\). For \(\nu\in\mathbb{F}_{2^{n}}\), a component \(F_{\nu}\) of \(F\) is given by \(F_{\nu}(x)=Tr(\nu F)\).
**Theorem 4**.: _Let \(f\) be any unbalanced quadratic Boolean function on \(n\) variables. Then_
\[\operatorname{wt}(f)=2^{n-1}\pm 2^{\frac{n+k}{2}-1},\]
_where \(k=\dim V(f)\)._
**Definition 5**.: _A Boolean function \(f\) is partially-bent if there exists a linear subspace \(E\) of \(\mathbb{F}_{2}^{n}\) such that the restriction of \(f\) to \(E\) is affine and the restriction of \(f\) to any complementary subspace \(E^{\prime}\) of \(E\) (where \(E\oplus E^{\prime}=\mathbb{F}_{2}^{n}\)) is bent._
**Remark 6**.: _In [6], it is proved that the weight for unbalanced partially-bent function is given by \(2^{n-1}\pm 2^{n-1-h}\) where \(\dim E=n-2h\) (\(E\) as defined in Definition 5) and \(h\leq n/2\)._
**Theorem 7**.: _Let \(F\) be a function from \(\mathbb{F}_{2^{n}}\) into itself. Then_
\[\sum_{\lambda\neq 0,a\in\mathbb{F}_{2^{n}}}\mathcal{F}^{2}(D_{a}F_{\lambda}) \geq 2^{2n+1}(2^{n}-1).\]
_Moreover, equality holds if and only if \(F\) is APN._
**Theorem 8**.: _Let \(F\) be a function from \(\mathbb{F}_{2^{n}}\) into \(\mathbb{F}_{2^{n}}\). Then, for any nonzero \(a\in\mathbb{F}_{2^{n}}\), we have_
\[\sum_{\lambda\in\mathbb{F}_{2^{n}}}\mathcal{F}^{2}(D_{a}F_{\lambda})\geq 2^{2n +1}.\]
_Moreover, \(F\) is APN if and only if equality holds for all nonzero \(a\in\mathbb{F}_{2^{n}}\)._
## 3 On weight of first-order derivatives of APN permutations
In this section, we consider the total weight of first-order derivatives of Boolean functions and components of vectorial Boolean functions. A characterization of APN permutations in terms of the weight of first-order derivatives of their components is established.
For a given Boolean function \(f\), we know from [5] that
\[\mathcal{F}^{2}(f)=\sum_{a\in\mathbb{F}_{2}^{n}}\mathcal{F}(D_{a}f). \tag{3}\]
We use the relation (3) to show the connection between the weight of a Boolean function \(f\) and the total weight of its first-order derivatives.
**Proposition 9**.: _For a Boolean function \(f\) on \(n\) variables, we have_
\[\operatorname{wt}(f)=2^{n-1}\pm\frac{1}{2}\sqrt{2^{2n}-2\sum_{a\in\mathbb{F} _{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{a}f)}.\]
Proof.: From the relation \(\mathcal{F}^{2}(f)=\sum_{a\in\mathbb{F}_{2}^{n}}\mathcal{F}(D_{a}f)\), we obtain the following:
\[|\mathcal{F}(f)|=\sqrt{\sum_{a\in\mathbb{F}_{2}^{n}}\mathcal{F}(D_{a}f)}. \tag{4}\]
Since \(\mathcal{F}(f)=2^{n}-2\mathrm{wt}(f)\), then (4) becomes
\[|2^{n}-2\mathrm{wt}(f)|=\sqrt{\sum_{a\in\mathbb{F}_{2}^{n}}(2^{n}-2\mathrm{wt} (D_{a}f))}. \tag{5}\]
If \(\mathrm{wt}(f)\leq 2^{n-1}\), then \(|2^{n}-2\mathrm{wt}(f)|=2^{n}-2\mathrm{wt}(f)\). So (5) becomes
\[\mathrm{wt}(f) =2^{n-1}-\frac{1}{2}\sqrt{\sum_{a\in\mathbb{F}_{2}^{n}}(2^{n}-2 \mathrm{wt}(D_{a}f))}\] \[=2^{n-1}-\frac{1}{2}\sqrt{2^{2n}-2\sum_{a\in\mathbb{F}_{2}^{n} \setminus\{0\}}\mathrm{wt}(D_{a}f)}.\]
If \(\mathrm{wt}(f)>2^{n-1}\), then \(|2^{n}-2\mathrm{wt}(f)|=2\mathrm{wt}(f)-2^{n}\). So (5) becomes
\[\mathrm{wt}(f)=2^{n-1}+\frac{1}{2}\sqrt{2^{2n}-2\sum_{a\in\mathbb{F}_{2}^{n} \setminus\{0\}}\mathrm{wt}(D_{a}f)}.\qed\]
By using Proposition 9, we can write the total weight of first-order derivatives of a Boolean function \(f\) in terms of weight of \(f\) as in the following.
**Corollary 10**.: _Let \(f\) be a Boolean function on \(n\) variables. Then we have_
\[\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}\mathrm{wt}(D_{a}f)=2\mathrm{wt}(f )[2^{n}-\mathrm{wt}(f)].\]
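This identity is easy to verify numerically; a minimal check for a random Boolean function on four variables is sketched below.

```python
import random

n = 4
N = 1 << n
f = [random.randint(0, 1) for _ in range(N)]   # a random Boolean function on n = 4 variables

wt_f = sum(f)
total = sum(f[x] ^ f[x ^ a] for a in range(1, N) for x in range(N))
assert total == 2 * wt_f * (N - wt_f)          # Corollary 10
print(wt_f, total)
```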
**Remark 11**.: _Observe that we can also use the relation 3 to write the total weight of the first-order derivatives of a Boolean function \(f\) in terms of \(\mathcal{F}^{2}(f)\) as follows:_
\[\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}\mathrm{wt}(D_{a}f)=2^{2n-1}-\frac {1}{2}\mathcal{F}^{2}(f).\]
Since \(\mathcal{F}(f)=0\) if and only if \(f\) is balanced, then by Remark 11 and by the fact that \(0\leq\mathcal{F}^{2}(f)\leq 2^{2n}\) we deduce the following.
**Corollary 12**.: _For any Boolean function \(f\) on \(n\) variables, we have_
\[\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{a}f)\leq 2^{2n-1}.\]
_Moreover, equality holds if and only if \(f\) is balanced._
**Proposition 13**.: _For any Boolean function \(f\) on \(n\) variables, we have_
\[\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{a}f)=2^{2n-1} -2\ell^{2}\]
_where \(0\leq\ell\leq 2^{n-1}\). Moreover, \(\ell=0\) if and only if \(f\) is balanced and \(\ell=2^{n-1}\) if and only if \(f\) is constant._
Proof.: Since \(\operatorname{wt}(f)\) is always a positive integer then the quantity in the square root appearing in the relation given in Proposition 9 must be
\[2^{2n}-2\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{a}f)= m^{2}\]
for some positive integer \(m\). This implies that
\[\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{a}f)=2^{2n-1 }-2\ell^{2}\]
where \(\ell=m/2\). From Corollary 12, it is clear that we must have \(0\leq\ell\leq 2^{n-1}\).
Note that, from Corollary 10, \(\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{a}f)=0\) if and only if \(\operatorname{wt}(f)=0,2^{n}\) if and only if \(f\) is constant. So it follows that \(\ell=2^{n-1}\) if and only if \(f\) is a constant. We also conclude, by Corollary 12, that \(\ell=0\) if and only if \(f\) is balanced.
**Corollary 14**.: _Let \(f\) be any quadratic Boolean function on \(n\) variables. Then_
\[\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{a}f)=\begin{cases} 2^{2n-1}&\text{ if $f$ is balanced}\\ 2^{2n-1}-2^{n+k-1}&\text{ otherwise},\end{cases}\]
_where \(k=\dim V(f)\)._
Proof.: If \(f\) is balanced, then by Corollary 12 we have \(\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{a}f)=2^{2n-1}\). Suppose that \(f\) is unbalanced. Then, by Theorem 4, we know that \(\operatorname{wt}(f)=2^{n-1}\pm 2^{\frac{n+k}{2}-1}\), where
\(k=\dim V(f)\). Taking \(\operatorname{wt}(f)=2^{n-1}-2^{\frac{n+k}{2}-1}\) and applying Corollary 10, we have
\[\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{a}f) =2\left(2^{n-1}-2^{\frac{n+k}{2}-1}\right)\left(2^{n}-2^{n-1}+2^{\frac{n+k}{2}-1}\right)\] \[=2\left(2^{n-1}-2^{\frac{n+k}{2}-1}\right)\left(2^{n-1}+2^{\frac{n+k}{2}-1}\right)\] \[=2\left(2^{2n-2}-2^{n+k-2}\right)=2^{2n-1}-2^{n+k-1}.\]
Using \(\operatorname{wt}(f)=2^{n-1}+2^{\frac{n+k}{2}-1}\) yields the same result.
Since \(D_{a}f\) is balanced for all nonzero \(a\in\mathbb{F}_{2}^{n}\) if and only if \(f\) is bent, then the following result is immediate.
**Corollary 15**.: _Let \(f\) be a bent Boolean function on \(n\) variables, with \(n\) even. Then_
\[\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{a}f)=2^{n-1}( 2^{n}-1)=2^{2n-1}-2^{n-1}.\]
**Theorem 16**.: _For any partially-bent Boolean function \(f\) on \(n\) variables, we have_
\[\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{a}f)=\begin{cases} 2^{2n-1}&\text{if $f$ is balanced}\\ 2^{2n-1}-2^{2n-2h-1}&\text{otherwise},\end{cases}\]
_where \(h\leq n/2\)._
Proof.: If \(f\) is balanced, then by Corollary 12 we have \(\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{a}f)=2^{2n-1}\). Now suppose that \(f\) is unbalanced. From Remark 6, we know that \(\operatorname{wt}(f)=2^{n-1}\pm 2^{n-1-h}\) where \(\dim E=n-2h\). Taking \(\operatorname{wt}(f)=2^{n-1}+2^{n-1-h}\) and applying Corollary 10, we have
\[\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{a}f) =2(2^{n-1}+2^{n-1-h})(2^{n}-2^{n-1}-2^{n-1-h})\] \[=2(2^{n-1}+2^{n-1-h})(2^{n-1}-2^{n-1-h})\] \[=2(2^{2n-2}-2^{2n-2-2h})=2^{2n-1}-2^{2n-2h-1}.\]
Using \(\operatorname{wt}(f)=2^{n-1}-2^{n-1-h}\) yields the same result.
**Proposition 17**.: _Let \(f\) be a plateaued function of order \(r\) on \(n\) variables. Then we have_
\[\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{a}f)=\begin{cases} 2^{2n-1}&\text{if $f$ is balanced}\\ 2^{2n-1}-2^{2n-r-1}&\text{otherwise.}\end{cases}\]
Proof.: If \(f\) is balanced, we have \(\mathcal{F}^{2}(f)=0\). If \(f\) is not balanced, then \(\mathcal{F}^{2}(f)\) is a nonzero positive integer, and we deduce from the definition of a plateaued function of order \(r\) that \(\mathcal{F}^{2}(f)=2^{2n-r}\). The result follows by applying Remark 11.
**Theorem 18**.: _Let \(F\) be a vectorial Boolean function from \(\mathbb{F}_{2^{n}}\) to itself. Then_
\[\sum_{\lambda,a\in\mathbb{F}_{2^{n}}^{*}}\operatorname{wt}(D_{a}F_{\lambda}) \leq 2^{2n-1}(2^{n}-1).\]
_Moreover, equality holds if and only if \(F\) is a permutation._
Proof.: By Corollary 12, we can deduce that for any function \(F\) from \(\mathbb{F}_{2^{n}}\) to itself, we have
\[\sum_{\lambda,a\in\mathbb{F}_{2^{n}}^{*}}\operatorname{wt}(D_{a}F_{\lambda}) \leq 2^{2n-1}(2^{n}-1).\]
Since all the components \(F_{\lambda}\), \(\lambda\in\mathbb{F}_{2^{n}}^{*}\), of a permutation \(F\) are balanced, then it follows that the equality holds if and only if \(F\) is a permutation.
**Remark 19**.: _Observe that if \(F\) is an AB function or an APN permutation from \(\mathbb{F}_{2^{n}}\) to itself, then we can use Theorem 18 to conclude that the total weight of the first-order derivatives of its components must be equal to \(2^{2n-1}(2^{n}-1)\). A computer check shows that for Dillon's APN permutation in dimension \(6\) this total is indeed \(2^{11}(2^{6}-1)=129024\)._
**Remark 20**.: _Notice that, from Theorem 18, we can deduce that for any non-bijective APN function \(F\) from \(\mathbb{F}_{2^{n}}\) to itself we have_
\[\sum_{\lambda,a\in\mathbb{F}_{2^{n}}^{*}}\operatorname{wt}(D_{a}F_{\lambda}) <2^{2n-1}(2^{n}-1).\]
**Proposition 21**.: _Let \(Q\) be a quadratic APN function from \(\mathbb{F}_{2^{n}}\) to itself, with bent and unbalanced semi-bent components only. Then we have_
\[\sum_{\lambda,a\in\mathbb{F}_{2^{n}}^{*}}\operatorname{wt}(D_{a}Q_{\lambda})=2^{n-1}(2^{n}-1)(2^{n}-2).\]
Proof.: From Theorem 1, \(D_{a}f\) is balanced for all \(a\in\mathbb{F}_{2^{n}}^{*}\) if and only if \(f\) is bent. We know that for any unbalanced quadratic function \(f\), \(D_{a}f\) is balanced for all \(a\in\mathbb{F}^{n}\setminus V(f)\) and \(D_{\alpha}f=0\) for all \(\alpha\in V(f)\). For semi-bent we know that \(\dim V(f)=2\), i.e., \(|V(f)|=4\). Thus, for each bent, the total weight of its first-order derivatives must be \(2^{n-1}(2^{n}-1)\) (see Corollary 15) and for each semi-bent, the total weight of its first-order derivatives must be \(2^{n-1}(2^{n}-4)\). For any quadratic APN function with bent and semi-bent components, there are exactly \(2/3(2^{n}-1)\) bent components and \((2^{n}-1)/3\) semi-bent components (see [11]). So the total weight of all first-order derivatives of \(Q\) must be
\[\sum_{\lambda,a\in\mathbb{F}_{2^{n}}^{*}}\operatorname{wt}(D_{a} Q_{\lambda}) =\frac{2}{3}(2^{n}-1)2^{n-1}(2^{n}-1)+\frac{1}{3}(2^{n}-1)2^{n-1} (2^{n}-4)\] \[=2^{n-1}(2^{n}-1)\left(\frac{2}{3}(2^{n}-1)+\frac{1}{3}(2^{n}-4)\right)\] \[=2^{n-1}(2^{n}-1)\left(\frac{2(2^{n}-1)+(2^{n}-1)-3}{3}\right)\] \[=2^{n-1}(2^{n}-1)\left(\frac{3(2^{n}-1)-3}{3}\right)\] \[=2^{n-1}(2^{n}-1)(2^{n}-2).\qed\]
**Remark 22**.: _Computer check shows that the total weight of first-order derivatives of components of the Gold APN power functions in dimensions \(4\), \(6\) and \(8\) satisfy Proposition 21. However, the non-quadratic APN power functions such as Kasami functions of dimension \(6\) and \(8\) also have the total weight that satisfy Proposition 21._
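A sketch of such a computer check for the Gold function \(x^{3}\) on \(\mathbb{F}_{2^{4}}\) is given below. Components are taken with respect to the standard dot product (which yields the same multiset of component functions as the trace form), and the irreducible polynomial \(x^{4}+x+1\) is an assumed choice. The printed total should equal \(2^{n-1}(2^{n}-1)(2^{n}-2)=1680\), as predicted by Proposition 21.

```python
def gf_mult(a, b, poly=0b10011, n=4):
    # multiplication in GF(2^4) modulo the (assumed) irreducible polynomial x^4 + x + 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << n):
            a ^= poly
    return r

def dot(u, v):
    return bin(u & v).count("1") & 1               # dot product on F_2^n

n, N = 4, 16
F = [gf_mult(gf_mult(x, x), x) for x in range(N)]  # the Gold APN function x^3 on F_{2^4}

total = sum(dot(lam, F[x ^ a] ^ F[x])              # accumulates wt(D_a F_lambda) over all lambda, a
            for lam in range(1, N)
            for a in range(1, N)
            for x in range(N))
print(total, 2 ** (n - 1) * (2 ** n - 1) * (2 ** n - 2))
```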
**Theorem 23**.: _Let \(F\) be a permutation from \(\mathbb{F}_{2^{n}}\) to itself. Then_
\[\sum_{\lambda,a\in\mathbb{F}_{2^{n}}^{*}}\left[\operatorname{wt}(D_{a}F_{ \lambda})\right]^{2}\geq 2^{2n-1}(2^{n}-1)(2^{n-1}+1).\]
_Moreover, equality holds if and only if \(F\) is an APN permutation._
Proof.: We have the following:
\[\sum_{\lambda\neq 0,a\in\mathbb{F}_{2^{n}}}\mathcal{F}^{2}(D_{a}F_{ \lambda}) =\sum_{\lambda\neq 0,a\in\mathbb{F}_{2^{n}}}\left[2^{n}-2 \operatorname{wt}(D_{a}F_{\lambda})\right]^{2}\] \[=\sum_{\lambda\neq 0,a\in\mathbb{F}_{2^{n}}}\left[2^{2n}-4\cdot 2^{n} \operatorname{wt}(D_{a}F_{\lambda})+4[\operatorname{wt}(D_{a}F_{\lambda})]^{2}\right]\] \[=2^{3n}(2^{n}-1)-4\cdot 2^{n}\sum_{\lambda,a\in\mathbb{F}_{2^{n}}^{*}} \operatorname{wt}(D_{a}F_{\lambda})+4\sum_{\lambda,a\in\mathbb{F}_{2^{n}}^{*}} \left[\operatorname{wt}(D_{a}F_{\lambda})\right]^{2}. \tag{6}\]
Since \(F\) is a permutation then, by Theorem 18, we have
\[\sum_{\lambda,a\in\mathbb{F}_{2^{n}}^{*}}\operatorname{wt}(D_{a}F_{\lambda})=2^{2 n-1}(2^{n}-1).\]
So by Equation (6) and Theorem 7, we have
\[2^{3n}(2^{n}-1)-4\cdot 2^{3n-1}(2^{n}-1)+4\sum_{\lambda,a\in\mathbb{F}_{2^{n}} ^{*}}\left[\operatorname{wt}(D_{a}F_{\lambda})\right]^{2}\geq 2^{2n+1}(2^{n}-1) \tag{7}\]
from which we deduce that
\[\sum_{\lambda,a\in\mathbb{F}_{2^{n}}^{*}}\left[\operatorname{wt}(D_{a}F_{ \lambda})\right]^{2}\geq 2^{2n-1}(2^{n}-1)(2^{n-1}+1). \tag{8}\]
Since, by Theorem 7, we conclude that the equality in the relation (7) holds if and only if \(F\) is an APN permutation, then we deduce from the same that the equality in the relation (8) holds if and only if \(F\) is an APN permutation.
**Remark 24**.: _A computer check shows that the result in Theorem 23 holds for Dillon's APN permutation in dimension \(6\). However, the equality can never hold for non-bijective APN functions._
## 4 On weight of second-order derivatives of APN functions
In this section, we use the total weight of second-order derivatives of (vectorial) Boolean functions to characterize bent and APN functions.
We first exhibit the relationship between the weight of the first-order derivatives and the weight of the second-order derivatives. By Corollary 10, we can deduce that for any \(a\in\mathbb{F}_{2}^{n}\) we have \(\sum_{b\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{b}D_{a}f)=2 \operatorname{wt}(D_{a}f)[2^{n}-\operatorname{wt}(D_{a}f)]=2^{n+1} \operatorname{wt}(D_{a}f)-2[\operatorname{wt}(D_{a}f)]^{2}\) from which we get
\[\sum_{a,b\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{b}D_{a}f)=2 ^{n+1}\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{a}f)-2 \sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}[\operatorname{wt}(D_{a}f)]^{2}. \tag{9}\]
From Corollary 12, we know that \(\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{a}f)=2^{2n-1}\) if and only if \(f\) is balanced. So it implies that the equation (9) becomes
\[\sum_{a,b\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{b}D_{a}f)=2 ^{3n}-2\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}[\operatorname{wt}(D_{a}f)] ^{2} \tag{10}\]
if and only if \(f\) balanced.
By Remark 11, for any \(a\in\mathbb{F}_{2}^{n}\), observe that \(\sum_{b\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{b}D_{a}f)=2^{2n -1}-\frac{1}{2}\mathcal{F}^{2}(D_{a}f)\) from which we obtain
\[\sum_{a,b\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{b}D_{a}f)=2 ^{2n-1}(2^{n}-1)-\frac{1}{2}\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}} \mathcal{F}^{2}(D_{a}f). \tag{11}\]
Next we give an upper bound on the total weight of the second-order derivatives and show that this bound is met only for bent functions.
**Proposition 25**.: _Let \(f\) be a Boolean function on \(n\) variables. Then we have_
\[\sum_{a,b\in\mathbb{F}_{2^{n}}^{*}}\operatorname{wt}(D_{b}D_{a}f)\leq 2^{2n-1} (2^{n}-1).\]
_Moreover, equality holds if and only if \(f\) is bent._
Proof.: Observe that
\[\sum_{a\in\mathbb{F}_{2}^{n}\setminus\{0\}}\mathcal{F}^{2}(D_{a}f)=0\]
if and only if \(D_{a}f\) is balanced for all \(a\in\mathbb{F}_{2}^{n}\setminus\{0\}\) if and only if \(f\) bent. Hence, the result follows immediately from the relation (11).
**Proposition 26**.: _For any partially-bent function \(f\) on \(n\) variables, we have_
\[\sum_{a,b\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{b}D_{a}f)=2 ^{2n-1}(2^{n}-2^{n-2h}),\]
_where \(h\leq n/2\)._
Proof.: Observe that, for all \(\alpha\in E\), we have \(D_{\alpha}f=c\), with \(c\in\mathbb{F}_{2}\) and \(D_{a}f\) is balanced for all \(a\in\mathbb{F}^{n}\setminus E\). Since, by Remark 6, we are given that \(\dim E=n-2h\), then we have \(|\mathbb{F}_{2}^{n}\setminus E|=2^{n}-2^{n-2h}\). Since \(D_{a}f\) is balanced then, by Corollary 12, we can deduce that \(\sum_{b\in\mathbb{F}^{n}\setminus\{0\}}\operatorname{wt}(D_{b}D_{a}f)=2^{2n-1}\) and since \(D_{\alpha}f\) is constant then clearly we have \(\sum_{b\in\mathbb{F}^{n}\setminus\{0\}}\operatorname{wt}(D_{b}D_{\alpha}f)=0\). Therefore, it follows that
\[\sum_{a,b\in\mathbb{F}^{n}\setminus\{0\}}\operatorname{wt}(D_{b} D_{a}f) =\sum_{a\in\mathbb{F}^{n}\setminus E}\sum_{b\in\mathbb{F}^{n} \setminus\{0\}}\operatorname{wt}(D_{b}D_{a}f)+\sum_{\alpha\in E}\sum_{b\in \mathbb{F}^{n}\setminus\{0\}}\operatorname{wt}(D_{b}D_{\alpha}f)\] \[=2^{2n-1}(2^{n}-2^{n-2h})+2^{n-2h}(0)=2^{2n-1}(2^{n}-2^{n-2h}).\qed\]
It appears like the total weight of second-order derivatives can be associated with the nonlinearity of a Boolean function. Observe from Propositions 25 and 26 that all bent functions have the same total weight of their second-order derivatives and also all partially-bent functions with the same nonlinearity have the same total weight of their second-order derivatives.
Next we show that the total weight of the second-order derivatives can be expressed in terms of the Walsh transforms of a Boolean function \(f\).
**Lemma 27**.: _Let \(f\) be a Boolean function on \(n\) variables. Then we have_
\[\sum_{a,b\in\mathbb{F}_{2}^{n}\setminus\{0\}}\operatorname{wt}(D_{b}D_{a}f) =2^{3n-1}-\frac{1}{2^{n+1}}\sum_{a\in\mathbb{F}_{2}^{n}}W_{f}^{4}(a).\]
Proof.: If \(f\) be a Boolean function on \(n\) variables, we have
\[\sum_{a\in\mathbb{F}_{2}^{n}}W_{f}^{4}(a) =\sum_{a\in\mathbb{F}_{2}^{n}}\sum_{x,y,z,w\in\mathbb{F}_{2}^{n}} (-1)^{f(x)+f(y)+f(z)+f(w)+a\cdot(x+y+z+w)}\] \[=\sum_{a\in\mathbb{F}_{2}^{n}}\sum_{x,y,z,w\in\mathbb{F}_{2}^{n}} (-1)^{f(x)+f(y)+f(z)+f(w)}(-1)^{a\cdot(x+y+z+w)}\] \[=\sum_{x,y,z,w\in\mathbb{F}_{2}^{n}}(-1)^{f(x)+f(y)+f(z)+f(w)} \sum_{a\in\mathbb{F}_{2}^{n}}(-1)^{a\cdot(x+y+z+w)}\] \[=\sum_{x,y,z,w\in\mathbb{F}_{2}^{n}|x+y+z+w=0}2^{n}(-1)^{f(x)+f(y )+f(z)+f(w)}\] \[=2^{n}\sum_{x,y,z,w\in\mathbb{F}_{2}^{n}|w=x+y+z}(-1)^{f(x)+f(y)+ f(z)+f(w)}\] \[=2^{n}\sum_{x,y,z\in\mathbb{F}_{2}^{n}}(-1)^{f(x)+f(y)+f(z)+f(x+y +z)}\] \[=2^{n}\sum_{x,b,c\in\mathbb{F}_{2}^{n}}(-1)^{f(x)+f(x+b)+f(x+c)+f (x+b+c)}\quad\text{ ( where $y=x+b$ and $z=x+c$)}\] \[=2^{n}\sum_{x,b,c\in\mathbb{F}_{2}^{n}}(-1)^{D_{b}D_{c}f(x)}=2^{n} \sum_{b,c\in\mathbb{F}_{2}^{n}}\mathcal{F}(D_{b}D_{c}f)\] \[=2^{n}\sum_{b,c\in\mathbb{F}_{2}^{n}}(2^{n}-2\operatorname{wt}(D _{b}D_{c}f))\] \[=2^{4n}-2^{n+1}\sum_{b,c\in\mathbb{F}_{2}^{n}}\operatorname{wt}( D_{b}D_{c}f). \tag{12}\]
The result immediately follows from the relation (12).
**Remark 28**.: _Since \(W_{f}(a)=\pm 2^{\frac{n}{2}}\) for all \(a\in\mathbb{F}_{2}^{n}\) if and only if \(f\) is bent, Proposition 25 can also be deduced from Lemma 27. Since the nonlinearity depends on \(\max_{a\in\mathbb{F}_{2}^{n}}|W_{f}(a)|\) (see the relation (1)), we can use Lemma 27 to conclude that the total weight of the second-order derivatives of a function with high nonlinearity is relatively high._
Next we give a characterization of APN functions that is deduced from Theorem 7.
**Theorem 29**.: _Let \(F\) be a function from \(\mathbb{F}_{2^{n}}\) to itself. Then_
\[\sum_{\lambda,b,a\in\mathbb{F}_{2^{n}}^{*}}\mathrm{wt}(D_{b}D_{a}F_{\lambda}) \leq 2^{2n-1}(2^{n}-1)(2^{n}-2).\]
_Moreover, equality holds if and only if \(F\) is APN._
Proof.: By the relation (11), for any \(\lambda\in\mathbb{F}_{2^{n}}^{*}\), we obtain
\[\sum_{a\in\mathbb{F}_{2^{n}}^{*}}\mathcal{F}^{2}(D_{a}F_{\lambda})=2^{2n}(2^{ n}-1)-2\sum_{a,b\in\mathbb{F}_{2^{n}}^{*}}\mathrm{wt}(D_{b}D_{a}F_{\lambda})\]
from which we deduce the following:
\[\begin{split}\sum_{a\in\mathbb{F}_{2^{n}}}\mathcal{F}^{2}(D_{a}F_ {\lambda})&=\sum_{a\in\mathbb{F}_{2^{n}}^{*}}\mathcal{F}^{2}(D_{a }F_{\lambda})+2^{2n}=2^{2n}(2^{n}-1)-2\sum_{a,b\in\mathbb{F}_{2^{n}}^{*}} \mathrm{wt}(D_{b}D_{a}F_{\lambda})+2^{2n}\\ &=2^{3n}-2\sum_{a,b\in\mathbb{F}_{2^{n}}^{*}}\mathrm{wt}(D_{b}D_{ a}F_{\lambda}).\end{split} \tag{13}\]
Using the relation (13), we have
\[\sum_{\lambda\neq 0,a\in\mathbb{F}_{2^{n}}}\mathcal{F}^{2}(D_{a}F_{\lambda})=2 ^{3n}(2^{n}-1)-2\sum_{\lambda,a,b\in\mathbb{F}_{2^{n}}^{*}}\mathrm{wt}(D_{b}D_ {a}F_{\lambda}). \tag{14}\]
By applying Theorem 7, the relation (14) becomes
\[2^{3n}(2^{n}-1)-2\sum_{\lambda,a,b\in\mathbb{F}_{2^{n}}^{*}}\mathrm{wt}(D_{b} D_{a}F_{\lambda})\geq 2^{2n+1}(2^{n}-1), \tag{15}\]
and equality holds if and only if \(F\) is APN. The relation (15) is then reduced to:
\[\sum_{\lambda,a,b\in\mathbb{F}_{2^{n}}^{*}}\mathrm{wt}(D_{b}D_{a}F_{\lambda}) \leq 2^{2n-1}(2^{n}-1)(2^{n}-2)\]
and equality holds if and only if \(F\) is APN.
**Theorem 30**.: _Let \(F\) be a function from \(\mathbb{F}_{2^{n}}\) into \(\mathbb{F}_{2^{n}}\). Then, for any nonzero \(a\in\mathbb{F}_{2^{n}}\), we have_
\[\sum_{\lambda,b\in\mathbb{F}_{2^{n}}^{*}}\operatorname{wt}(D_{b}D_{a}F_{ \lambda})\leq 2^{2n-1}(2^{n}-2).\]
_Moreover, \(F\) is APN if and only if equality holds for all nonzero \(a\) in \(\mathbb{F}_{2^{n}}\)._
Proof.: By applying the relation (3), we have the following:
\[\sum_{\lambda\in\mathbb{F}_{2^{n}}}\mathcal{F}^{2}(D_{a}F_{\lambda}) =\sum_{\lambda,b\in\mathbb{F}_{2^{n}}}\mathcal{F}(D_{b}D_{a}F_{ \lambda})\] \[=\sum_{\lambda,b\in\mathbb{F}_{2^{n}}}[2^{n}-2\operatorname{wt} (D_{b}D_{a}F_{\lambda})]\] \[=2^{3n}-2\sum_{\lambda,b\in\mathbb{F}_{2^{n}}}\operatorname{wt} (D_{b}D_{a}F_{\lambda})\] \[=2^{3n}-2\sum_{\lambda,b\in\mathbb{F}_{2^{n}}^{*}}\operatorname{ wt}(D_{b}D_{a}F_{\lambda}). \tag{16}\]
By Theorem 8 and the equation (16) we obtain the following:
\[\sum_{\lambda,b\in\mathbb{F}_{2^{n}}^{*}}\operatorname{wt}(D_{b}D_{a}F_{ \lambda})\leq 2^{2n-1}(2^{n}-2).\qed\]
**Remark 31**.: _As observed in Remark 28, we can conclude that the nonlinearities of the components of APN functions are relatively high, since by Theorem 29 the total weight of their second-order derivatives is the highest possible._
## 5 Conclusion
In this paper, we have established a characterization of APN permutations in terms of the weight of their first-order derivatives. Using the total weight of second-order derivatives, a characterization for APN functions has been discovered. The total weight of the first-order and second-order derivatives for the functions such as bent, partially-bent, plateaued, balanced and permutations have been determined.
|
2309.16035 | MKRAG: Medical Knowledge Retrieval Augmented Generation for Medical
Question Answering | Large Language Models (LLMs), although powerful in general domains, often
perform poorly on domain-specific tasks such as medical question answering
(QA). In addition, LLMs tend to function as "black-boxes", making it
challenging to modify their behavior. To address the problem, our work employs
a transparent process of retrieval augmented generation (RAG), aiming to
improve LLM responses without the need for fine-tuning or retraining.
Specifically, we propose a comprehensive retrieval strategy to extract medical
facts from an external knowledge base, and then inject them into the LLM's
query prompt. Focusing on medical QA, we evaluate the impact of different
retrieval models and the number of facts on LLM performance using the
MedQA-SMILE dataset. Notably, our retrieval-augmented Vicuna-7B model exhibited
an accuracy improvement from 44.46% to 48.54%. This work underscores the
potential of RAG to enhance LLM performance, offering a practical approach to
mitigate the challenges posed by black-box LLMs. | Yucheng Shi, Shaochen Xu, Tianze Yang, Zhengliang Liu, Tianming Liu, Quanzheng Li, Xiang Li, Ninghao Liu | 2023-09-27T21:26:03Z | http://arxiv.org/abs/2309.16035v3 | # MedEdit: Model Editing for Medical Question Answering
###### Abstract
Large Language Models (LLMs), although powerful in general domains, often perform poorly on domain-specific tasks like medical question answering (QA). Moreover, they tend to function as "black-boxes," making it challenging to modify their behavior. Addressing this, our study delves into model editing utilizing in-context learning, aiming to improve LLM responses without the need for fine-tuning or retraining. Specifically, we propose a comprehensive retrieval strategy to extract medical facts from an external knowledge base, and then we incorporate them into the query prompt for the LLM. Focusing on medical QA using the MedQA-SMILE dataset, we evaluate the impact of different retrieval models and the number of facts provided to the LLM. Notably, our edited Vicuna model exhibited an accuracy improvement from 44.46% to 48.54%. This work underscores the potential of model editing to enhance LLM performance, offering a practical approach to mitigate the challenges of black-box LLMs.
## 1 Introduction
Large Language Models (LLMs) are transformative models in NLP and machine learning, characterized by deep architectures and extensive parameters [21]. They excel in comprehending and generating human-like text due to their training on diverse and extensive text collections [1][1]. Consequently, LLMs are highly versatile and can be applied in various tasks such as question answering, translation, and content generation.
However, despite their extensive pre-training on diverse corpora, large language models may exhibit gaps in their medical knowledge. To assess the accuracy of the medical information encoded within LLMs, we conduct a preliminary experiment as shown in Figure 1. In this experiment, we select Vicuna, a fine-tuned LLaMA language model [1], as our candidate model. We utilized a reconstructed Disease Database [21] containing 44,561 triplets as our medical knowledge base. We randomly selected 1000 facts and prompted the LLM to provide the tail entity of the medical triplet, using the head entity and relation as the question. If the LLM's answer contained the correct tail entity, we considered it an accurate encoding of the medical fact. Conversely, if the answer did not include the expected tail entity, it indicated a failure to encode the relevant medical information. For comparison, we also assessed the LLM's performance on the CounterFact dataset [15], which consists of general domain factual knowledge. The results revealed that Vicuna performed relatively poorly in answering medical knowledge questions but demonstrated much better performance in the general domain knowledge domain. This discrepancy highlights the challenges in medical knowledge understanding for LLMs and underscores the need for further model editing in this domain.
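A minimal sketch of this containment check is shown below; the prompt wording and the `llm` callable are illustrative assumptions, since the exact prompting setup is not spelled out here.

```python
def knowledge_recall(llm, triplets):
    # fraction of (head, relation, tail) facts whose tail entity appears in the model's answer
    hits = 0
    for head, relation, tail in triplets:
        prompt = f"{head} {relation} what? Answer briefly."  # illustrative prompt wording
        answer = llm(prompt)
        if tail.lower() in answer.lower():
            hits += 1
    return hits / len(triplets)

# usage with any callable that maps a prompt string to a completion, e.g. a local Vicuna wrapper
example = [("Atherosclerosis", "is a risk factor for", "cholesterol embolism")]
print(knowledge_recall(lambda p: "It can lead to cholesterol embolism.", example))
```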
To address the above problem, we propose to conduct medical model editing, which refers to the process of modifying LLMs to improve their performance on tasks related to answering medical questions. Through model editing, we attempt to add or revise the medical knowledge within the LLM to enhance its ability to understand, answer, or generate content related to medical queries.
Figure 1: Preliminary Experiment on Encoded Knowledge.
Several significant challenges exist in medical model editing. Firstly, the predominant LLM architectures, such as ChatGPT [14], tend to operate as black-box models, rendering intricate adjustments or refinements challenging. While other models like LLaMA [15] do offer avenues for fine-tuning, they often come with the drawback of being computationally demanding. To address these challenges, we propose to utilize in-context learning, a powerful method that offers dual advantages: compatibility with black-box models and the elimination of the need for extensive retraining.
However, in-context learning also has limitations. The primary challenge lies in the constraint imposed on input token limitation, prompting us to carefully select facts for optimal prompt design. Naive search strategies will fail in the medical context. Firstly, entity matching becomes especially challenging due to the numerous aliases for medical terms. Moreover, a direct embedding search strategy, such as simultaneously embedding both questions and answers and then searching, can lead to misleading retrieval. This is because questions generally contain richer contextual information compared to answers, potentially causing essential answer-related facts to be overlooked in this approach.
In our study, we make meaningful contributions to the medical language model editing. (1) We utilize in-context learning as an innovative mechanism to conduct model modifications. By directly including relevant facts within the model's context, rather than relying on intricate fine-tuning and resource-intensive retraining processes, we achieve substantial improvements in the performance of medical QA tasks. (2) We introduce a tailored fact extraction strategy specifically designed for medical QA. By employing a two-step search, our approach ensures the retrieval of the most crucial and contextually relevant information from a large external knowledge base.
## 2 Related Work
### Medical QA & Large Language Model
LLMs have undergone extensive evaluations to assess their aptitude within specialized medical domains. A notable investigation at the Mayo Clinic delved into the capability of ChatGPT, among other LLMs, to answer intricate radiation oncology physics questions. This study notably found that ChatGPT (GPT-4) surpassed other models, revealing its superior capacity. Additionally, its performance was even more pronounced when prompted first to explain before answering, emphasizing the model's versatility in complex tasks [11]. In another focused effort, researchers conducted a detailed assessment of thirty-two LLMs for their proficiency in interpreting radiology reports, an integral part of radiology NLP. Their findings provided crucial insights into the varying strengths and weaknesses of these models, showcasing their growing importance in the study of medical NLP [13]. Furthermore, a holistic review conveyed many evaluation methods employed for LLMs across diverse domains. This survey emphasized the need for comprehensive evaluation techniques, especially when considering LLM applications in medicine, ethics, and education, among others [10]. Collectively, these researches amplify the emerging significance and vast potential of LLMs in refining medical QA processes and offering specialized medical insights.
### Domain Specific Language Models
The evolution and specialization of LLMs for domain-specific applications, particularly in the medical realm, have been instrumental in strengthening the efficacy of natural language processing (NLP) in these niche areas. A seminal work highlighted the advantages of initiating pre-training from scratch on biomedicine-dedicated text data, challenging the mainstream approach of further pre-training on general-domain pre-trained models. Their comprehensive biomedical NLP benchmark, known as the BLURB, demonstrated that domain-specific pre-training laid a robust foundation for various biomedical NLP tasks, often leading to state-of-the-art results [17]. Delving deeper into medical specialties, RadOnc-GPT emerged as a paradigm of domain-specific expertise. Fine-tuned on an extensive dataset of radiation oncology records, this model, through instruction tuning, excelled in tasks ranging from generating radiotherapy treatment regimens to determining optimal radiation modalities, highlighting the transformative potential of domain-specific LLMs [13]. The introduction of PharmacyGPT extended this trend to clinical pharmacy. By emulating the responsibilities and capabilities of clinical pharmacists, PharmacyGPT unveiled both the potential applications and inherent limitations of LLMs in patient care scenarios and, more broadly, in the healthcare domain [13]. In public health and infodemiology, the AD-AutoGPT appeared as a game-changer, autonomously processing and analyzing intricate health narratives related to Alzheimer's Disease, bridging gaps in public discourse understanding [1]. Radiology-GPT's unveiling further solidified the promise of domain-specific LLMs. Fine-tuned specifically for radiology applications, this model demonstrated versatility in radiological diagnosis and research, emphasizing the potential and necessity of developing LLMs tailored to cater to the unique requirements of different medical specialties [13].
### Retrieval Method
The integration of retrieval methods has developed as an enhancement for various applications. The Retrieval Augmented Generation (RAG) model in Open-Domain Question Answering (ODQA), initially relied on a Wikipedia-based external knowledge base but faced challenges in domain adaptation, especially for specialized areas like healthcare and news. However, with the proposed _RAG-end2end_, this model was further evolved to effectively adapt to domain-specific knowledge bases through joint training of both the retriever and generator components, ensuring that all external knowledge base components were updated during the training phase [21]. Another concept, Retrieval-Augmented Language Modeling (RALM), utilized grounding documents to condition a language model during its generative phase. Interestingly, a simplified approach known as In-Context RALM demonstrated that by prepending grounding documents to the input, even without altering the original LM architecture, there were significant gains in lan
guage modeling performance [14]. Within the medical field, the Almanac model showcased how retrieval-augmented LLMs could transform clinical medicine. By leveraging medical guidelines and treatment recommendations, Almanac was able to improve factuality in clinical scenarios, demonstrating the possibility of LLMs to support clinical decision-making processes [11]. Furthermore, the ChatGPT model by OpenAI, although primarily recognized for its conversational capabilities, incorporated a retrieval feature that enabled it to search the web while generating responses. This development, though still in its beta phase, exhibits the perspective synergy of combining traditional literature search engines with LLMs to amplify information retrieval and synthesis in medical contexts [15]. When integrated with LLMs, these retrieval techniques highlight a forward-moving path toward enhancing domain-specific information retrieval and effectiveness across diverse sectors.
## 3 Method
In this work, we aim to edit the language model utilizing in-context learning, where external medical facts will be included in the prompt to guide reasoning. We define our task in Section 3.1, with our facts retrieval strategy in Section 3.2.
### Model Editing for Medical QA
**Medical Question Answering**
We aim to edit the medical question answering (Medical QA) task in this work. Formally, for one medical question \(q\) and four candidate answers \(a_{1},a_{2},a_{3}\), and \(a_{4}\), a language model \(g_{\theta}(\cdot)\) is expected to pick out the correct answer \(a^{*}\) from four candidates given the question \(q\) and a question prompt template \(t_{q}\). Ideally, the LLM should provide the correct answer as output: \(g_{\theta}(t_{q},q)=a^{*}\). An example question template is shown below:
```
Given question: [q], which of the following answer is true:
[a_1], [a_2], [a_3], [a_4]
You can only output the predicted label in exact words.
```
**Medical Model Editing**
Medical model editing aims to insert or revise the medical knowledge that is missing or wrongly encoded within the language model. In this paper, we leverage in-context learning for model editing. Specifically, we suggest directly incorporating the facts requiring editing into the question prompt. This ensures the language model is informed about these facts while responding to the question. Successful model editing can effectively alter the output of a pre-trained language model, thereby enhancing its performance on medical question-answering datasets.
Formally, we use \(g_{\theta}(\cdot)\) to represent the original pre-trained language model with parameters \(\theta\). The medical fact to be edited is represented as \(f\), comprising a triplet \((h,r,t)\), where \(h\) signifies the head entity, \(r\) denotes the relation, and \(t\) represents the tail entity. In the medical context, the fact could be a medical statement, such as _(Atherosclerosis, is a risk factor for, cholesterol embolism)_.
We use \(g_{\theta^{\prime}}(\cdot)\) to denote the edited model obtained through in-context learning. We can obtain \(g_{\theta^{\prime}}(\cdot)\) by incorporating medical facts \(f\) into the question template. We define the editing question template as \(t_{e}\), which can be designed like:
```
Here are some medical facts: f_1, f_2, ... f_n.
Given question: [q], which of the following answer is true:
[a_1], [a_2], [a_3], [a_4]
You can only output the predicted label in exact words.
```
The sole distinction between the two templates, \(t_{q}\) and \(t_{e}\), lies in the inclusion of additional medical facts \(\{f_{1},f_{2},...,f_{n}\}\). Through the incorporation of these medical facts into the model input, we aim to guide the language model to generate an answer that corresponds to the accurate information contained within the provided medical facts.
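To make the template concrete, the sketch below assembles an editing prompt \(t_{e}\) from a question, its answer candidates, and a set of retrieved facts. The helper name `build_edit_prompt` and the example strings are hypothetical and only mirror the templates shown above.

```python
def build_edit_prompt(question, candidates, facts):
    """Assemble an editing prompt t_e by prepending retrieved medical facts
    to the question template (hypothetical helper mirroring the templates above)."""
    fact_block = ", ".join(facts)
    options = ", ".join(f"[{c}]" for c in candidates)
    return (
        f"Here are some medical facts: {fact_block}.\n"
        f"Given question: [{question}], which of the following answer is true:\n"
        f"{options}\n"
        "You can only output the predicted label in exact words."
    )

# Example with a fact triplet verbalized as a statement
facts = ["Atherosclerosis is a risk factor for cholesterol embolism"]
prompt = build_edit_prompt("example question text", ["a_1", "a_2", "a_3", "a_4"], facts)
```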
In our design, the quality of incorporated medical facts is critical to the efficacy of model editing. Ideally, we would prefer to encompass all facts from the external knowledge base to ensure comprehensiveness. However, due to constraints related to model input limitations and the need for editing efficiency, we must limit the number of facts considered. Our design proposes to include facts with high relevance to both the ground truth answer \(a^{*}\) and question \(q\). Thus, in the following section, we will explore strategies for retrieving these highly relevant facts for each editing instance.
### Medical Facts Retrieval
For a successful in-context model editing, we need to retrieve the most relevant medical facts from the external knowledge base. We formally define our objective as follows: Given a question \(q\) and four answer candidates, \(a_{1},a_{2},a_{3},a_{4}\), our primary goal is to discern and extract a set of facts that possess the highest relevance to both the question and the answer candidates from an external knowledge base \(\mathcal{F}\). Subsequently, our aim is to employ these extracted facts as the prompt to conduct in-context learning editing.
To achieve this goal, we introduce a comprehensive strategy for extracting relevant facts. In the initial preparatory phase, we transform the entire external knowledge base into embeddings. Specifically, for each fact \(f_{i}\) within the knowledge base \(\mathcal{F}\), we employ a pre-trained language model \(g_{z}\) to convert it into an embedding denoted as \(z_{i}^{f}\). This process results in the creation of an embedded knowledge base, represented as \(Z_{\mathcal{F}}\). In our research, we select the Disease Database [21] as our knowledge base \(\mathcal{F}\) and employ various models, including SapBert [15] and Contriever [15], as the embedding model \(g_{z}\).
Following this, in the first extraction step, we employ the same embedding techniques to represent each answer candidate \(a_{i}\) as an embedding \(z_{i}^{a}\). For every candidate answer, we then extract the \(K\) most closely related facts from the external knowledge base \(\mathcal{F}\), establishing an initial set of facts denoted
as \(\mathcal{F}_{I}\). In this work, the semantic relevance is evaluated by the embeddings similarity \(s\), which is defined as below:
\[s(z^{a},z^{f})=(z^{a})^{T}z^{f}. \tag{1}\]
For \(z^{a}\), the top-\(K\) most related fact set can be selected as:
\[\mathcal{F}_{I}=\underset{f\in\mathcal{F}}{\text{Top-}K}\ s(g_{z}(f),z^{a}), \tag{2}\]
where \(\text{Top-}K\) function returns \(K\) facts with the highest similarity value. The initial set comprises medical information pertaining to all answer candidates, which could be integrated into the editing prompt to aid the language model in its reasoning process. However, the retrieved set of facts \(\mathcal{F}_{\mathcal{I}}\) may include redundant information that is unrelated to the question description \(q\). The inclusion of irrelevant information could potentially confuse the language model, resulting in a decrease in answering performance [11].
Thus, we need to remove these redundant facts, which leads to our fact refinement in the second step of the extraction. Here, we first convert the question \(q\) concatenated with four candidates into an embedding \(z^{q}\) using the same model \(g_{z}\). Subsequently, we select the top-\(k\) facts from the initial facts set \(\mathcal{F}_{I}\) that exhibit high similarity to \(z^{q}\), forming the refined facts set \(\mathcal{F}_{R}\), which can be defined as below:
\[\mathcal{F}_{R}=\underset{f\in\mathcal{F}_{I}}{\text{Top-}k}\ s(g_{z}(f),z^{q}). \tag{3}\]
The refined facts set \(\mathcal{F}_{R}\) contains facts that are related to both the question description and the answer candidates. These contextual facts will act as anchors, providing the model with vital background information intended to improve its decision-making capability. Our hypothesis is that these contextual facts will aid the model in better understanding and aligning its responses with the specific medical question at hand, ultimately enhancing accuracy.
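As a rough illustration of this two-stage retrieval, the sketch below assumes a generic `encode` callable standing in for the embedding model \(g_{z}\) (e.g., a SapBERT or Contriever wrapper); it is not the exact implementation used in our experiments.

```python
import numpy as np

def top_k_indices(scores, k):
    # Indices of the k highest similarity scores
    return np.argsort(scores)[::-1][:k]

def retrieve_facts(encode, facts, question, candidates, K=16, k=8):
    """Broad search per answer candidate (Eq. 2), then refinement against the
    question (Eq. 3). `encode` maps a list of strings to an (n, d) array."""
    Z_f = encode(facts)                              # embedded knowledge base Z_F
    initial = set()
    for z_a in encode(candidates):                   # Stage 1: broad search
        scores = Z_f @ z_a                           # dot-product similarity (Eq. 1)
        initial.update(top_k_indices(scores, K).tolist())
    initial = sorted(initial)
    z_q = encode([question + " " + " ".join(candidates)])[0]
    scores = Z_f[initial] @ z_q                      # Stage 2: refined search
    return [facts[initial[i]] for i in top_k_indices(scores, k)]
```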
## 4 Experiments
To explore the efficacy of model editing in medical question answering, we conduct experiments driven by three key questions: **RQ1:** Can model editing enhance performance? **RQ2:** Which fact retrieval model is most effective? **RQ3:** Does the number of editing facts impact performance?
### Experiment Setting
In this section, we present our experimental settings, including the test dataset, the target language model for editing, baseline models for comparison, and our evaluation methodology with defined criteria and metrics.
#### 4.1.1 MedQA-USMLE Dataset
In our experimental setup, we turned to the MedQA-USMLE dataset, a robust tool specifically tailored for the rigors of open-domain medical question answering. Curated with precision from the United States Medical License Exams (USMLE), this dataset encompasses a trilingual spectrum, covering English, simplified Chinese, and traditional Chinese. A detailed breakdown reveals a diverse set of 12,723 questions in English, 34,251 in simplified Chinese, and 14,123 in traditional Chinese. As highlighted by Jin [13], the dataset's design and complexity present a formidable challenge to even the most advanced OpenQA systems. Contemporary research leveraging this dataset has exposed certain performance bottlenecks, with top recorded accuracies standing at 36.7% for English questions, a commendable 70.1% for those in simplified Chinese, and 42.0% for traditional Chinese iterations. The multifaceted nature of the MedQA-USMLE dataset not only establishes it as a crucible for medical QA model evaluation but also underscores its potential as a pivotal benchmark in multilingual medical assessments.
#### 4.1.2 Pre-edit Language Model: Vicuna
The model leveraged in our study is Vicuna-13B [14], an innovative open-source chatbot developed by fine-tuning the LLaMA model on an expansive dataset derived from user-shared conversations on ShareGPT. The
Figure 2: Framework Design for Our Proposed MedEdit: (0) Preparation: use an embedding model to convert fact triplets into embedding. (1) Broad Search: search the most related facts to answer candidates and form the initial facts set. (2) Refined Search: select facts related to the question from the initial facts set and form the refined facts set. (3) Apply retrieved facts to in-context editing.
dataset comprised approximately 70K conversations, ensuring a diverse and rich training set. Modifications were made to the training regimen, building upon the groundwork laid by the Stanford Alpaca project. Noteworthy adjustments include accounting for multi-turn conversations in the training loss and significantly extending the maximum context length from the conventional 512, as seen in Alpaca, to 2048 tokens in Vicuna. It is noteworthy that the quality assessment of Vicuna-13B was intriguingly executed using GPT-4 as a comparative metric, which juxtaposed the chatbot's responses against other models. Though promising, it is essential to underscore that Vicuna, like other large language models, does manifest certain limitations which we accounted for in our experimental design.
#### 4.1.3 Baseline Models for Medical QA
In the landscape of language models, our experiments positioned our approach amidst a collection of innovative models. BERT [4] was a trailblazer, introducing bidirectional pre-training from unlabeled text. This allowed it to capture rich context by examining both preceding and following text, a feat that spawned a new wave of models. One of its derivatives, BioBERT [11], was specifically designed for the biomedical domain. By pre-training on biomedical corpora, it navigated the distinct terminologies and structures inherent to biomedical literature, often surpassing BERT and other models in biomedical text mining tasks. Another noteworthy advancement came from RoBERTa [12]. This model revisited the training regimen of BERT, making optimizations in hyperparameters and underscoring the substantial benefits of diligent parameter tuning. SapBERT [12] was exceptional with its unique self-alignment mechanism, combined with its capacity to exploit vast biomedical ontologies like UMLS, which made it a robust solution for tasks like medical entity linking. Lastly, QA-GNN [23] offered a novel take on question answering. By blending insights from pre-trained language models and knowledge graphs, it showcased an enhanced ability to reason over diverse data sources, especially evident in benchmarks like MedQA-USMLE. These models, each with their distinct strengths, provided a comprehensive benchmark for gauging the performance of our experimental approach.
#### 4.1.4 Answer Evaluation
For the assessment of our model's performance, we employed a string-matching approach to gauge the alignment between the model-generated answers and the ground truth. Specifically, we examined the extent to which the ground truth answer was present within the response provided by the model. In this context, an answer was deemed correct if the entirety of the ground truth was identifiable within the model's output. Utilizing string matching in this manner offers a straightforward yet effective method to ascertain the precision of the model's responses, particularly when direct alignment with the expected answer is paramount.
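A minimal sketch of this string-matching criterion is given below; it is only an illustration of the described evaluation rule, not the exact scoring script used in our experiments.

```python
def is_correct(model_output: str, ground_truth: str) -> bool:
    # Correct if the full ground-truth answer string appears in the model's response
    return ground_truth.strip().lower() in model_output.strip().lower()

def accuracy(outputs, answers):
    return sum(is_correct(o, a) for o, a in zip(outputs, answers)) / len(answers)
```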
### Main Experiment Results
As shown in Table 1, our edited Vicuna model achieves the best accuracy on the test split of the MedQA-USMLE dataset. We also have the following observations: (1) our post-edited Vicuna model yielded a significant boost in accuracy, achieving 48.54%. This improvement not only outpaced the baseline, pre-edited Vicuna by over 4 percentage points but also surpassed the performances of models like BioBERT-large, SapBERT, and QA-GNN.
(2) Our approach is more efficient than baselines. The enhancements in the post-edited Vicuna model are realized without resorting to resource-intensive methods like fine-tuning (BioBERT) or the overhead of training an entirely new model from scratch (SapBERT and QA-GNN). This underscores the effectiveness of model editing as a strategy, showcasing that with strategic modifications, we can achieve competitive performance gains without the typically associated computational costs.
### Ablation Study
#### 4.3.1 Evaluation on Retrieval Model
In this subsection, we compare the effectiveness of different embedding models in our fact retrieval task. We show the QA accuracy using Contriever [13] and SapBert [12] as the embedding model \(g_{z}\) in Figure 3. The obtained embeddings are used to retrieve related facts, as we discuss in Section 3.2. We can observe that Contriever slightly outperformed SapBert, securing an accuracy of 48.54% compared to SapBert's 48.07%.
\begin{table}
\begin{tabular}{l c} \hline \hline
**Method** & **Accuracy (\%)** \\ \hline BERT-base [4] & 34.3 \\ BioBERT-base [11] & 34.1 \\ RoBERTa-large [12] & 35.0 \\ BioBERT-large [11] & 36.7 \\ SapBERT [12] & 37.2 \\ QA-GNN [23] & 38.0 \\ \hline Pre-edited Vicuna [14] & 44.46 \\ \hline Post-edited Vicuna (**Ours**) & 48.54 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of MedQA-USMLE (Test) Answering Accuracy.
Figure 3: Retrieval Model Comparison.
A potential rationale for Contriever's enhanced performance might be its more comprehensive pre-training strategy. First, Contriever utilizes contrastive learning to pre-train the model, which is more effective than self-alignment pre-training [12]. Moreover, it is possible that Contriever's pre-training on various text data, beyond just medical data, equips it with a nuanced understanding of language semantics and structures, making it more adept at pinpointing relevant facts in the context of a query. In contrast, SapBert is primarily designed for specialized medical datasets and might be inherently more narrow in its focus. Although both models operated on the same dataset for this experiment, their foundational design principles could result in different retrieval competencies.
In summation, the results underscore the significance of a retrieval model's architecture and foundational training, even when working within a specialized domain. Contriever's slightly superior accuracy suggests that a broader pre-training approach can offer advantages in specific retrieval tasks, even within a constrained dataset like our disease database.
#### 4.3.2 Evaluation on Retrieval Fact Number
In the following experiment, we analyze the editing performance with different numbers of medical facts. Specifically, we first select \(k\) values from 4, 8, and 16, then incorporate the corresponding top-\(k\) facts into the prompt. We compare the QA accuracy of the edited language model in Table 2. The results indicate a positive correlation between the number of inserted facts and the model's performance, with a notable improvement as more facts are incorporated.
This correlation can be attributed to the nature of medical data. Medicine is a discipline characterized by its intricate web of interrelated facts, pathologies, and treatments. By providing the model with a richer set of facts, we essentially arm it with a more comprehensive context, thereby enabling it to discern finer nuances and interrelations when faced with medical questions. In such a complex domain, every additional piece of relevant information becomes a crucial anchor, aiding in more informed decision-making.
However, there's a practical ceiling to this approach, determined by the limitations of the prompt size that can be presented to the model. As the number of facts increases, so does the prompt length, which can potentially surpass the model's maximum allowable input length. Therefore, while the trend suggests that more facts inherently lead to better performance, the real-world application of this insight is bounded by the technical constraints of the language model.
## 5 Conclusion
In this study, we ventured into the intricate field of language model performance optimization, addressing key research questions regarding model editing, retrieval model efficacy, and the optimal number of facts required to bolster model performance. Our experiments, grounded in the MedQA-USMLE dataset and centered on the Vicuna language model, showcased critical findings. Model editing indeed emerges as a potent strategy to enhance language model output without necessitating extensive fine-tuning or the development of entirely new models. Such an approach is not only resource-efficient but also effectively bridges the performance gap, allowing our post-edited Vicuna model to eclipse baseline figures with a 48.54% accuracy.
The comparison between Contriever and SapBert for fact retrieval accentuated the nuanced importance of retrieval mechanisms, with the former marginally outperforming the latter (48.54% accuracy versus 48.07%). Our exploration into the optimal number of facts to present to a model reaffirmed that the more facts provided, the better the model performs. Yet, it also cautioned us about the limitations imposed by model prompt sizes, emphasizing a balance between information abundance and manageability.
Our efforts were also set against the backdrop of state-of-the-art models, such as BERT, BioBERT, RoBERTa, SapBERT, and QA-GNN. These models, with their distinct contributions and specializations, enriched our understanding and provided a comprehensive benchmarking canvas.
In essence, this work is a testament to the boundless potential of iterative model refinement. By understanding the intricacies of language models, we can push the boundaries of what these models can achieve, especially in specialized domains like medical question answering. We hope the insights gathered from this research could light the path for future endeavors in the field of language model optimization.
|
2309.07138 | Blind Source Separation of Single-Channel Mixtures via Multi-Encoder
Autoencoders | The task of blind source separation (BSS) involves separating sources from a
mixture without prior knowledge of the sources or the mixing system.
Single-channel mixtures and non-linear mixtures are a particularly challenging
problem in BSS. In this paper, we propose a novel method for addressing BSS
with single-channel non-linear mixtures by leveraging the natural feature
subspace specialization ability of multi-encoder autoencoders. During the
training phase, our method unmixes the input into the separate encoding spaces
of the multi-encoder network and then remixes these representations within the
decoder for a reconstruction of the input. Then to perform source inference, we
introduce a novel encoding masking technique whereby masking out all but one of
the encodings enables the decoder to estimate a source signal. To this end, we
also introduce a sparse mixing loss that encourages sparse remixing of source
encodings throughout the decoder and a so-called zero reconstruction loss on
the decoder for coherent source estimations. To analyze and evaluate our
method, we conduct experiments on a toy dataset, designed to demonstrate this
property of feature subspace specialization, and with real-world biosignal
recordings from a polysomnography sleep study for extracting respiration from
electrocardiogram and photoplethysmography signals. | Matthew B. Webster, Joonnyong Lee | 2023-08-31T17:35:42Z | http://arxiv.org/abs/2309.07138v3 | # Self-Supervised Blind Source Separation via Multi-Encoder Autoencoders
###### Abstract
The task of blind source separation (BSS) involves separating sources from a mixture without prior knowledge of the sources or the mixing system. This is a challenging problem that often requires making restrictive assumptions about both the mixing system and the sources. In this paper, we propose a novel method for addressing BSS of non-linear mixtures by leveraging the natural feature subspace specialization ability of multi-encoder autoencoders with fully self-supervised learning without strong priors. During the training phase, our method unmixes the input into the separate encoding spaces of the multi-encoder network and then remixes these representations within the decoder for a reconstruction of the input. Then to perform source inference, we introduce a _novel_ encoding masking technique whereby masking out all but one of the encodings enables the decoder to estimate a source signal. To this end, we also introduce a so-called _pathway separation loss_ that encourages sparsity between the unmixed encoding spaces throughout the decoder's layers and a so-called _zero reconstruction loss_ on the decoder for coherent source estimations. In order to carefully evaluate our method, we conduct experiments on a toy dataset and with real-world biosignal recordings from a polysomnography sleep study for extracting respiration.
## 1 Introduction
Source separation is the process of separating a mixture of sources into their unmixed source components. Blind source separation (BSS) aims to perform source separation without any prior knowledge of the sources or the mixing process and is widely applied in various fields such as biomedical engineering, audio processing, image processing, and telecommunications. Classical approaches to blind source separation such as those incorporating independent component analysis (ICA) [2, 14, 15, 16], principal component analysis (PCA) [20] or non-negative matrix factorization (NMF) [23, 41] are effective at separating sources only under specific conditions. For example, these approaches make assumptions such as statistical independence or linear dependency between sources, which limits their potential applications and effectiveness [21].
Blind source separation has a long-standing shared connection with deep learning as they both originally take inspiration from biological neural networks [3, 15, 23]. The connection between these two fields exists since even before the advancement of modern deep learning approaches. Of course, with the explosive rise of deep learning, recent approaches to source separation often incorporate deep neural architectures employing supervised learning, semi-supervised learning, or transfer learning [18, 19, 42]. Further, in deep neural network architectures such as transformers with multi-headed attention [51], or the pivotal AlexNet model [22], originally trained with two sparsely connected convolutional encoders, the tendency for separated encoding structures that share the same inputs to specialize in feature subspaces has been observed. Several works have shown feature subspace specialization to be a crucial
aspect of the learning process in the case of transformers [46, 52, 53]. Generally, the previous work focuses primarily on the interpretability of the representations that arise from this phenomenon of feature subspace specialization.
Artificial neural networks are proven to be a successful approach for many source separation tasks in the literature [5, 7, 9, 26, 35, 39, 40, 43, 57, 58]. In this paper, we utilize the natural feature subspace specialization of neural networks with multiple encoder-like structures for BSS. Our proposed method employs a fully self-supervised training process such that the network is given no examples of the sources or information about the feature distribution of the sources, and therefore, is fully compatible with the problem statement of BSS in which we do not have any prior knowledge of the sources or the mixing process and only have access to the mixture for separation. Our proposed method utilizes a convolutional multi-encoder-single-decoder autoencoder that learns to separate the feature subspaces of the given mixture's sources within the encoding space of the network. The network decoder learns to remix the concatenated outputs of the multiple encoders to predict the original input mixture. This is achieved with a simple reconstruction loss between the network's input and output without making any strong assumptions about the source signals or the mixing process. Significantly, _no assumption of linear separability from the mixture is made for the sources_; an assumption that many other works incorporate as a key component of the loss design. In addition, we propose two key regularization methods for enabling source estimation: the _pathway separation loss_ and the _zero reconstruction loss_. The pathway separation loss aims to keep the weight connections shared by source encoding spaces and subsequent mappings sparse throughout the decoder by decaying the submatrices in the weights that connect separate encoding spaces toward zero. In other words, we encourage learned pathways in the weights of the decoder that each decode a specific encoding space such that all of the pathways are sparsely connected. The degree of sparseness can be controlled by a hyperparameter which scales the contribution of the loss term with respect to the total loss. After training, source estimations are produced using a novel encoding masking technique where all but a single encoding are masked with zero arrays and then the resulting output of the decoder is the source estimation associated with the active encoding i.e. the encoding that is not masked. The quality of source estimations is ensured by applying the zero reconstruction loss on the decoder during the training phase. The zero reconstruction loss reduces the impact that parameters unrelated to the target source have on the decoding of the active encoding space for source estimations by having the decoder construct a zero output when given an all-zero encoding.
To the best of our knowledge, our proposed method is the _first_ in the literature to employ fully self-supervised multi-encoder autoencoders for performing BSS with a special focus on non-linearly mixed sources. The effectiveness of the proposed method is evaluated on a toy dataset of non-linear mixtures, and on raw real-world electrocardiogram (ECG) and photoplethysmogram (PPG) signals for extracting respiratory signals. The results demonstrate that (a) the encoders of the multi-encoder autoencoder trained with our method can effectively discover and separate the sources of the proposed toy dataset, consisting of non-linearly mixed triangle and circle shapes, and (b) our method can be successfully applied to estimate a respiratory signal from ECG and PPG signals in a fully self-supervised manner where it shows comparable results with existing methods.
We summarize our contributions in this paper as follows:
* We propose a novel method for BSS by employing fully self-supervised multi-encoder autoencoders for the first time in the literature.
Figure 1: Illustration of the general training and inference procedure for the proposed method.
* We propose a novel encoding masking technique for source separation inference, and two novel model regularization techniques, the _pathway separation loss_ and the _zero reconstruction loss_ for implicit source separation learning and source estimation.
* We demonstrate the ability of the proposed method to extract meaningful respiratory signals from raw ECG and PPG recordings using a fully self-supervised learning scheme.
## 2 Related Work
In the audio domain, source separation is applied to various fields; for example, in speech separation where the voices of different individuals are the source signals, and also in music source separation, where the individual instruments that comprise the mixed audio are the target sources. In the literature, different methods for source separation based on deep learning algorithms have been proposed. Wave-U-Net [43] brings end-to-end source separation using time-domain waveforms. The deep attractor network (DANet), [7] uses an LSTM network to map mixtures in the time-frequency domain into an embedding space which is then used with so-called attractors, taking inspiration from cognitive speech perception theories in neuroscience, to create masks for each source.
Generating masks on a per-source basis makes the assumption that the source is linearly separable from the mixture which is a reasonable and often necessary assumption to make for many problems. Deep clustering [10] starts by embedding time-frequency representations of a mixed audio signal, then using K-means clustering to segment multiple sources, and lastly with the informed segmentation uses sparse non-negative matrix factorization to perform source separation. TaSNet [28] uses a three-stage network with an encoder that estimates a mixture weight, a separation module that estimates source masks, and a decoder for reconstructing source signals. Other works make extensions to TaSNet such as the dual-path RNN [27] that improves performance while reducing the model size, Conv-TasNet [29] which brings a fully convolutional approach to the prior method, or Meta-TaSNet [38] which introduces meta-learning to generate the weights for source extraction models. As is the current trend in deep learning research, transformer models also show an impressive capability for audio source separation due to their explicit design for sequence modeling [5, 35, 44, 57].
Deep learning approaches to separating superimposed images, shadow and shading separation, or reflection separation are areas of active research within the domain of computer vision. One work [13] approaches the task of separating superimposed handwritten digits from the MNIST dataset and superimposed images of handbags and shoes by learning a single mask to separate two sources. For the first source, the mixed signal is multiplied by the mask, and for the second source, the mixed signal is multiplied by one minus the mask, which inverts the values of the mask. In another work, [32] a self-supervised BSS approach based on variational autoencoders makes the assumption that the summation of the extracted sources is equal to the mixed signal. The authors have demonstrated the effectiveness of the approach for separating linearly mixed handwritten digits and for separating mixed audio spectrograms where two sources are assumed to be the summation of the given mixture. Liu Y et al. [26] use a dual encoder-generator architecture with a cycle consistency loss to separate linear mixtures, more specifically the addition of two sources, using self-supervision. They explore the tasks of separating backgrounds and reflections, and separating albedo and shading. The Double-DIP method [9] seeks to solve image segmentation, image haze separation, and superimposed image separation with coupled deep-image-prior networks to predict a mask for separating two image layers. Separation techniques that use a single mask to separate two sources make the assumption that there is a solution that can linearly separate a sample of mixed sources even though the process of generating said mask may be non-linear. Zou Z et al. [58] propose their own method for separating superimposed images which uses a decomposition encoder with additional supervision from a separation critic and a discriminator to improve the quality of the source estimations.
Other domains where deep learning approaches to source separation are in use include biosignals processing tasks [1, 24, 33, 39, 40], finance [40], and communication signals processing [56]. There are many challenges surrounding source separation and BSS tasks, but data-driven solutions, such as deep learning, are shown to be an effective strategy for solving these problems.
## 3 Proposed method
### Preliminaries
Multi-Encoder AutoencodersAutoencoders are a class of self-supervised neural networks that compress high-dimensional data into a low-dimensional latent space \(z\) using an encoder network \(E_{\theta}\), parameterized by \(\theta\), and then reconstruct the input data from the latent space using a decoder network \(D_{\phi}\), parameterized by \(\phi\). During the training process, the encoder must learn to preserve the most salient information of the original data in order to make accurate
reconstructions [11].
\[z=E_{\theta}(x),\;\hat{x}=D_{\phi}(z) \tag{1}\] \[\min_{\theta,\phi}\mathcal{L}_{\text{recon.}}(x,D_{\phi}(E_{\theta }(x))) \tag{2}\]
Multi-encoder autoencoders [47] use a total of \(N\) encoders \(\mathbf{E_{\theta}}=\begin{bmatrix}E_{\theta_{0}}^{0}&E_{\theta_{1}}^{1}&\dots&E_{ \theta_{N}}^{N}\end{bmatrix}\), which all take the same input \(x\) for this paper, unlike prior works, such that their separate encodings \(\mathbf{z}=\begin{bmatrix}z_{0}&\dots&z_{N}\end{bmatrix}\) are combined in some manner before being passed into the decoder \(D_{\phi}\) for reconstruction.
\[\mathbf{z}=\mathbf{E_{\theta}}(x)=\begin{bmatrix}E_{\theta_{0}}^{0}(x)&E_ {\theta_{1}}^{1}(x)&\dots&E_{\theta_{N}}^{N}(x)\end{bmatrix},\;\hat{x}=D_{\phi }(\mathbf{z}) \tag{3}\] \[\min_{\mathbf{\theta},\phi}\mathcal{L}_{recon.}(x,D_{\phi}(\mathbf{E_{ \theta}}(x))) \tag{4}\]
Blind Source SeparationA formal definition of blind source separation is given as follows: For some set of \(N\) source signals \(\mathbf{s}(t)\) and some set of \(K\) noise signals \(\mathbf{n}(t)\), a mixing system \(\mathcal{F}\) operates on \(\mathbf{s}(t)\) and \(\mathbf{n}(t)\) to produce a mixed signal \(x(t)\). Further, the mixing system \(\mathcal{F}\) may be non-linear and non-stationary in time or space \(t\).
\[\mathbf{s}(t)=\begin{bmatrix}s_{0}(t)&s_{1}(t)&\dots&s_{N}(t)\end{bmatrix} \tag{5}\] \[\mathbf{n}(t)=\begin{bmatrix}n_{0}(t)&n_{1}(t)&\dots&n_{K}(t)\end{bmatrix}\] (6) \[x(t)=\mathcal{F}(\mathbf{s}(t),\mathbf{n}(t),t) \tag{7}\]
In BSS, the goal is either to find an inverse mapping of the mixing system \(\mathcal{F}^{-1}\) or extract the source signals \(\mathbf{s}(t)\) given only the mixture \(x\). Since the mixing system \(\mathcal{F}\) is often non-linear and time-variant for many real-world problems, the problem of BSS is especially challenging due to the additional ambiguity it creates in the mapped feature space as illustrated in Figure 2. Figure 2(a) shows a graph with three well-separated feature subspaces, \(s^{a}\), \(s^{b}\), and \(s^{c}\), that correspond to the sources in some set of mixtures. Figure 2(b) shows a graph where the mixing system \(\mathcal{F}\) maps the source feature subspaces into a new feature space where there may be increased ambiguity between the source feature subspaces or in the worst case complete ambiguity.
### Blind Source Separation with Multi-Encoder Autoencoders
In this paper, we train a multi-encoder autoencoder for reconstructing input mixtures where the encoders specialize in the feature subspaces of different sources as an emergent property. Specifically, the encoders unmix the sources within the encoding space of the network, and the decoder remixes the sources such that each source component removed in the encoding space results in the source being removed from the output reconstruction. In Figure 1(a), the training phase is shown during which all encoders \(\mathbf{E}\) are active and the expected output of the network is a reconstruction \(\hat{x}\) of the input signal \(x\). However, during inference, shown in Figure 1(b), only one encoder is active and the expected output is the source signal \(s^{n}\) that corresponds to the source feature subspace that the active encoder \(E^{n}\) specializes in. In place of the encodings from non-active encoders, zero matrices of the same size are concatenated to the active encoder's output encoding, keeping the position of the active encoding in place.
Multi-Encoder Design and RegularizationAs stated in the description of multi-encoder autoencoders (i.e., section 3.1), \(N\) encoders \(\mathbf{E_{\theta}}=\begin{bmatrix}E_{\theta_{0}}^{0}&E_{\theta_{1}}^{1}&\dots&E_{ \theta_{N}}^{N}\end{bmatrix}\), parameterized by \(\mathbf{\theta}=\begin{bmatrix}\theta_{0}&\theta_{1}&\dots&\theta_{N}\end{bmatrix}\), are chosen and produce encodings \(\mathbf{z}=\begin{bmatrix}z^{0}&z^{1}&\dots&z^{N}\end{bmatrix}\) for input variables \(x\in\mathbb{R}^{C\times M}\) where \(M\) represents an arbitrary number of dimensions. Prior knowledge of the number of sources is not strictly necessary for choosing the number of encoders as overestimating can result in _dead encoders_ that have no contribution to the final source reconstruction though they may still contribute information about the mixing system. An occurrence of such dead encoders is exemplified later in subsection 4.1.
Figure 2: Illustration showing the mixing of source feature subspaces by a mixing system \(\mathcal{F}\).
Batch normalization [17] is applied to all layers with the exception of the output of each encoder. We apply an L2 regularization loss (8) to the output of each encoder during training as follows to prevent these values from growing too large and causing exploding gradients, a role that batch normalization fills for the hidden layers of each encoder.
\[\mathcal{L}_{\mathsf{z}}=\lambda_{\mathsf{z}}\frac{1}{Nh}\sum_{z^{n}\in\mathbf{z}} \|z^{n}\|_{2}^{2} \tag{8}\]
where \(\|z^{n}\|_{2}^{2}\) represents the Euclidean norm of some encoding \(z^{n}\) squared, the coefficient \(\lambda_{\mathsf{z}}\) controls the penalty strength, and \(h\) represents the size of the encoding (product of the channel size and spatial size).
\[Z=E^{0}(x)\oplus E^{1}(x)\oplus\ldots\oplus E^{N}(x) \tag{9}\]
where the \(\oplus\) operator represents concatenation along the appropriate dimension. As the final step of the encoding phase, the encodings \(\mathbf{z}\) are concatenated either along the channel dimension for fully convolutional networks or along the width dimension for linear encodings to produce the input for the decoder \(Z\). One final important consideration for designing the encoders is choosing a proper encoding size. The size should not be so small as to prevent good reconstructions but also not so large that every encoder generalizes to the entire feature space rather than specializing.
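For concreteness, a minimal PyTorch sketch of a multi-encoder autoencoder with channel-wise concatenation of the encodings (Equations 3 and 9) is shown below. The layer sizes and depths are placeholders and do not reproduce the architecture reported in Figure 6.

```python
import torch
import torch.nn as nn

class MultiEncoderAE(nn.Module):
    def __init__(self, n_encoders=3, in_ch=1, enc_ch=16):
        super().__init__()
        # N identical encoders that all receive the same mixture x
        self.encoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, enc_ch, 3, stride=2, padding=1),
                nn.BatchNorm2d(enc_ch), nn.ReLU(),
                nn.Conv2d(enc_ch, enc_ch, 3, stride=2, padding=1),
            )
            for _ in range(n_encoders)
        ])
        hid = n_encoders * enc_ch
        # Single decoder operating on the concatenated encodings Z,
        # with one normalization group per encoding pathway
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hid, hid, 4, stride=2, padding=1),
            nn.GroupNorm(n_encoders, hid), nn.ReLU(),
            nn.ConvTranspose2d(hid, in_ch, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = [enc(x) for enc in self.encoders]   # separate encodings z^n
        Z = torch.cat(z, dim=1)                 # concatenate along the channel dimension
        return self.decoder(Z), z
```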
Decoder Design and RegularizationThe proposed method uses a standard single decoder architecture \(D_{\phi}\) parameterized by \(\phi\), as described previously in section 3.1, with input \(Z\) such that the reconstruction \(\hat{x}=D_{\phi}(Z)\). In this paper, two novel regularization techniques are designed for guiding the network toward conducting blind source separation. Other considerations include the use of weight normalization on the output layer and the use of group normalization. A so-called **pathway separation loss** is proposed for each layer in the decoder, with the exception of the output layer, to encourage sparse mixing of the encodings and their mappings throughout the decoder. Each _pathway_ consists of a subset of corresponding submatrices in each layer that are not only responsible for decoding their respective source encodings but also control how much information from other encoding spaces is allowed to be mixed. The result is source encodings are allowed to flow through the decoder with varying degrees of influence from other encoding spaces. For a given layer the weights are first segmented into a set of blocks or non-overlapping submatrices \(\mathbf{B}\) where the number of blocks is equal to the number of encoders squared i.e. \(N^{2}\). For each block \(B_{i,j}\) in the weight \(W\), with elements \(w\in W\), the height and width are equal to the input size \(C_{\text{in}}\) and the output size \(C_{\text{out}}\) of \(W\) divided by the number of encoders \(N\). For convolutional layers, this process is generalized with respect to the channel dimensions. Then all submatrices \(B_{i,j}\) in the set of submatrices \(\mathbf{B}\), where \(i\neq j\) i.e. off-diagonal blocks, are decayed towards zero using the L1 norm leading to a partial or full separation of the encoding spaces in the decoder. This is to say that blocks on the diagonal are responsible for decoding specific encodings and the blocks not on the diagonal are responsible for the remixing of encoding spaces. The impacts of the pathway separation loss for a fully connected layer are illustrated in Figure 3. In the case that all off-diagonal weights are exactly zero 3a, there is no mixing across the encoding spaces shown. However, if the off-diagonal blocks are exactly zero for all layers in the decoder 3b this is a hard assumption of linear separability. To make a soft assumption of linear separability, the off-diagonal blocks are
Figure 3: Illustration of the effect that the _pathway separation loss_ has on weight matrices while decoding separate encoding spaces.
rather decayed towards zero giving the network the option to use the mixing weights when necessary.
\[W=\mathbf{B}=\begin{bmatrix}B_{0,0}&B_{0,1}&\cdots&B_{0,N}\\ B_{1,0}&B_{1,1}&\cdots&B_{1,N}\\ \vdots&\vdots&\ddots&\vdots\\ B_{N,0}&B_{N,1}&\cdots&B_{N,N}\end{bmatrix},\;B_{i,j}=\begin{bmatrix}w_{\frac{C_{\mathrm{in}}}{N}i,\,\frac{C_{\mathrm{out}}}{N}j}&\cdots&w_{\frac{C_{\mathrm{in}}}{N}i,\,\frac{C_{\mathrm{out}}}{N}(j+1)}\\ \vdots&\ddots&\vdots\\ w_{\frac{C_{\mathrm{in}}}{N}(i+1),\,\frac{C_{\mathrm{out}}}{N}j}&\cdots&w_{\frac{C_{\mathrm{in}}}{N}(i+1),\,\frac{C_{\mathrm{out}}}{N}(j+1)}\end{bmatrix} \tag{10}\]
\[\mathcal{L}_{\text{pathway}}=\lambda_{pathway}\sum_{B_{i,j}\in\mathbf{B}}^{i\neq j}\alpha\|B_{i,j}\|_{1} \tag{11}\]
\[\alpha=\frac{1}{\frac{C_{\mathrm{in}}}{N}\frac{C_{\mathrm{out}}}{N}}\text{, or} \tag{12}\]
\[\alpha=\left\{\begin{array}{ll}\frac{1}{(N-i)\frac{C_{\mathrm{in}}}{N}\frac{C_{\mathrm{out}}}{N}},&\text{if }j>i\\ \frac{1}{(N-(N-i))\frac{C_{\mathrm{in}}}{N}\frac{C_{\mathrm{out}}}{N}},&\text{if }j<i\end{array}\right. \tag{13}\]
where \(\lambda_{pathway}\) denotes the penalty strength of the weight decay applied to the off-diagonal blocks, and \(\|B_{i,j}\|_{1}\) denotes the L1 norm of some block \(B_{i,j}\). The pathway separation loss is not applied to the final output layer of the network. Instead, weight normalization [37] is applied to the output layer at each step for training stability. In our standard formulation, the scalar \(\alpha\) is an additional term equal to the reciprocal of the number of elements in each block \(B_{i,j}\). However, for our experiments with PPG and ECG signals, another scaling scheme that assigns different penalties depending on each block's position is devised in consideration of the highly non-linear relationships between sources in PPG or ECG signals, as is seen in Equation 13. The role of the proposed _pathway separation loss_ is illustrated in Figure 3. For the weight matrix \(W\), the brighter squares represent weights further from zero and darker squares represent weights closer to zero. In the input and output vectors, \(v_{\text{in}}\) and \(v_{\text{out}}\), the colors represent the different encoding spaces. In Figure 3(a), no mixing occurs between the encoding spaces because the off-diagonal blocks are exactly zero. In Figure 3(b), some mixing occurs between the encoding spaces due to the presence of weights larger than zero in the off-diagonal blocks. The mixing of information across encoding spaces is visualized as the mixing of the colors that correspond to each encoding.
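A sketch of the pathway separation loss over the off-diagonal blocks of a single decoder weight might look as follows. It assumes a 2D weight (a linear layer, or a convolutional kernel flattened along its spatial dimensions) and uses the uniform scaling \(\alpha\) of Equation 12; it is an illustration rather than the reference implementation.

```python
import torch

def pathway_separation_loss(weight: torch.Tensor, n_encoders: int) -> torch.Tensor:
    """L1 decay on the off-diagonal blocks B_{i,j} (i != j) of a decoder weight.

    `weight` is assumed to be 2D with shape (rows, cols); for a convolutional
    layer the spatial kernel dimensions can be flattened beforehand.
    """
    rows, cols = weight.shape
    bh, bw = rows // n_encoders, cols // n_encoders   # block height and width
    alpha = 1.0 / (bh * bw)                           # reciprocal of block size (Eq. 12)
    loss = weight.new_zeros(())
    for i in range(n_encoders):
        for j in range(n_encoders):
            if i != j:
                block = weight[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                loss = loss + alpha * block.abs().sum()
    return loss
```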
During source inference, all encodings except the encoding associated with the source of interest are masked out with zero arrays. However, even when the encodings are masked, the biases and weights that do not decode the target source may still contribute to the final output causing undesirable artifacts. Thus, a **zero reconstruction loss** is proposed to ensure that masked source encodings have minimal contribution to the final source estimation. For the zero reconstruction loss, an all-zero encoding vector \(Z_{\text{zero}}\) is passed into the decoder, and the loss between the reconstruction \(\hat{x}_{\text{zero}}\) and the target \(x_{\text{zero}}\), an all-zero vector equal to the output size, is minimized. This encourages the network to adjust the weights and bias terms of each layer in a way that encourages inactive encoding spaces to have little or no contribution to the output reconstruction. This loss is implemented as a second forward pass through the network, but the gradients are applied in the same optimization step as the other losses. While conducting this second forward pass for the decoder, the affine parameters of each normalization layer are frozen i.e. no gradient is calculated for the affine parameters when calculating the zero reconstruction loss at each step.
\[\min_{\phi}\mathcal{L}_{\text{zero recon}}(x_{\text{zero}},D_{\phi}(Z_{\text{ zero}})) \tag{14}\]
As for the normalization layers in the decoder, group normalization [54] is used such that the number of groups matches the number of encoders. Group normalization is applied to all layers in the decoder except the output layer. Group normalization and similar approaches are ideal as they can independently normalize each encoding pathway in the decoder.
Training and InferenceThe entire training process can be summarized as minimizing the reconstruction loss between the input and output of the multi-encoder-single-decoder autoencoder model with additional regularization losses that help ensure quality source separation during inference: encoding regularization \(\mathcal{L}_{\text{z}}\), pathway separation \(\mathcal{L}_{\text{pathway}}\), and zero reconstruction loss \(\mathcal{L}_{\text{zero recon.}}\). The training process is also summarized in Algorithm 1. The final training loss is summarized with the formula below.
\[\mathcal{L}_{\text{total}}=\mathcal{L}_{\text{recon.}}+\mathcal{L}_{\text{ pathway}}+\mathcal{L}_{\text{zero recon.}}+\mathcal{L}_{\text{z}} \tag{15}\]
In this paper, only min-max scaling on the input \(x\) and a binary cross-entropy loss for the reconstruction terms \(\mathcal{L}_{\text{recon.}}\) and \(\mathcal{L}_{\text{zero recon.}}\) are considered.
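Combining the terms, one training step might be sketched as below (cf. Algorithm 1). It reuses the `MultiEncoderAE` and `pathway_separation_loss` sketches from above, treats the \(\lambda\) coefficients as hyperparameters, and simplifies two details: the pathway loss is applied to every decoder weight (the output layer is excluded in our actual formulation), and the affine parameters of the normalization layers are not frozen during the zero-reconstruction pass.

```python
import torch
import torch.nn.functional as F

def training_step(model, x, optimizer, n_encoders,
                  lam_pathway=0.5, lam_zero=1e-2, lam_z=1e-2):
    # Reconstruction of the (min-max scaled) mixture from all encodings
    x_hat, z_list = model(x)
    loss = F.binary_cross_entropy(x_hat, x)
    # Encoding L2 regularization (Equation 8)
    loss = loss + lam_z * sum(zn.pow(2).mean() for zn in z_list) / n_encoders
    # Pathway separation loss over decoder weights (Equation 11)
    for name, w in model.decoder.named_parameters():
        if "weight" in name and w.dim() >= 2:
            w2d = w.flatten(1)  # flatten conv kernels to a 2D (rows, cols) matrix
            loss = loss + lam_pathway * pathway_separation_loss(w2d, n_encoders)
    # Zero reconstruction loss (Equation 14): decode an all-zero encoding
    Z_zero = torch.zeros_like(torch.cat(z_list, dim=1))
    loss = loss + lam_zero * F.binary_cross_entropy(
        model.decoder(Z_zero), torch.zeros_like(x))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```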
During inference, source separation is performed by leaving the \(n^{\text{th}}\) encoder \(E_{\theta_{n}}^{n}\) active and masking out all other encodings with zero vectors \(\mathbf{0}\). The concatenation of the active encoding with the masked encodings \(Z^{n}\) is passed into the decoder to give the source estimation \(\hat{s}^{n}\), as shown in Figure 1(b) and in Algorithm 2.
\[Z^{n}=\big{[}\mathbf{0}\oplus\ldots\oplus E_{\theta_{n}}^{n}(x) \oplus\ldots\oplus\mathbf{0}\big{]} \tag{16}\] \[\hat{s}^{n}=D_{\phi}(Z^{n}) \tag{17}\]
For all encoders, this step is repeated to get each source prediction \(\hat{s}^{n}\) for some input \(x\). Cropping the edges of the predicted source when using convolutional architectures may be necessary due to information loss at the edges of convolutional maps which more significantly impacts source signals that have only a minor contribution to the mixed signal \(x\). For the circle and triangles dataset there is no need for cropping source estimations because the two sources have equal contributions to the mixture in terms of amplitude, however, this step is necessary for the ECG and PPG experiments.
### Implementation and Algorithm
```
1:\(\mathbf{E_{\theta}}\) is comprised of \(N\) encoder networks \(E_{\theta_{n}}^{n}\) parameterized by \(\mathbf{\theta}=[\theta_{0}\ \theta_{1}\ \ldots\ \theta_{N}]\).
2:\(D_{\phi}\) is the decoder network parameterized by \(\phi\).
3:\(x\) is a batch of samples in the dataset \(X\).
4: Choose scaling term(s) \(\alpha\) (see Equation 13).
5:for \(x\) in \(X\) do
6:\(\mathbf{z}\leftarrow\mathbf{E_{\theta}}(x)\)\(\triangleright\) Get encodings, \(\mathbf{z}=\big{[}z^{0}\ z^{1}\ \ldots\ z^{N}\big{]}\).
7:\(Z\leftarrow\big{[}z^{0}\oplus z^{1}\oplus\ldots\oplus z^{N}\big{]}\)\(\triangleright\) Concatenate encodings along channel dimension.
8:\(\hat{x}\leftarrow D_{\phi}(Z)\)\(\triangleright\) Get reconstruction.
9:\(\mathcal{L}_{\text{recon.}}\leftarrow\text{BCE}(\hat{x},x)\)
10:for \(z^{n}\) in \(\mathbf{z}\) do\(\triangleright\) Calculate \(\mathcal{L}_{x}\) (Equation 8)
11:\(\mathcal{L}_{x}\leftarrow\mathcal{L}_{x}+\frac{1}{Nh}\|z^{n}\|_{2}^{2}\)
12:endfor
13:for \(W\) in \(\phi\) do
14:for \(B_{i,j}\) in \(W\) do\(\triangleright\) For definition of \(W\) and \(B_{i,j}\) see Equation 10.
15:if \(i\neq j\) then
16:\(\mathcal{L}_{\text{pathway}}\leftarrow\mathcal{L}_{\text{pathway}}+\alpha\|B_{i,j}\|_{1}\)
17:endif
18:endfor
19:endfor
20:\(Z_{\text{zero}}\leftarrow[0\ 0\ 0\ \ldots\ 0]\)\(\triangleright\)\(Z_{\text{zero}}\) has same shape as \(Z\).
21:\(\mathcal{L}_{\text{zero recon.}}\leftarrow\text{BCE}(D_{\phi}(Z_{\text{zero}}),0)\)
22:\(\mathcal{L}_{\text{total}}=\mathcal{L}_{\text{recon.}}+\lambda_{\text{pathway}}\mathcal{L}_{\text{pathway}}+\lambda_{\text{zero recon.}}\mathcal{L}_{\text{zero recon.}}+\lambda_{x}\mathcal{L}_{x}\)
23:Update \(\theta\) and \(\phi\) by minimizing \(\mathcal{L}_{\text{total}}\) via gradient descent.
24:endfor
```
**Algorithm 1** Training loop for proposed method
All experiments were coded using Python (3.10.6) using the PyTorch (1.13.1) machine learning framework, and the PyTorch Lighting (2.0.0) library which is a high-level interface for PyTorch.
Training ProcedureAlgorithm 1 shows the basic training loop for the proposed method. In lines 1-2, we construct our architecture which has \(N\) encoders \(\mathbf{E_{\theta}}\) and a single decoder \(D_{\phi}\), parameterized by \(\theta\) and \(\phi\) respectively. In line 4, the scaling term for the pathway separation loss \(\alpha\) is chosen, two examples of which are shown in Equation 13. From line 5 to line 24, for each batch of samples \(x\) in the given dataset, the losses are calculated and the weights are updated. In line 6, the encodings \(\mathbf{z}\) are produced by forward propagating our batch of samples through each encoder \(E_{\theta_{n}}^{n}\). In lines 7-9, the resulting set of encodings is concatenated along the channel dimension to create a combined representation which is forward propagated through the decoder to produce the reconstructions \(\hat{x}\) and the binary cross entropy loss is calculated between the batch of reconstructions \(\hat{x}\) and the batch of inputs \(x\). In lines 10-12, for each encoding \(z^{n}\) in \(\mathbf{z}\), the L2 norm is calculated and the mean is taken. In lines 13-19, the pathway separation loss is calculated such that the L1 norm, scaled by \(\alpha\), is calculated for each block \(B_{i,j}\) in each layer weight \(W\) of the decoder (see Equation 10). In lines 20-21, we forward propagate a second time through the decoder with an all-zero matrix \(Z_{zero}\) the same size as our concatenated encodings \(\bar{Z}\), calculating the binary cross entropy between the decoded all-zero encoding and an all-zero matrix the same size as our input. In lines 22-23, the losses are summed to give the total
loss \(\mathcal{L}_{\text{total}}\), and then the loss is minimized to update the parameters via gradient descent. The described training loop is repeated until convergence or acceptable results are achieved.
```
1:\(E_{\mathbf{\theta}}\) is comprised of \(N\) encoder networks \(E_{\theta_{n}}^{n}\) parameterized by \(\mathbf{\theta}=[\theta_{0}\ \theta_{1}\ \dots\ \theta_{N}]\).
2:\(D_{\phi}\) is the decoder network parameterized by \(\phi\).
3:\(x\) is a sample or a batch of samples in the dataset \(X\).
4:\(\mathbf{z}\leftarrow\mathbf{E}_{\theta}(x)\)\(\triangleright\) Get encodings, \(\mathbf{z}=\left[z^{0}\ z^{1}\ \dots\ z^{N}\right]\).
5:Choose encoding \(z^{n}\) to decode.
6:\(Z^{n}=[\mathbf{0}\oplus\dots\oplus z^{n}\oplus\dots\oplus\mathbf{0}]\)\(\triangleright\)\(\mathbf{0}\) is an all-zero matrix with the same shape as \(z^{n}\)
7:\(\hat{s}^{n}=D_{\phi}(Z^{n})\)\(\triangleright\) Get source estimation.
```
**Algorithm 2** Inference procedure for proposed method
Inference ProcedureAlgorithm 2 describes the general inference procedure for producing source estimations with the proposed method. In line 4, the set of encodings \(\mathbf{z}\) for some input or batch of inputs are produced. In lines 5-6, a single encoding \(z^{n}\) (the active encoding) is left for decoding and all other encodings are replaced with all-zero matrices \(\mathbf{0}\) of the same size as their original size. These encodings, keeping the order of the chosen encoding in place, are concatenated along the channel dimension. In line 7, the resulting combined encoding is forward propagated through the decoder \(D_{\phi}\) to produce the source estimation \(\hat{s}^{n}\) associated with the encoding \(z^{n}\).
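A minimal PyTorch sketch of this masking-based inference, reusing the `MultiEncoderAE` sketch from above, is shown below; it is illustrative rather than the released implementation.

```python
import torch

@torch.no_grad()
def estimate_source(model, x, n):
    """Estimate source n by masking all other encodings with zeros (Algorithm 2)."""
    z_list = [enc(x) for enc in model.encoders]
    masked = [z if i == n else torch.zeros_like(z) for i, z in enumerate(z_list)]
    Z_n = torch.cat(masked, dim=1)   # keep the active encoding in its original position
    return model.decoder(Z_n)        # source estimation s_hat^n
```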
## 4 Experimental Evaluation
In this section, we evaluate the effectiveness of our proposed methodology in comparison to existing methods. In Sections 4.1 and 4.2, we conduct experiments with the triangles & circles toy dataset and on ECG and PPG data for respiratory source extraction, respectively.
### Triangles & Circles Toy Dataset
The toy dataset comprised of non-linear mixtures of triangle and circle shapes is generated for analysis of the proposed method. Our approach to generating a synthetic dataset for BSS is distinct from similar works [26] as it does not use a linear mixing system which better reflects many real-world source separation tasks.
Data SpecificationsThe Python Imaging Library [48] is used to generate the training and validation sets of non-linearly mixed single-channel triangles and circles. \(150,000\) triangle and circle image pairs, \(x_{\bigtriangleup}\) and \(x_{\bigcirc}\), are generated such that they both have uniform random positions within the bounds of the final image and uniform random scale between 40% and 60% of the width of the image. Where the triangle and circle shapes exist within the image, the luminance value is 100% or a value of \(1.0\), and everywhere else within the image is a luminance of 0% or a value of \(0.0\). The images are generated with a height and width of \(128\) pixels and then downsampled to a height and width of \(64\) pixels using bilinear resampling. Then for each pair of triangle and circle images, the following mixing system is applied.
\[x_{\text{mixed}}=\text{Scaled}(\sigma(\frac{\alpha}{2}(x_{\bigtriangleup}+x_{\bigcirc})))*\text{Distor. Kernel} \tag{18}\]
\[\text{Scaled}(x)=\frac{x-\text{min}(x)}{\text{max}(x)-\text{min}(x)} \tag{19}\]
where \(\sigma\) refers to the sigmoid function, Scaled refers to a min-max scaling function, and \(*\) refers to the convolution operation. The distortion kernel is designed such that the edges of the combined shapes become distorted in a shifted and blurred manner. The \(\alpha\) argument controls the degree to which the luminance of non-overlapping regions matches overlapping regions. In the final triangles & circles dataset experiment, \(\alpha\) equals 6. In addition, the kernel is vertically flipped with a \(50\%\) probability to create more variation. Specifics on the implementation and the distortion kernel are included in the project repository that accompanies this work. To create the training and test splits, 80% of the generated data is reserved for training and the remaining 20% of the data becomes the test set.
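For reference, the mixing system of Equation 18 can be sketched as follows. The distortion kernel is a placeholder argument here (the actual kernel is described in the accompanying repository), and `scipy.signal.convolve2d` stands in for the convolution operation.

```python
import numpy as np
from scipy.signal import convolve2d

def min_max_scale(x):
    return (x - x.min()) / (x.max() - x.min())

def mix(x_tri, x_cir, kernel, alpha=6.0, rng=None):
    """Non-linear mixing of a triangle image and a circle image (cf. Equation 18)."""
    rng = rng or np.random.default_rng()
    sig = 1.0 / (1.0 + np.exp(-(alpha / 2.0) * (x_tri + x_cir)))  # sigmoid
    if rng.random() < 0.5:            # flip the kernel vertically half the time
        kernel = np.flipud(kernel)
    return convolve2d(min_max_scale(sig), kernel, mode="same")
```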
Model SpecificationsFor this experiment, a fully convolutional multi-encoder autoencoder, detailed in Figure 6(b), is trained on the introduced toy dataset using the proposed method. Though there are only two sources for this dataset, three encoders are implemented to show the emergence of a dead encoder. The Adam optimizer is used for optimization with a learning rate of \(1\times 10^{-3}\), and a step learning rate scheduler is applied, multiplying the current
learning rate by \(0.1\) every 50 epochs. The model trains for 100 epochs in total, and a batch size of \(256\) is used. In addition, a weight decay of \(1\times 10^{-5}\) over all parameters is used. Lastly, three scalars (in addition to the global learning rate) are used to control the contribution of the encoding regularization loss, the pathway separation loss, and the zero reconstruction loss which are denoted as \(\lambda_{\text{pathway}}\), \(\lambda_{\text{zero recon.}}\), and \(\lambda_{x}\). For this experiment, \(\lambda_{\text{zero recon.}}\) and \(\lambda_{x}\) are set to be \(1\times 10^{-2}\) and \(\lambda_{\text{pathway}}\) is set to \(5\times 10^{-1}\).
\[\mathcal{L}_{\text{total}}=\mathcal{L}_{\text{recon.}}+\lambda_{\text{pathway }}\mathcal{L}_{\text{pathway}}+\lambda_{\text{zero recon.}}\mathcal{L}_{\text{ zero recon.}}+\lambda_{x}\mathcal{L}_{x} \tag{20}\]
Evaluation and AnalysisFigure 4 shows two samples with the predicted reconstruction and source estimations from the trained model along with the ground truth mixture and sources (a video showing the training progression can be found in the supplementary material). Despite the non-linear mixing of sources and _never having any direct or indirect supervision that uses the ground truth circle and triangle sources_, the network learns to reconstruct a surprisingly accurate representation of the sources. Nevertheless, the distortion caused by the design of the mixing system is still somewhat present. Displayed along with the triangle and circle source estimations is the output that results from decoding the output of the third encoder. As expected, this third encoding pathway ultimately becomes a _dead encoding space_ because there are only two sources that can be modeled. In terms of source reconstruction error, an average absolute validation error of \(0.021\) between the true and predicted triangle sources and an average absolute validation error of \(0.025\) between the true and predicted circle sources is achieved (a worst-case mean absolute error would be a value of \(1.0\)). For reference, the average absolute validation error of the mixed signal reconstruction is \(0.006\) an order of magnitude smaller than the source reconstructions but the results are still acceptable. The final errors and the results shown in Figure 4 highlight a key limitation of the proposed method: Source estimations are in part limited by the features that the decoder can express. However, this is a common limitation of many BSS techniques as the features of the true source signals may not be knowable or fully expressed for non-linear mixtures.
In Figure 5, the absolute values of the weights for each layer in the decoder are summed along the spatial dimensions, and the decoder input layer is transposed. The application of the pathway separation loss is apparent, but in the second convolutional block's weights (labeled _Conv. Block 1_) there are clear sparse connections shared between the encoding pathway blocks associated with the triangle and circle sources. These shared connections establish the relationship between the two source encoding spaces within the _remixing_ or decoding process. The third encoding pathway (bottom right of each weight) is also noted to have absolute values close to zero relative to the other pathway blocks throughout the decoder indicating a _dead encoding space_ as described in section 3.2.
Figure 4: Sample predictions from the triangles & circles dataset experiment.
Figure 5: Visualization of weights in the decoder after training a model on the triangles & circles dataset with the proposed method.
Figure 6: Model specifications for the ECG/PPG experiments and the triangles & circles dataset experiments.
### ECG and PPG Respiratory Source Extraction
Two practical experiments using real-world biosignal recordings from a polysomnography sleep study are conducted using the proposed method. Specifically, the blind source separation of respiration source signals from raw electrocardiogram (ECG) and photoplethysmography (PPG) signals, which are typically used for the analysis of cardiovascular health, is considered. There is a vast literature on extracting respiratory rate or respiration signals from ECG and PPG using various heuristic, analytical, or source separation techniques [1, 4, 8, 25, 31, 34, 36, 49, 50]. However, BSS methods applied to respiration extraction from ECG or PPG are challenging because the cardiovascular and respiratory sources may not be assumed linearly separable and may be correlated in time. ECG or PPG signals may be influenced by respiratory-induced amplitude modulation, frequency modulation, or baseline wander at different times in a signal depending on a number of physiological factors such as respiratory sinus arrhythmia [12], or the various mechanical effects of respiration on the position and orientation of the heart [45]. For these reasons, many BSS methods would fail to separate the cardiovascular and respiratory sources in a meaningful way, and thus heuristics and strong priors are often relied upon for respiration signal extraction. The proposed method has an advantage in that highly non-linear mixtures can be modeled without strong priors.
_Data Specifications._ The Multi-Ethnic Study of Atherosclerosis (MESA) [6, 55] is a research project funded by the NHLBI that involves six collaborative centers and investigates the factors associated with the development and progression of cardiovascular disease. The study includes 6,814 individuals of various ethnic backgrounds between the ages of 45 and 84. This sleep study also involved overnight polysomnography, wrist-worn actigraphy for seven days, and a sleep questionnaire. The stated objective of the sleep study is to examine sleep patterns and disorders across different gender and ethnic groups and understand their relationship with subclinical atherosclerosis. The MESA Sleep study provides raw polysomnography data which includes simultaneously measured ECG, PPG, and other respiratory-related signals.
Photoplethysmography (PPG) signals capture changes in blood volume and are used for measuring heart rate, oxygen saturation, and other cardiovascular parameters. Electrocardiography (ECG) signals record the heart's electrical activity and are used to assess cardiac rhythm, detect abnormalities, and diagnose cardiac conditions. Both PPG and ECG signals play important roles in monitoring and diagnosing cardiovascular health.
From the polysomnography data, raw PPG and ECG signals are extracted for the purpose of separating a respiratory source using the proposed method which is done without the supervision of a reference respiratory source or the use of any heuristics or strong priors. Thoracic excursion recordings, which measure the expansion of the chest cavity, and nasal pressure recordings, which measure the change in airflow through the nasal passage, are extracted as reference respiratory signals for verifying the quality of the predicted respiratory source separations. For this work, \(1,000\) recordings of the total \(2,056\) polysomnography recordings are randomly selected from the MESA polysomnography dataset. Then, after randomly shuffling the recordings, the first \(500\) recordings of this subset are used for training and the remainder are used for validation. As a first step, the PPG and ECG recordings for each polysomnography recording are resampled from the original 256hz to 200hz. Then each recording is divided into segments that are \(12,288\) samples in length, or approximately \(60\) seconds. As a final step, the segments are scaled using min-max scaling. No other processing or filtering is applied to the segments, however, some segments are removed if the segment appears to have physiologically impossible characteristics i.e. the extracted heart rate is too high or too low. For evaluation of the extracted respiratory sources, the NeuroKit2 open source library [30] is used to process the respiratory signals and then extract the respiratory rate. For inference, the two ends of the source estimations are clipped as the edges tend to produce noise that is much higher in amplitude than source signal representations found towards the center.
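The resampling, segmentation, and scaling steps described above can be sketched roughly as follows; the function and variable names are ours, and the physiological-plausibility filtering and the NeuroKit2-based respiratory-rate extraction are omitted for brevity.

```python
import numpy as np
from scipy.signal import resample_poly

def preprocess(raw, fs_in=256, fs_out=200, seg_len=12288):
    """Resample a raw ECG/PPG recording to 200 Hz, cut it into ~60 s segments,
    and min-max scale each segment, mirroring the preprocessing described above."""
    sig = resample_poly(np.asarray(raw, dtype=float), up=fs_out, down=fs_in)  # 256 Hz -> 200 Hz
    n_seg = len(sig) // seg_len
    segments = sig[: n_seg * seg_len].reshape(n_seg, seg_len)
    mn = segments.min(axis=1, keepdims=True)
    mx = segments.max(axis=1, keepdims=True)
    return (segments - mn) / (mx - mn + 1e-8)
```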
_Model Specifications._ For both the ECG and PPG experiments, the same fully convolutional multi-encoder autoencoder, detailed in Figure 6a, is trained using the proposed method. The Adam optimizer with a learning rate of \(1\times 10^{-3}\) is used for optimizing the model weights. Additionally, a weight decay of \(1\times 10^{-6}\) is applied over all parameters. The scalars for the additional regularization terms, \(\lambda_{\text{pathway}}\), \(\lambda_{\text{zero recon.}}\), and \(\lambda_{\text{z}}\), are set to \(1\times 10^{-3}\), \(1\times 10^{-2}\), and \(1\times 10^{-2}\), respectively. The batch size is set to \(256\) segments. The models for both PPG and ECG are trained for \(50\) epochs, and then the best-performing model from these runs is chosen for the final evaluation on the test set.
_Evaluation and Analysis._ In Figure 7, a simultaneously measured reference respiration signal and the extracted respiration source signal using our method are visualized for both ECG and PPG experiments (more samples are in the supplementary material). In Figure 7a the reference respiration signal shown is thoracic excursion, and for Figure 7b the reference respiratory signal is nasal pressure. While the predicted respiratory source and the thoracic excursion signal are often well correlated, they represent slightly different physiological processes. The predicted source is an estimated representation of the respiratory-related movement feature subspace within the mixed input
signals, ECG or PPG. Thus the predicted respiratory source and a reference respiration signal may have subtle or obvious differences in feature expression. For example, the predicted respiration source may have unique amplitude modulation patterns or appear flipped relative to the respiration reference as seen in Figure 7a. For these reasons, the proposed method is evaluated by extracting respiratory rate from the predicted source signals and from two reference respiratory signals, nasal flow and thoracic excursion, which is achieved by measuring the periods between inhalation onsets or troughs for one-minute segments. Then the mean absolute error (MAE) between the respiratory rates of each of the source estimations and each of the reference signals is calculated to determine the degree to which the predicted respiration sources preserve respiratory information. The experimental results in Table 1 show the test mean absolute error (MAE) in breaths per minute between each method of respiratory signal extraction and the two reference respiratory signals, nasal pressure and thoracic excursion (abbreviated as _nasal press._ and _thor._) where _type_ refers to the input signal that the method extracts the respiratory signal from, either ECG or PPG. The primary result shows that the proposed method for BSS performs better than the competing heuristic approaches designed specifically for the task of respiratory signal extraction. Two additional baselines are included as a point of reference: the results of a supervised model trained to predict the nasal pressure signal from the two types of input signals (similar to other works [1, 8, 34]) and a direct comparison between the thoracic excursion signal and nasal flow signal. The supervised approach, employing a standard autoencoder (AE) architecture, generally performs well at predicting the reference respiratory signal it is trained to reconstruct, nasal pressure, but generalizes poorly to the other reference respiratory
\begin{table}
\begin{tabular}{l c c} \hline \hline Method (Type) & \multicolumn{2}{c}{Breaths/Min. MAE \(\downarrow\)} \\ & Nasal Press. & Thor. \\ \hline BSS & & \\ Ours (PPG) & 1.51 & 1.50 \\ Ours (ECG) & 1.73 & 1.59 \\ \hline Supervised & & \\ AE (PPG) & 0.46 (0.87) & 2.07 (1.71) \\ AE (ECG) & 0.48 (1.03) & 2.16 (1.69) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Test mean absolute error (breaths per minute) between each respiratory signal extraction method and the two reference respiratory signals, nasal pressure (Nasal Press.) and thoracic excursion (Thor.). Values in parentheses for the supervised autoencoder are obtained with early stopping.
signal, thoracic excursion. Early-stopping does result in better generalization (shown in parenthesis next to the primary result). However, for many tasks where BSS is applied, a supervised approach is usually not viable because reference signals for sources may not be available, though having reference signals was essential for the evaluation of our method in this paper. The absolute values of the decoder weights summed along the spatial dimension for the PPG and ECG models are shown in Figure 8. Due to the complex non-linear relationships that occur between sources in a PPG or ECG signal, a lower penalty is applied to the pathway separation loss, and the alternative \(\alpha\) scalar (13) is applied. Thus, the weights within the off-diagonal pathway blocks have generally higher values in comparison to the model trained for the triangles & circles dataset experiment.
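A rough sketch of the breaths-per-minute MAE evaluation is given below. The paper uses the NeuroKit2 library for respiratory processing and rate extraction, so the simple trough-picking heuristic and the assumed 1.5 s minimum breath spacing used here are simplifying stand-ins rather than the exact procedure.

```python
import numpy as np
from scipy.signal import find_peaks

def resp_rate_bpm(resp, fs=200.0):
    """Estimate breaths/min from trough-to-trough intervals of a ~60 s segment."""
    troughs, _ = find_peaks(-np.asarray(resp, dtype=float), distance=int(1.5 * fs))
    if len(troughs) < 2:
        return np.nan
    period = np.mean(np.diff(troughs)) / fs      # seconds per breath
    return 60.0 / period

def rate_mae(pred_sources, reference_signals, fs=200.0):
    rates_p = np.array([resp_rate_bpm(s, fs) for s in pred_sources])
    rates_r = np.array([resp_rate_bpm(s, fs) for s in reference_signals])
    return np.nanmean(np.abs(rates_p - rates_r))
```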
### Code Availability
Our implementation of the experiments described in this paper is available on GitHub at the following link: github.com/webstah/self-supervised-bss-via-multi-encoder-ae.
## 5 Conclusion
The method for BSS using multi-encoder autoencoders introduced in this paper presents a novel data-driven solution to BSS without strong application-specific priors or the assumption that sources are linearly separable from the given mixture. The ability of the separate encoders to discover the feature subspaces of sources and of the decoder to reconstruct those sources was demonstrated. In addition, this work demonstrates that the proposed method does not have a limitation in the number of sources that can be separated: overestimating the number of sources with more encoders than is truly necessary simply leads to dead encoders. However, there are some key limitations. Due to the nature of the BSS task, it is not possible to know if the extracted sources are true representations of the sources. As seen in the triangles & circles dataset experiment, the distortion applied by the mixing system is not fully removed from the reconstructed sources. Future work exploring these issues can lead to improved results as well as enhance the understanding of the proposed method in relation to the BSS task.
### Broader Impact Statement
While this work shows that meaningful source separation occurs with the ECG and PPG experiments, the ability of the proposed method, or any BSS approach, to reconstruct exact sources should not be overstated and this fact must be taken into consideration before applying the proposed method in any real-world context especially those involving
Figure 8: The absolute values of the decoder weights summed along the spatial dimension for the PPG and ECG models showing the application of the _pathway separation loss_.
medical diagnosis or decisions that may negatively impact any individuals, demographics, or society as a whole. Further investigation is necessary to understand the shortcomings of the proposed method despite the positive results presented in this paper.
### Acknowledgements
We want to express our sincere thanks to Dr. Masoud R. Hamedani and Dr. M. Dujon Johnson for their help in editing and providing valuable suggestions to enhance the clarity and flow of this paper. Additionally, we thank Dr. Jeremy Wurbs for their engaging discussions that helped kick-start our research journey.
This work was supported by the Technology Development Program (S3201499) funded by the Ministry of SMEs and Startups (MSS, Korea).
The Multi-Ethnic Study of Atherosclerosis (MESA) Sleep Ancillary study was funded by NIH-NHLBI Association of Sleep Disorders with Cardiovascular Health Across Ethnic Groups (RO1 HL098433). MESA is supported by NHLBI funded contracts HHSN268201500003I, N01-HC-95159, N01-HC-95160, N01-HC-95161, N01-HC-95162, N01-HC-95163, N01-HC-95164, N01-HC-95165, N01-HC-95166, N01-HC-95167, N01-HC-95168 and N01-HC-95169 from the National Heart, Lung, and Blood Institute, and by cooperative agreements UL1-TR-000040, UL1-TR-001079, and UL1-TR-001420 funded by NCATS. The National Sleep Research Resource was supported by the National Heart, Lung, and Blood Institute (R24 HL114473, 75N92019R002).
|
2309.13165 | Large Language Models Are Also Good Prototypical Commonsense Reasoners | Commonsense reasoning is a pivotal skill for large language models, yet it
presents persistent challenges in specific tasks requiring this competence.
Traditional fine-tuning approaches can be resource-intensive and potentially
compromise a model's generalization capacity. Furthermore, state-of-the-art
language models like GPT-3.5 and Claude are primarily accessible through API
calls, which makes fine-tuning models challenging. To address these challenges,
we draw inspiration from the outputs of large models for tailored tasks and
semi-automatically developed a set of novel prompts from several perspectives,
including task-relevance, supportive evidence generation (e.g. chain-of-thought
and knowledge), diverse path decoding to aid the model. Experimental results on
ProtoQA dataset demonstrate that with better designed prompts we can achieve
the new state-of-art(SOTA) on the ProtoQA leaderboard, improving the Max
Answer@1 score by 8%, Max Incorrect@1 score by 4% (breakthrough 50% for the
first time) compared to the previous SOTA model and achieved an improvement on
StrategyQA and CommonsenseQA2.0 (3% and 1%, respectively). Furthermore, with
the generated Chain-of-Thought and knowledge, we can improve the
interpretability of the model while also surpassing the previous SOTA models.
We hope that our work can provide insight for the NLP community to develop
better prompts and explore the potential of large language models for more
complex reasoning tasks. | Chenin Li, Qianglong Chen, Yin Zhang, Yifei Zhang, Hongxiang Yao | 2023-09-22T20:07:24Z | http://arxiv.org/abs/2309.13165v1 | # Large Language Models Are Also Good Prototypical Commonsense Reasoners
###### Abstract
Commonsense reasoning is a pivotal skill for large language models, yet it presents persistent challenges in specific tasks requiring this competence. Traditional fine-tuning approaches can be resource-intensive and potentially compromise a model's generalization capacity. Furthermore, state-of-the-art language models like GPT-3.5 and Claude are primarily accessible through API calls, which makes fine-tuning models challenging. To address these challenges, we draw inspiration from the outputs of large models for tailored tasks and semi-automatically developed a set of novel prompts from several perspectives, including task-relevance, supportive evidence generation (e.g. chain-of-thought and knowledge), diverse path decoding to aid the model. Experimental results on ProtoQA dataset demonstrate that with better designed prompts we can achieve the new state-of-art(SOTA) on the ProtoQA leaderboard, improving the Max Answer@1 score by 8%, Max Incorrect@1 score by 4% (breakthrough 50% for the first time) compared to the previous SOTA model and achieved an improvement on StrategyQA and CommonsenseQA2.0 (3% and 1%, respectively). Furthermore, with the generated Chain-of-Thought and knowledge, we can improve the interpretability of the model while also surpassing the previous SOTA models. We hope that our work can provide insight for the NLP community to develop better prompts and explore the potential of large language models for more complex reasoning tasks.
Commonsense Reasoning, Large Language Model, Prompt
|
2309.15514 | The conformal sector of Quantum Einstein Gravity beyond the local
potential approximation | The anomalous scaling of Newton's constant around the Reuter fixed point is
dynamically computed using the functional flow equation approach. Specifically,
we thoroughly analyze the flow of the most general conformally reduced
Einstein-Hilbert action. Our findings reveal that, due to the distinctive
nature of gravity, the anomalous dimension $\eta$ of the Newton's constant
cannot be constrained to have one single value: the ultraviolet critical
manifold is characterized by a line of fixed points $(g_\ast(\eta),
\lambda_\ast (\eta))$, with a discrete (infinite) set of eigenoperators
associated to each fixed point. More specifically, we find three ranges of
$\eta$ corresponding to different properties of both fixed points and
eigenoperators and, in particular, the range $ \eta < \eta_c \approx 0.96$ the
ultraviolet critical manifolds has finite dimensionality. | Alfio Bonanno, Maria Conti, Dario Zappalà | 2023-09-27T09:29:37Z | http://arxiv.org/abs/2309.15514v1 | # The conformal sector of Quantum Einstein Gravity beyond the local potential approximation
###### Abstract
The anomalous scaling of Newton's constant around the Reuter fixed point is dynamically computed using the functional flow equation approach. Specifically, we thoroughly analyze the flow of the most general conformally reduced Einstein-Hilbert action. Our findings reveal that, due to the distinctive nature of gravity, the anomalous dimension \(\eta\) of the Newton's constant cannot be constrained to have one single value: the ultraviolet critical manifold is characterized by a line of fixed points \((g_{*}(\eta),\lambda_{*}(\eta))\), with a discrete (infinite) set of eigenoperators associated to each fixed point. More specifically, we find three ranges of \(\eta\) corresponding to different properties of both fixed points and eigenoperators and, in particular, the range \(\eta<\eta_{c}\approx 0.96\) the ultraviolet critical manifolds has finite dimensionality.
keywords: Functional Renormalization Group, Asymptotic Safety +
Footnote †: journal:
## 1 Introduction
Understanding the structure of the ultraviolet (UV) critical manifold in Asymptotically Safe (AS) theories of gravity has been a formidable challenge despite substantial efforts. A striking departure from well-established examples, such as scalar theories in \(d\) dimensions, lies in the unique role played by Newton's constant, \(G_{N}\), in determining the scaling dimension of the gravitational field in the action:
\[S[g_{\mu\nu}]=-\frac{1}{16\pi}\int d^{4}x\frac{1}{G_{N}}\sqrt{g}R \tag{1}\]
In this context, the inverse of \(G_{N}\), equivalent to the wavefunction renormalization function \(Z\) for the graviton, assumes a central role. Percacci and Perini highlighted a crucial distinction from the scalar field case, where, despite its technical inessentiality [1], \(1/G_{N}\) should scale as the inverse of the cutoff square. Consequently, the anomalous dimension of the field obeys the equation:
\[\eta=\beta_{N}-2. \tag{2}\]
Here, \(\beta_{N}\) denotes the beta-function for Newton's constant which can be obtained from (1). At the fixed point (FP), where \(\beta_{N}\) vanishes, the anomalous dimension \(\eta\) takes on the value of -2 [2; 3; 4]. The value \(\eta=-2\) merely reflects the classical (negative) natural dimension of the background Newton's constant. In contrast, the anomalous dimension associated with graviton fluctuations, denoted as \(\eta_{h}\), emerges dynamically and takes a value close to one (\(\eta_{h}\approx 1.02\) as reported in [5]).
The central question we aim to tackle in this paper is whether we can transcend the definition provided in equation (2), which is based on a \(\beta\)-function approach, (i.e. implying polynomial truncation of the theory space), and propose a dynamic determination of the anomalous dimension of the background field, similar to the case of inessential couplings in standard quantum field theory (QFT). To overcome the limitations of equation (2), it is evident that we must approach the problem at a non-perturbative level, taking into account all possible operators compatible with the general group of diffeomorphisms.
As originally pioneered by Wilson and Wegner [6; 7], the Renormalization Group (RG), when implemented with the aid of Functional Flow Equations, serves as a powerful tool for exploring the structure of the UV critical manifold at a non-perturbative level [8; 9; 10; 11]. At the heart of this approach lies the Effective Average Action \(\Gamma_{k}\), representing the effective action
of the system at the scale \(k\), which converges to the familiar Effective Action in the limit as \(k\to 0\).
In recent years, the Renormalization Group, implemented with Functional Flow Equations, has proven invaluable for exploring the properties of the UV critical manifold in Asymptoticly Safe (AS) theories of gravity beyond the framework defined in (1). Researchers have turned to theories of the form:
\[\Gamma_{k}[g_{\mu\nu}]=\frac{1}{16\pi}\int d^{4}x\sqrt{g}f_{k}(R), \tag{3}\]
aiming to move beyond polynomial truncation. For an extensive review of this approach, please refer to [12]. The rationale behind (3) draws parallels with the Local Potential Approximation (LPA) in scalar theories, where exact flow equations for the potential \(V(\phi)\) have been derived and successfully solved, leading to highly accurate studies of critical properties in dimensions \(d=3\) and \(d=4\). Despite substantial computational efforts dedicated to analyzing scaling solutions (i.e., fixed point) of the flow equation for (3), progress in moving beyond Eq. (2) has remained somewhat limited. This challenge is further compounded by the issue of background independence.
The Wetterich equation is grounded in the Effective Average Action \(\Gamma_{k}\), which, in the context of the background field method, depends on two distinct metrics: the full metric \(g_{\mu\nu}\) and the background field metric \(\tilde{g}_{\mu\nu}\). While in standard gauge theory all correlation functions of the fundamental field can be derived from "on shell" background field correlation functions with the aid of a gauge-fixing term invariant under a background transformation, in gravity, particularly following the tenets of loop quantum gravity [13], background independence carries a more profound significance. Here, full background independence entails that no metric should hold any privileged status at any stage of the calculation.
Given the formidable technical complexity of achieving full background independence, advancement in this direction necessitates significant simplifications. For instance, building upon the approach initiated by [14; 15], several studies have explored a conformally reduced version of the theory [16; 17], demonstrating that the conformal factor plays a pivotal role in generating the Reuter fixed point with a finite number of relevant directions, even beyond the standard Einstein-Hilbert truncation [18].
In contrast, Morris and his collaborators have addressed the more rigorous requirement of full background independence directly within the flow equation framework in a series of works [19; 20; 21]. Particularly, in [21], the use of a specific cutoff function allowed for the examination of the ultraviolet critical manifold of the conformally reduced theory beyond the LPA approximation employed in [15]. A significant finding from this work is the identification of a line of fixed points, each with a continuous set of relevant eigendirections. Taken at face value, this result suggests that the theory is renormalizable but not predictive, given the infinite number of relevant directions.
In this paper, we pursue an alternative path. We employ a spectrally-adjusted cutoff [22] previously utilized in scalar theory and quantum gravity investigations [17; 23; 24], to surpass the LPA approximation presented in [15]. Consequently, we adopt a single metric approximation for the renormalized flow. By virtue of our cutoff selection, we are able to obtain analytical solutions. While we confirm the presence of a line of fixed points, we observe a discrete spectrum of eigendirections. Notably, the UV critical manifold exhibits finite dimensionality for \(\eta<\eta_{c}\approx 0.96\).
The structure of the paper is the following. In Sec. 2 the general framework and the approximation scheme are discussed; in Sec. 3 the numerical analysis employed for the determination of the fixed points and for the eingenvalue spectra is presented; the conclusions are reported in Sec. 4.
## 2 General setup and differential equations in the flat projection
We start by considering the well known Euclidean Einstein-Hilbert action [17]
\[S^{EH}[g_{\mu\nu}]=-\frac{1}{16\pi}\int d^{d}x\sqrt{g}\,G^{-1}\,(R-2\Lambda), \tag{4}\]
which, by Weyl rescaling \(g_{\mu\nu}=\phi^{2\nu}\tilde{g}_{\mu\nu}\) where \(\nu\) is a parameter taken as \(\nu=\frac{2}{d-2}\), can be written as
\[S^{EH}_{k}[\phi]=\int d^{d}x\,\frac{\sqrt{g}\,Z_{k}}{2}\,\left(\hat{g}^{\mu\nu }\partial_{\mu}\phi\partial_{\nu}\phi+A\,\hat{R}\,\phi^{2}-4A\,\Lambda_{4}\phi ^{\frac{2d}{d-2}}\right), \tag{5}\]
where \(\hat{R}\equiv R(\hat{g})\), \(A=A(d)=\frac{d-2}{8(d-1)}\) and
\[Z_{k}=-\frac{1}{2\pi\,G_{k}}\,\frac{d-1}{d-2} \tag{6}\]
and it is assumed that \(\Lambda_{k}\) and \(G_{k}\) (and \(Z_{k}\) through Eq. (6)) are RG scale \(k\)-dependent parameters. It is then convenient to consider a general action of the type
\[S_{k}[\phi]=\int d^{d}x\sqrt{\tilde{g}}\,\left(\frac{1}{2}Z_{k}\,\,\hat{g}^{ \mu\nu}\partial_{\mu}\phi\,\partial_{\nu}\phi+V_{k}[\phi]\right), \tag{7}\]
where \(V_{k}[\phi]\equiv Z_{k}U_{k}[\phi]\), with \(U[\phi]\) a potential term for the action. Then, the parallel between Eqs. (5) and (7) is evident. Following [17; 23] the proper-time flow equation reads
\[\partial_{t}\,\Gamma_{k}[f;\chi_{B}]=-\frac{1}{2}\,\mathrm{Tr}\int_{0}^{\infty}\frac{ds}{s}\,\partial_{t}\,\rho_{k}\,\exp\left(-s\,\frac{\delta^{2}\Gamma_{k}[f;\chi_{B}]}{\delta f^{2}}\right), \tag{8}\]
where \(t\equiv\log k\) is the RG time and the original field \(\phi\) is split into a constant background component \(\chi_{B}\) and a dynamical fluctuation \(\tilde{f}\): \(\phi(x)=\chi_{B}+\tilde{f}(x)\). Moreover, a reference metric \(\hat{g}_{\mu\nu}\) is chosen through a Weyl rescaling \(\tilde{g}_{\mu\nu}=\chi_{B}^{2\nu}\hat{g}_{\mu\nu}\), with constant \(\chi_{B}\) and \(\nu=2/(d-2)\), where \(d\) is the space-time dimension and, because of this definition, the two momentum scales (defined as the eigenvalues of \(-\hat{\square}\) and \(-\tilde{\square}\), respectively) are usually related through
\[\hat{k}^{2}=\chi_{B}^{2\nu}\tilde{k}^{2}. \tag{9}\]
and the conformal factor is considered as dimensionless. However, as discussed in the introduction, in our investigation we consider an anomalous scaling for the \(\chi_{B}\) and we write
\[\chi_{B}=\psi_{B}\,k^{\frac{\eta}{2}}\,. \tag{10}\]
where \(\psi_{B}\) is a dimensionless field and \(\eta\) is the anomalous dimension. This choice entails some differences with respect to (9). While the identity \(\hat{\Box}=\chi_{B}^{2\nu}\,\tilde{\Box}\) always holds true, in order for \(\hat{\Box}\) to retain the correct dimensionality of \(k^{2}\) (\([\hat{\Box}]=k^{2}\)), it must be that \([\tilde{\Box}]=k^{2-\eta\nu}\). Hence, the relation between momenta built with the reference metric and the background metric must be of the following form:
\[\hat{k}^{2}=\chi_{B}^{2\nu}\,\tilde{k}^{2-\eta\nu}, \tag{11}\]
which for \(d=4\), i.e. \(\nu=1\), simply reduces to \(\hat{k}^{2}=\chi_{B}^{2}\,\tilde{k}^{2-\eta}\).
The smooth infrared (IR) regulator \(\rho_{k}=\rho_{k}[\chi_{B}]\) appearing in Eq. (8) has been repeatedly analyzed in the literature [22; 23; 24; 25; 26; 17].
Together with \(W_{1}\) and \(W_{2}\), the functions \(f_{0}\), \(g_{0}\), \(h_{0}\) and \(l_{0}\) have been defined as \(f_{0}=\eta\nu-2\), \(g_{0}=3\eta\nu-5\), \(h_{0}=5\eta\nu-9\) and \(l_{0}=2+\eta\nu\), that appear in the following definitions
\[q_{0} =d+f_{0}n \tag{26}\] \[r_{0} =d+f_{0}(1+n)\] (27) \[s_{0} =d+f_{0}(2+n)\] (28) \[t_{0} =d(1+f_{0})+f_{0}g_{0}\] (29) \[u_{0} =d^{2}(f_{0}+1)+df_{0}h_{0}+f_{0}^{2} \tag{30}\]
and the '0' subscripts have been introduced to avoid confusion with other variables. We now introduce \(Y_{k}\), \(z_{k}\), \(X\) as the dimensionless counterparts of the potential \(V_{k}\), the renormalization function \(Z_{k}\) and the field \(\phi\) respectively as follows:
\[V_{k}[\phi] =Y_{k}[X]k^{d} \tag{31}\] \[Z_{k}[\phi] =z_{k}[X]k^{d-2-\eta}\] (32) \[\phi =Xk^{\frac{\eta}{2}}, \tag{33}\]
where \(\eta\) represents the anomalous dimension of _both_ fields \(\chi_{B}\) and \(\phi\), as shown by equations (10) and (33) respectively.
Following the single field approximation [17], once the fluctuation \(\bar{f}\) defined in (13) is integrated out, the expectation value of the dimensionless field \(X\), introduced in Eq. (33), is identified with the background \(\psi_{B}\), which, for the sake of simplicity, from now on is indicated as \(x\):
\[X=\psi_{B}\equiv x\,. \tag{34}\]
In terms of the dimensionless quantities \(Y_{k}\) and \(z_{k}\) in \(d=4\), equations (22) and (23) now read more easily as:
\[\partial_{t}Y_{k}(x)=-4Y_{k}+\frac{\eta\,x\,Y_{k}^{\prime}}{2}+n^{\frac{4}{ \eta-2}+\alpha}x^{\frac{10}{\eta+2}+\alpha}z_{k}z_{k}^{\frac{4}{\eta-2}+\alpha }\,Q_{1}^{\frac{4}{\eta}-2-\alpha-3}\,Q_{2}^{3} \tag{35}\]
and
\[\partial_{t}z_{k}(x)=(\eta-2)\,z_{k}+\frac{\eta\kappa_{k}^{\prime }}{2}-\frac{n^{\frac{4}{\eta-2}+\alpha}((\eta-2)n+4)}{3(\eta-2)^{4}}x^{\frac{4 }{\eta-2}+2\alpha}\times\] \[\times\,z_{k}^{\frac{4}{\eta-2}+\alpha-1}\,Q_{1}^{\frac{4}{\eta- 2}-\alpha-3}\left\{(\eta-1)(Y_{k}^{(3)})^{2}z_{k}^{2}\!\left(\!(\eta-2)(n+1)+4 \!\right)\!\times\right.\] \[\times\left((\eta-2)(n+2)+4\right)-\left(4(\eta-1)+(\eta-2)(3\eta -5)\right)\!\times\] \[\times\left((\eta-2)(n+1)+4\!\right)\!Z_{k}^{(3)}z_{k}^{\ast}\! \left.\!z_{k}^{\ast}\!\left.\!Q_{1}+\left[3(\eta-2)^{2}z_{k}z_{k}^{\ast}\!+ \right.\right.\right.\] \[\left.\left.+\left((\eta-2)^{2}+4(5\eta-9)(\eta-2)+16(\eta-1) \right)\!(z_{k}^{\ast})^{2}\right]Q_{1}^{2}\,\right\} \tag{36}\]
where
\[Q_{1}=\left(nx^{2}z_{k}+Y_{k}^{\prime\prime}\right) \tag{37}\]
and
\[Q_{2}=\left(nx^{\frac{4}{\eta-2}}z_{k}+x^{-\frac{2\eta}{\eta-2}}Y_{k}^{\prime \prime}\right) \tag{38}\]
and the derivatives appearing in the right hand side of (35) and (36) are taken with respect to the dimensionless field \(X\).
Eqs. (35) and (36) are lengthy and rather involved as far as their numerical treatment is concerned, especially in the case of large \(n\). Therefore we shall take the limit \(n\rightarrow+\infty\) in the two flow equations, which are turned into a more compact form, as all the troublesome powers of \(n\) in (35) and (36) are replaced in this limit by \(n\)-independent exponential factors that are easier to handle numerically. In this limit, the two flow equations read:
\[\partial_{t}Y_{k}(x)=-4Y_{k}+\frac{1}{2}\eta xY_{k}^{\prime}+x^{4}\,e^{-\frac{Y_{k}^{\prime\prime}}{x^{2}z_{k}}} \tag{39}\]
and
\[\partial_{t}z_{k}(x)=(\eta-2)z_{k}+\frac{1}{2}\eta xz_{k}^{\prime}-\frac{e^{-\frac{Y_{k}^{\prime\prime}}{x^{2}z_{k}}}}{3(\eta-2)^{3}x^{2}z_{k}^{2}}\times\] \[\times\left[(\eta-2)^{2}(\eta-1)(Y_{k}^{(3)})^{2}+\right.\] \[+2\left(-3\eta^{3}+13\eta^{2}-20\eta+12\right)x^{2}Y_{k}^{(3)}z_{k}^{\prime}+\] \[\left.+x^{4}\left(3(\eta-2)^{3}z_{k}z_{k}^{\prime\prime}+\left(21\eta^{2}-64\eta+60\right)\left(z_{k}^{\prime}\right)^{2}\right)\right] \tag{40}\]
In the next Section we shall analyse the fixed point structure of these flow equations including the central role of the anomalous dimension \(\eta\) and then, the nature of the related eigendirections.
## 3 The role of the anomalous dimension
### The fixed point line structure
In this Section we focus on the fixed point solutions of the flow equations (39) and (40), i.e. \(\partial_{t}Y_{k}(x)=\partial_{t}z_{k}(x)=0\), for the generalization of the conformally reduced Einstein-Hilbert (CREH) truncation shown in (5), with \(d=4\) and with flat curvature \(\hat{R}=0\). It is easy to check that the functions
\[z =-\frac{1}{w} \tag{41}\] \[Y =\frac{u}{w}x^{4} \tag{42}\]
(where \(w\) and \(u\) are numerical coefficients), are such that all dependence on \(x\) gets canceled both in Eqs. (39) and (40), so that the two differential equations reduce to simple numerical equations. In addition, from (4), (5), (6) and (41), (42), one easily deduces that \(u\) and \(w\) correspond respectively to the dimensionless cosmological constant \(\lambda\) and Newton's constant \(g\).
It is crucial to remark that all numerical searches performed to determine alternative solutions \(Y(x)\) and \(z(x)\) did not produce any positive result, leaving the monomial functions in (41) and (42) as the only acceptable fixed point solution among the entire set of real, polynomial or non-polynomial, functions.
After replacing the functions (41), (42) in Eqs. (39) and (40), we get the following equations in terms of the three parameters \(u\), \(w\), and \(\eta\) :
\[e^{12u}+\frac{2u\,(-2+\eta)}{w}=0 \tag{43}\] \[\frac{2-\eta}{w}-\frac{192\,e^{12u}\,u^{2}\,(-1+\eta)}{-2+\eta}=0 \tag{44}\]
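The reduction can be checked mechanically: substituting the ansatz (41)-(42) into the right-hand side of Eq. (39), the \(x\)-dependence factors out and what remains is exactly the left-hand side of Eq. (43). A small SymPy sketch of this check (for Eq. (39) only, using the exponential factor as written above) reads:

```python
import sympy as sp

x, u, w = sp.symbols('x u w', positive=True)
eta = sp.symbols('eta')
Y = (u / w) * x**4          # ansatz (42)
z = -1 / w                  # ansatz (41)

# right-hand side of the fixed-point condition from Eq. (39)
rhs = -4*Y + sp.Rational(1, 2)*eta*x*sp.diff(Y, x) \
      + x**4 * sp.exp(-sp.diff(Y, x, 2) / (x**2 * z))

# equals exp(12*u) + 2*u*(eta - 2)/w, i.e. Eq. (43) set to zero
print(sp.simplify(rhs / x**4))
```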
Clearly, one of the three parameters cannot be determined and we choose to parameterize the solutions of Eqs. (43), (44), in terms of the anomalous dimension \(\eta\). The corresponding solutions \(u^{\ast}(\eta)\) and \(w^{\ast}(\eta)\) are reported in Fig. 1. It is evident from Eqs. (43), (44), that \(\eta=2\) is a singular point, whose origin can
Figure 1: Fixed point line for different values of the anomalous dimension \(\eta\).
Blue curve: \(\eta<1\); red curve: \(\eta>2\); green curve: \(1<\eta<2\).
Figure 2: Eigenvalues for three different types of eigenfunctions: \((f,h)=\{(s,r\,x^{4}),(s\,x^{2},r\,x^{6}),(s\,x^{4},r\,x^{8})\}\), for different values of the anomalous dimension \(\eta\). Blue curve: \(\eta<1\); red curve: \(\eta>2\). As eigenvalues always show the form \(\theta_{1,2}=\text{Re}(\theta)\pm i\,\text{Im}(\theta)\) for these values of \(\eta\), only one of the two Eigenvalues is plotted.
be traced back to the dimensions of the background field and to the exponent of the momentum \(\bar{k}\) in Eq. (11). However, as can be checked in Eqs. (43) and (44), \(u^{*}\) and \(w^{*}\) are both regular and vanish in the limit \(\eta\to 2\) (In Fig. 1, \(u^{*}\) and \(w^{*}\) are plotted up to \(\eta=2\pm 0.001\)). From Eqs. (43), (44), we also notice that, in the limit \(\eta\to 1\), at least one of the two solutions \(u^{*}\) and \(w^{*}\) diverge, thus making the point \(\eta=1\) singular; in addition, when \(\eta\to\pm\infty\), the parameter \(w^{*}\) diverges.
Therefore, Fig. 1 shows three disconnected lines of fixed points parameterized in terms of the continuous variable \(\eta\), with two singular points at \(\eta=1\) and \(\eta=2\). This is quite different from the picture of a simple scalar quantum field theory where, in the same kind of analysis, one can constrain \(w^{*}=1\), because any rescaling of \(w^{*}\) would imply a physically irrelevant rescaling of \(u^{*}\) and of the field, with no change in the spectrum of the eigenvalues of the linearized flow equations around the fixed point solution. In this case, the inclusion of the same anomalous dimension for both fields \(\phi\) and \(\chi_{B}\), together with the identification of \(u^{*}\) and \(w^{*}\) with the physically meaningful quantities \(\lambda^{*}\) and \(g^{*}\) makes the rescaling of \(w^{*}\) an appreciable operation. We conclude that each FP of the lines in Fig. 1, corresponds to a distinct solution.
From Fig. 1, we notice that the solutions \(u^{*}\) and \(w^{*}\) maintain a definite sign within each of the three ranges \(\eta<1\), \(1<\eta<2\) and \(2<\eta\), but they can switch sign when passing from one range to another. Finally, we recall that the point \(\eta=0\) corresponds to the usual formalism of literature [17], as (31) and (32) reproduce the known scaling of \(V_{k}\) and \(Z_{k}\). In other words, \(\eta=0\) must correctly reproduce the Reuter FP as in [17], characterized by \(g^{*}>0\) and \(\lambda^{*}>0\) and, in fact, within our computation, the Reuter FP with \(\eta=0\) is located at \(w^{*}=0.086\) and \(u^{*}=0.173\).
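As a quick numerical cross-check of the quoted location, setting \(\eta=0\) in Eqs. (43)-(44) and eliminating \(w\) gives \(u^{3}=1/192\) (a one-line algebra step of ours), which reproduces the values above:

```python
import math

u = (1 / 192) ** (1 / 3)           # u ~ 0.1733
w = 4 * u * math.exp(-12 * u)      # from Eq. (43) with eta = 0: e^{12u} = 4u/w,  w ~ 0.0866
print(u, w)

# residuals of Eqs. (43) and (44) at eta = 0 (both vanish up to rounding)
print(math.exp(12 * u) + 2 * u * (0 - 2) / w)
print((2 - 0) / w - 192 * math.exp(12 * u) * u**2 * (0 - 1) / (0 - 2))
```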
Although it seems natural to expect \(u^{*},w^{*}>0\), which in our computation is recovered only when \(\eta<1\), we do not have any argument to reject the negative solutions as long as they correspond to UV fixed points whose outgoing trajectories lead in the IR limit to physically acceptable positive values of \(\lambda\) and \(g\). Therefore the following mandatory step is the study of the eigenfunction spectrum of our flow equations.
### Eigenfunction spectrum
The set of eigenfunctions that stems from the resolution of the flow equations suitably linearized around a FP determines the renormalization properties of the model at that FP. In fact, as discussed in [27], after establishing the critical surface around the FP, spanned by the full set of UV-attractive, or relevant, directions, the continuum limit of the model is realized on the 'renormalized trajectories' (RTs), i.e. RG flow trajectories, which land on the critical surface in the continuum limit \(t\to\infty\). The form of the effective action in terms of eigenfunctions around a fixed point is
\[S_{\lambda}[\phi]=S_{*}[\phi]+\sum_{i=1}^{n}\alpha_{i}\,e^{-\theta_{i}t}\,\mathcal{O}_{i}[\phi]\,, \tag{45}\]
where \(S_{*}[\phi]\) is the FP action and the operators \({\cal O}_{i}[\phi]\) are the eigenperturbations and \(\theta_{i}\) the corresponding eigenvalues. The sign of \(\theta_{i}\) defines the nature of the eigenfunction, namely positive, negative or vanishing \(\theta_{i}\) correspond respectively to relevant, irrelevant or marginal operators. To explicitly determine the eigenvalues we express the variables \(Y(x,t)\) and \(z(x,t)\) as
\[Y(x,t)=Y^{*}(x)+\delta\,e^{-\theta t}\,h(x)\,, \tag{46}\] \[z(x,t)=z^{*}(x)+\delta\,e^{-\theta t}\,f(x)\,. \tag{47}\]
where \(h(x)\) and \(f(x)\) represent small perturbations around the FP solution \(Y^{*}\), \(z^{*}\) (which are related to \(u^{*}\), \(w^{*}\) through Eqs. (41), (42)) and \(\delta\) is a small number used to parametrize the expansion around the FP. Remarkably, as we already found out in the case of the FP, for a specific monomial form of \(f(x)\) and \(h(x)\), the linearized flow equations become field independent and the problem is again reduced to a set of algebraic equations.
In fact, we consider \(f(x)=s\,x^{p}\) and \(h(x)=r\,x^{q}\), with non-negative integers \(p\) and \(q\) and with constant \(s\) and \(r\). Then, with these prescriptions, the linearized equations (39) and (40) (i.e. the coefficients of the linear terms in \(\delta\) when Eqs. (39) and (40) are expanded in powers of \(\delta\), after the insertion of Eqs. (46), (47) ) become
\[4-\frac{\eta q}{2}-e^{12u^{*}}\,w^{*}\,\left(\frac{12\,s\,u^{*}\,x^{-q+p+4}}{r}+(q-1)\,q\right)=\theta \tag{48}\]
\[\frac{16\,(\eta-1)\,(q-1)\,q\,r\,e^{12u^{*}}\,u^{*}\,w^{*}\,(q+12u^{*}-2)\,x^{q-p-4}}{(\eta-2)\,s}+\left[384(\eta-2)(\eta-1)\,(u^{*})^{2}\,(6u^{*}+1)+16(\eta\,(3\eta-7)+6)\,p\,u^{*}+(\eta-2)^{2}p-(\eta-2)^{2}p^{2}\right]\,\frac{e^{12u^{*}}\,w^{*}}{(\eta-2)^{2}}-2-\frac{\eta}{2}\,(p+2)=\theta \tag{49}\]
Only if \(q=p+4\) these two equations become independent of \(x\) and therefore, the spectrum of eigenfunctions consists of the pairs \((f,h)=(s,r\,x^{4})\), \((f,h)=(s\,x^{2},r\,x^{6})\), \((f,h)=(s\,x^{4},r\,x^{8})\) and so on, where the coefficient \(s\) and \(r\) have to be determined separately for each eigenfunction from Eqs. (48) and (49).
Then, for each value of \(\eta\), which selects a FP solution \(u^{*}(\eta)\), \(w^{*}(\eta)\) according to Fig. 1, and for each eigenfunction \((f,h)\) we compute the parameters \(s\) and \(\theta\) from Eqs. (48) and (49) (note that in these equations the parameter \(r\) only appears in the ratio \((s/r)\) and therefore we can choose \(r=1\) to normalize the eigenfunction \((f,h)\)), whose non-linear structure provides either a pair of complex conjugate eigenvalues \(\theta\), \(\theta^{*}\), or a pair of real eigenvalues \(\theta_{1}\), \(\theta_{2}\). Numerous eigenfunctions were tested, up to \((f,h)=(s\,x^{1000},r\,x^{1004})\), and in all cases the complex eigenvalues are found in the ranges \(\eta<1\) and \(2<\eta\), while the real eigenvalues in the interval \(1<\eta<2\). These results are summarized in Figs. 2, 3.
In Fig. 2 the real and imaginary part of \(\theta\) corresponding to \(\eta<1\) (in blue), and to \(2<\eta\) (in red), are plotted, for the three eigenfunctions corresponding to \(q=4,6,8\) (we recall that the second solution for each value of \(\eta\) is \(\theta^{*}\)).
We observe that the eigenvalues associated with \(q=4\) are different from all the others as they have negative real part (which indicates irrelevant eigenfunctions) when \(2<\eta\) and \(0.96\simeq\eta_{c}<\eta<1\), and positive real part (relevant eigenfunctions) for \(\eta<\eta_{c}\). Conversely, all other cases with \(q>4\) show negative real part for \(\eta<1\), and positive for \(2<\eta\).
Remarkably, at \(\eta=0\), our numerical determination of the eigenvalue associated with \(q=4\) (the only one with positive real part), namely \(\theta_{1,2}=2.919\pm i\,3.923\), coincides with the eigenvalue of the asymptotically safe trajectory of the Reuter FP found in [17] for the R\({}^{4}\) projection with \(n=+\infty\).
Finally, in Fig. 3 the eigenvalues in the region \(1<\eta<2\) are displayed. In this case, the two complex conjugate solutions found in the other ranges of \(\eta\), are replaced by two real solutions plotted respectively on the \(x\) and \(y\) axis in the figure. Again, we observe a special role of the solutions with \(q=4\), because both real eigenvalues are positive for any value of \(\eta\) in this range. All other solutions, corresponding to \(q>4\), have only one positive eigenvalue in the restricted range \(1<\eta<\overline{\eta}\) (with \(\overline{\eta}\lesssim 2\)), while the remaining solutions are negative.
Then, we emphasize the striking feature emerging from Figs. 2-3, that at fixed \(\eta\), all eigenfunctions share the same behavior, with the only exception of the one with \(q=4\) which, in most cases with complex eigenvalues shows opposite sign of the real part of the eigenvalue with respect to the other eigenfunctions, while it shows at least one eigenvalue with opposite sign, in the case of real eigenvalues.
## 4 Conclusions
The approximation scheme adopted in this paper to analyze the RG flow of a generalization of the conformally reduced Einstein-Hilbert action, yields a rich picture of the UV critical manifold of the theory.
Specifically, in our approach the background field is treated in the so called single field approximation within the framework of the proper time flow, where the background field is eventually identified with the full field expectation. This scheme implies a distinction between the spectra of \(\dot{\square}\) and \(\dot{\square}\), and consequently it is possible to introduce the anomalous dimension \(\eta\) through Eqs. (10) and (11), and \(\eta\) is eventually determined by integrating over the fluctuations of the conformal factor.
Then, the generalization of the CREH action reported in Eq. (7), studied in this paper, generates a continuum of fixed points suitably parameterized by the anomalous dimension \(\eta\) in Fig. 1. Each fixed point possesses a discrete spectrum of eigenoperators as shown in Figs. 2 and 3.
On one hand, these findings represent the improvement produced by our approximation scheme with respect to the analysis of the plain CREH truncation [17] where a single FP, namely the Reuter FP related to the property of asymptotic safety, was spotted. On the other hand, the UV critical manifold illustrated here presents an evident dissimilarity with respect to the one derived in [21] where a full background independent approximation scheme (realized by resorting to modified split Ward identities) was adopted and a continuum of fixed points, supporting both a discrete and a continuous eigenoperator spectrum, is found. The partial incongruity of the two results, which is certainly to be ascribed to the different approaches adopted in the two cases, is in our opinion a clear indicator of the sensitivity of the predictions to the particular procedure and approximation scheme selected.
Turning to the results of the analysis discussed above, if we attempt to establish the physical significance of our solutions from the eigenfunction analysis, rather than from the determination of the sign of the FP shown in Fig. 1, we should discard FPs and eigenfunctions corresponding to \(\eta>2\), as in this case we find an infinite number of relevant directions (those with \(q>4\)) and just one irrelevant direction (\(q=4\)), and this would correspond to a non-predictive theory.
Conversely, the line of FPs in the range \(\eta<\eta_{c}\approx 0.96\), has the desired properties of yielding both positive Newton's Constant and Cosmological Constant, with complex eigenvalues associated to the eigenfunction solutions of the linearized flow equation around each FP, but with just one relevant solution (i.e. with positive real part of the eigenvalue) that produces the renowned spiral behavior. In particular, the FP at \(\eta=0\) correctly reproduces the main features of the Reuter FP. Moreover,
Figure 3: Eigenvalues for three different types of Eigenfunctions: \((f,h)=\{(s,r\,x^{4}),(s\,x^{2},r\,x^{6}),(s\,x^{4},r\,x^{8})\}\), for different values of the anomalous dimension \(1<\eta<2\). The two eigenvalues associated to each value of \(\eta\) within this range, are real and their values are respectively reported on the two axes of the plot.
it is worth noticing that the presence of an endpoint in a continuous line of FPs, analogous to \(\eta_{c}\) in our analysis, in the case of the two-dimensional Kosterlitz-Thouless transition is directly related to the universal, physically measurable, property of the spin-stiffness jump [28; 29].
Finally, the range \(1<\eta<2\) has infinite relevant directions (and therefore it corresponds again to a non-predictive theory that must be discarded), except for the small range \(\overline{\eta}<\eta<2\) where only the two real eigenvalues with \(q=4\) are positive. Therefore, this small range of \(\eta\) produces a peculiar structure, different from the spiral behavior so far observed, which certainly deserves a more accurate investigation, in order to clarify whether it corresponds to a genuine physical effect and not to an artifact of the adopted approximation.
## Acknowledgements
MC thanks the National Institute of Astrophysics (INAF) section of Catania together with the INAF - Catania Astrophysical Observatory for their hospitality during the preparation of the manuscript. MC acknowledges support from National Institute of Nuclear Physics (INFN). MC also wishes to thank Sergio Cacciatori for his invaluable help with the 'phiboxing'. This work has been carried out within the INFN project FLAG.
|
2301.13747 | Algebraic identities on q-harmonic numbers and q-binomial coefficients | The aim of this paper is to present a general algebraic identity. Applying
this identity, we provide several formulas involving the q-binomial
coefficients and the q-harmonic numbers. We also recover some known identities
including an algebraic identity of D. Y. Zheng on q-Ap\'{e}ry numbers and we
establish the q-analog of Euler's formula. The proposed results may have
important applications in the theory of q-supercongruences. | Said Zriaa, Mohammed Mouçouf | 2023-01-31T16:32:49Z | http://arxiv.org/abs/2301.13747v1 | # Algebraic identities on q-harmonic numbers and q-binomial coefficients
###### Abstract.
The aim of this paper is to present a general algebraic identity. Applying this identity, we provide several formulas involving the q-binomial coefficients and the q-harmonic numbers. We also recover some known identities including an algebraic identity of D. Y. Zheng on q-Apery numbers and we establish the q-analog of Euler's formula. The proposed results may have important applications in the theory of q-supercongruences.
Key words and phrases:Algebraic identities, q-binomial coefficients, q-harmonic numbers, complete Bell polynomials
## 1. Introduction
For an indeterminate \(x\) the q-shifted factorial is usually defined by
\[(x;q)_{0}=1\ \ \text{and}\ \ (x;q)_{n}=(1-x)(1-qx)\cdots(1-q^{n-1}x)\ \ \text{for}\ \ n=1,2,\ldots\]
The Gaussian q-binomial coefficient is correspondingly given by
\[\genfrac{[}{]}{0.0pt}{}{n}{j}=\frac{(q;q)_{n}}{(q;q)_{j}(q;q)_{n-j}}\ \ \text{for}\ \ j=0,1,2,\ldots,n,\]
The q-harmonic numbers are defined by
\[\mathcal{H}_{n}(q)=\sum_{k=1}^{n}\frac{1}{[k]_{q}}\ \ \text{for}\ \ n=1,2,\ldots,\]
where the q-numbers are given by
\[[k]\ \ \text{or}\ \ [k]_{q}:=\frac{1-q^{k}}{1-q}=1+q+\cdots+q^{k-1}\]
Following Comtet [7], the complete Bell polynomials can be explicitly expressed as
\[\mathbf{B}_{n}(x_{1},x_{2},\cdots,x_{n})=\sum_{m_{1}+2m_{2}+\cdots+nm_{n}=n} \frac{n!}{m_{1}!m_{2}!\cdots m_{n}!}\bigg{(}\frac{x_{1}}{1!}\bigg{)}^{m_{1}} \bigg{(}\frac{x_{2}}{2!}\bigg{)}^{m_{2}}\cdots\bigg{(}\frac{x_{n}}{n!}\bigg{)} ^{m_{n}}\]
During the last two decades, there has been an increasing interest in studying binomial sums and their q-analogues, one can consult recent papers, for example [6, 18, 19]. For a comprehensive account of the q-series and its applications to numbers theory, combinatorics and special functions, we refer the reader to the excellent monograph by G. Gasper and M. Rahman [8]. Recently, the study of the q-harmonic congruences turned into a very active research area (see e.g., [17] and the references therein).
There exist numerous combinatorial identities involving q-binomial coefficients in the mathematical literature (see e.g., [6, 12, 16, 18, 19]). Nowadays, there has been growing interest in deriving q-analogues of several combinatorial identities. This includes, for example, q-generalizations of some well known identities involving harmonic numbers.
The Apery numbers are defined by the following binomial sum
\[A(n)=\sum_{k=0}^{n}\binom{n}{k}^{2}\binom{n+k}{k}^{2}\]
These numbers have many interesting properties that make them extremely useful in the proof of the irrationality of \(\zeta(3)\) (see [3] for further details). Also the Apery numbers have remarkable arithmetic properties [9]. F. Beukers conjectured [4] that
\[A\bigg{(}\frac{p-1}{2}\bigg{)}\equiv a(p)\ \ (modp^{2})\]
where \(p\) is an odd prime and \(a(n)\) is determined by
\[\sum_{k=1}^{\infty}a(k)q^{k}:=q\prod_{k=1}^{\infty}(1-q^{2k})^{4}(1-q^{4k})^{ 4}=q-4q^{3}-2q^{5}+24q^{7}+\cdots.\]
Beukers' conjecture was later showed by S. Ahlgren, K. Ono [1], who reduce this statement in terms of the harmonic numbers \(H_{n}\) to the identity
\[\sum_{k=1}^{n}\binom{n}{k}^{2}\binom{n+k}{k}^{2}\bigg{(}1+2kH_{n+k}+2kH_{n-k}- 4kH_{k}\bigg{)}=0\]
In order to give a classical proof of the last identity, Chu [5] presented the following algebraic identity
\[\sum_{j=0}^{n}\binom{n}{j}^{2}\binom{n+j}{j}^{2}\bigg{(}\frac{-j}{(x+j)^{2}}+ \frac{1+2j(H_{n+j}-H_{j})+2j(H_{n-j}-H_{j})}{x+j}\bigg{)}=\frac{x(1-x)_{n}^{2} }{(x)_{n+1}^{2}}\]
which gives the desired formula in the limit.
In order to prove irrationality results on the q-analog of \(\zeta(3)\):
\[\zeta_{q}(3)=\sum_{k=1}^{+\infty}\frac{q^{k}(1+q^{k})}{(1-q^{k})^{3}}\]
The authors of [13] introduced implicitly a q-analog of the Apery numbers \(A_{q}^{KRZ}(n)\) and they showed that
\[A_{q}^{KRZ}(n)=\sum_{k=0}^{n}\frac{a_{q}(n,k)}{q^{k}}\]
where \(a_{q}(n,k)\) can be defined via the following q-partial fraction decomposition
\[\frac{(xq^{-n};q)_{n}^{2}}{(x;q)_{n+1}^{2}}=\sum_{k=0}^{n}\bigg{(}\frac{a_{q} (n,k)}{(1-q^{k}x)^{2}}+\frac{b_{q}(n,k)}{1-q^{k}x}\bigg{)} \tag{1.1}\]
A. Straub [17] showed via the partial fraction decomposition technique that \(A_{q}^{KRZ}(n)\) have the following explicit q-binomial representation
\[q^{n(2n+1)}A_{q}^{KRZ}(n)=\sum_{k=0}^{n}q^{(n-k)^{2}}\binom{n}{k}^{2}\binom{n +k}{k}^{2}\]
which reduces to Apery numbers when \(q\to 1\).
D. Y. Zheng [20] has recently introduced the q-Apery numbers
\[q^{n(n+1)}A_{q}^{KRZ}(n)=\sum_{k=0}^{n}q^{k(k-2n)}\binom{n}{k}^{2}\binom{n+k} {k}^{2}\]
then he established an interesting algebraic identity
\[\frac{x^{2n}(q/x;q)_{n}^{2}}{(1-x)(xq;q)_{n}^{2}}=\frac{1}{1-x}+\sum_{j=1}^{n }\binom{n}{j}^{2}q^{j(j-2n)}\bigg{(}\frac{q^{j}-1}{(1-xq^{j})^{2}}+\frac{1-4[j ]\mathcal{H}_{j}(q)+2[j]\mathcal{H}_{n+j}(q)+2q[j]\mathcal{H}_{n-j}(q^{-1})}{ 1-xq^{j}}\bigg{)} \tag{1.2}\]
which is a q-extension of Chu's identity, and obtained the identity
\[\sum_{k=0}^{n}q^{k(k-2n)}\binom{n}{k}^{2}\binom{n+k}{k}^{2}\bigg{(}2\mathcal{ H}_{k}(q)-\mathcal{H}_{n+k}(q)-q\mathcal{H}_{n-k}(q^{-1})\bigg{)}=0\]
The purpose of this paper is to establish and develop important algebraic identities involving q-harmonic numbers and q-binomial coefficients, which may have important applications in the theory of q-supercongruences.
## 2. Some identities of q-binomial coefficients
We now state and prove one of the main results of this paper.
**Theorem 2.1**.: _Let \(\alpha_{1},\alpha_{2},\ldots,\alpha_{s}\) be distinct elements of \(\mathbb{C}\). For a positive integer \(m\) let \(P(x)=(x-\alpha_{1})^{m}(x-\alpha_{2})^{m}\cdots(x-\alpha_{s})^{m}\). For any polynomial \(Q(x)\) such that \(\deg(Q)<\deg(P)\), we have_
\[\frac{Q(x)}{P(x)}=\sum_{j=1}^{s}\sum_{i=0}^{m-1}\sum_{k=0}^{m-1-i}\frac{(-1)^{k }g_{j}(\alpha_{j})\mathbf{B}_{k}(x_{1},\cdots,x_{k})Q^{(i)}(\alpha_{j})}{i!k!( x-\alpha_{j})^{m-i-k}}. \tag{2.1}\]
_where_
\[x_{l}=m(l-1)!\sum_{i=1,i\neq j}^{s}\frac{1}{(\alpha_{j}-\alpha_{i})^{l}}\ \text{ and }\ g_{j}(x)=\prod_{i=1,i\neq j}^{s}(x-\alpha_{i})^{-m}.\]
Proof.: Following [14, Eq.4], we can write
\[Q(x)=\sum_{j=1}^{s}\sum_{i=0}^{m-1}\frac{1}{i!}Q^{(i)}(\alpha_{j})L_{ji}(x)[P].\]
By virtue of Equation (2) of [14], it is clear that
\[L_{ji}(x)[P]=P(x)\sum_{k=0}^{m-1-i}\frac{g_{j}^{(k)}(\alpha_{j})}{k!(x-\alpha_ {j})^{m-i-k}}\]
If we combine the two previous identities, we get
\[\frac{Q(x)}{P(x)}=\sum_{j=1}^{s}\sum_{i=0}^{m-1}\sum_{k=0}^{m-1-i}\frac{g_{j} ^{(k)}(\alpha_{j})Q^{(i)}(\alpha_{j})}{i!k!(x-\alpha_{j})^{m-i-k}}.\]
Note that
\[g_{j}(x)=\phi(f_{j}(x)),\]
where \(\phi(x)=\exp(mx)\) and \(f_{j}(x)=\ln(\prod_{i=1,i\neq j}^{s}(x-\alpha_{i})^{-1})\). Then \(\phi^{(k)}(x)=m^{k}\exp(mx)\) and, for \(k\geq 1\), \(f_{j}^{(k)}(x)=(-1)^{k}(k-1)!\mathcal{H}_{k,\alpha_{i}[j]}(x)\), where \(\mathcal{H}_{l,\alpha_{i}[j]}(x)=\sum_{i=1,i\neq j}^{s}\frac{1}{(x-\alpha_{i})^{l}}\). By the Faà di Bruno formula, we can easily prove that
\[g_{j}^{(k)}(x)=(-1)^{k}g_{j}(x)\sum_{m_{1}+2m_{2}+\cdots+km_{k}=k}\frac{k!}{m _{1}!m_{2}!\cdots m_{k}!}\prod_{l=1}^{k}\left(\frac{m(l-1)!\mathcal{H}_{l, \alpha_{i}[j]}(x)}{l!}\right)^{m_{l}}\]
In particular
\[g_{j}^{(k)}(\alpha_{j})=(-1)^{k}g_{j}(\alpha_{j})\mathbf{B}_{k}(x_{1},\cdots, x_{k})\]
This gives the required result.
Taking \(\alpha_{i}=q^{-i}\) in Theorem (2.1), we obtain after some minor manipulations the following theorem.
**Theorem 2.2**.: _Let \(m\) and \(n\) be positive integers and let \(Q(x)\) be a polynomial such that \(\deg(Q)<(n+1)m\). Then we have_
\[\frac{(q;q)^{m}_{n}Q(x)}{(x;q)^{m}_{n+1}}=\sum_{j=0}^{n}{n\brack j}^{m}q^{m{j+ 1\choose 2}}\sum_{i=0}^{m-1}\sum_{k=0}^{m-1-i}\frac{(-1)^{mj+i}\mathbf{B}_{k}(x _{1},x_{2},\cdots,x_{k})Q^{(i)}(q^{-j})}{i!k!q^{j(i+k)}(1-xq^{j})^{m-i-k}}.\]
_where_
\[x_{1}=\frac{q^{j}m}{1-q}\bigg{(}\mathcal{H}_{j}(q)-q\mathcal{H}_{n-j}(q^{-1}) \bigg{)}\]
_and_
\[x_{l}=m(l-1)!\sum_{i=0,i\neq j}^{n}\frac{q^{jl}}{(1-q^{j-i})^{l}}\ \ \text{for}\ \ l=1,2,\ldots,m-1,\]
In view of Theorem (2.2), we establish the following interesting corollaries.
**Corollary 2.3**.: _Let \(n\) be a positive integer and let \(Q(x)\) be a polynomial such that \(\deg(Q)<2(n+1)\). We have_
\[\frac{(q;q)_{n}^{2}Q(x)}{(x;q)_{n+1}^{2}}=\sum_{j=0}^{n}\genfrac{[}{]}{0.0pt}{}{ n}{j}^{2}q^{j(j+1)}\bigg{(}\frac{Q(q^{-j})}{(1-xq^{j})^{2}}-\frac{q^{-j}Q^{{}^{ \prime}}(q^{-j})}{(1-xq^{j})}+\frac{2Q(q^{-j})}{(1-xq^{j})}\bigg{(}\frac{ \mathcal{H}_{j}(q)-q\mathcal{H}_{n-j}(q^{-1})}{1-q}\bigg{)}\bigg{)}\]
**Identity 2.4**.: _Setting \(Q(x)=1\) in the last corollary, we obtain_
\[\frac{(q;q)_{n}^{2}}{(x;q)_{n+1}^{2}}=\sum_{j=0}^{n}\genfrac{[}{]}{0.0pt}{}{n}{ j}^{2}q^{j(j+1)}\bigg{(}\frac{1}{(1-xq^{j})^{2}}+\frac{2}{1-xq^{j}}\bigg{(} \frac{\mathcal{H}_{j}(q)-q\mathcal{H}_{n-j}(q^{-1})}{1-q}\bigg{)}\bigg{)}\]
_In particular_
\[(q;q)_{n}^{2}=\sum_{j=0}^{n}\genfrac{[}{]}{0.0pt}{}{n}{j}^{2}q^{j(j+1)}\bigg{(} 1+2\bigg{(}\frac{\mathcal{H}_{j}(q)-q\mathcal{H}_{n-j}(q^{-1})}{1-q}\bigg{)} \bigg{)}\]
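As an illustrative aside (assuming SymPy, and not part of the original derivation), Identity 2.4 can be verified symbolically for a small value of \(n\):

```python
# Symbolic check of Identity 2.4 for n = 3.
import sympy as sp

x, q = sp.symbols('x q')

def qpoch(a, n):
    """q-Pochhammer symbol (a;q)_n."""
    return sp.prod([1 - a * q**i for i in range(n)])

def qbinom(n, k):
    return sp.cancel(qpoch(q, n) / (qpoch(q, k) * qpoch(q, n - k)))

def Hq(n, base):
    """q-harmonic number: sum of 1/[k]_base for k = 1..n."""
    return sum((1 - base) / (1 - base**k) for k in range(1, n + 1))

n = 3
lhs = qpoch(q, n)**2 / qpoch(x, n + 1)**2
rhs = sum(qbinom(n, j)**2 * q**(j * (j + 1)) *
          (1 / (1 - x * q**j)**2 +
           (2 / (1 - x * q**j)) * (Hq(j, q) - q * Hq(n - j, 1 / q)) / (1 - q))
          for j in range(n + 1))
assert sp.simplify(lhs - rhs) == 0
print("Identity 2.4 holds symbolically for n = 3")
```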
**Identity 2.5**.: \[\frac{x^{2n}(q/x;q)_{n}^{2}}{(1-x)(xq;q)_{n}^{2}}=\frac{1}{1-x}+\sum_{j=1}^{n} \genfrac{[}{]}{0.0pt}{}{n}{j}^{2}q^{j(j-2n)}\bigg{(}\frac{q^{j}-1}{(1-xq^{j})^ {2}}+\frac{1-4[j]\mathcal{H}_{j}(q)+2[j]\mathcal{H}_{n+j}(q)+2q[j]\mathcal{H}_ {n-j}(q^{-1})}{1-xq^{j}}\bigg{)}\]
_This identity recovers Zheng identity (1.2)._
Proof.: Let
\[Q(x)=(1-x)(x-q)^{2}\cdots(x-q^{n})^{2}=(1-x)x^{2n}(q/x;q)_{n}^{2}\]
It is not difficult to verify that
\[\frac{Q(x)}{(x;q)_{n+1}^{2}}=\frac{x^{2n}(q/x;q)_{n}^{2}}{(1-x)(xq;q)_{n}^{2}} \ \,\ \ Q(q^{-j})=q^{-j(2n+1)}(q^{j}-1)(q;q)_{n}^{2}\genfrac{[}{]}{0.0pt}{}{n+j}{j}^{2}\]
and
\[Q^{{}^{\prime}}(q^{-j})=q^{-2nj}(q;q)_{n}^{2}\genfrac{[}{]}{0.0pt}{}{n+j}{j}^{ 2}\bigg{(}-1-2[j]\bigg{(}\mathcal{H}_{n+j}(q)-\mathcal{H}_{j}(q)\bigg{)}\bigg{)}\]
Applying Corollary (2.3), we get after some simplifications the desired identity.
**Identity 2.6**.: \[\frac{(xq^{-n};q)_{n}^{2}}{(x;q)_{n+1}^{2}}=\sum_{j=0}^{n}\genfrac{[}{]}{0.0pt }{}{n}{j}^{2}\genfrac{[}{]}{0.0pt}{}{n+j}{j}^{2}q^{j(j+1)-n(n+2j+1)}\bigg{(} \frac{1}{(1-xq^{j})^{2}}+\frac{4\mathcal{H}_{j}(q)-2q\mathcal{H}_{n-j}(q^{-1}) -2\mathcal{H}_{n+j}(q)}{(1-q)(1-xq^{j})}\bigg{)}\]
_This identity gives the explicit representation of (1.1)._
Proof.: Let \(Q(x)=(xq^{-n};q)_{n}^{2}\). We have \(Q(q^{-j})=q^{-n(n+2j+1)}\genfrac{[}{]}{0.0pt}{}{n+j}{j}^{2}\) and
\[Q^{{}^{\prime}}(q^{-j})=2q^{-n(n+2j+1)+j}\genfrac{[}{]}{0.0pt}{}{n+j}{j}^{2} \bigg{(}\frac{\mathcal{H}_{n+j}(q)-\mathcal{H}_{j}(q)}{1-q}\bigg{)}\]
Using Corollary (2.3), the identity follows.
**Identity 2.7**.: _Letting \(x=0\) in the last identity, we obtain_
\[\sum_{j=0}^{n}\genfrac{[}{]}{0.0pt}{}{n}{j}^{2}\genfrac{[}{]}{0.0pt}{}{n+j}{j}^ {2}q^{j(j+1)-2nj}\bigg{(}1-q+4\mathcal{H}_{j}(q)-2q\mathcal{H}_{n-j}(q^{-1})-2 \mathcal{H}_{n+j}(q)\bigg{)}=q^{n(n+1)}(1-q)\]
**Corollary 2.8**.: _Let \(m\) and \(n\) be positive integers and let \(l\) be an integer such that \(0\leq l<(n+1)m\). Then we have_
\[\frac{(q;q)_{n}^{m}x^{l}}{(x;q)_{n+1}^{m}}=\sum_{j=0}^{n}\genfrac{[}{]}{0.0pt}{}{n}{ j}^{m}q^{m\binom{j+1}{2}}\sum_{i=0}^{m-1}\sum_{k=0}^{m-1-i}\binom{l}{i}\frac{(-1)^{ mj+i}\mathbf{B}_{k}(x_{1},x_{2},\cdots,x_{k})}{k!q^{j(k+l)}(1-xq^{j})^{m-i-k}}.\]
_where_
\[x_{l}=m(l-1)!\sum_{i=0,i\neq j}^{n}\frac{q^{jl}}{(1-q^{j-i})^{l}}\]
**Corollary 2.9**.: _Let \(m\) and \(n\) be positive integers. We have_
\[\frac{(q;q)_{n}^{m}}{(x;q)_{n+1}^{m}}=\sum_{j=0}^{n}\genfrac{[}{]}{0.0pt}{}{n}{ j}^{m}q^{m\binom{j+1}{2}}\sum_{k=0}^{m-1}\frac{(-1)^{mj}\mathbf{B}_{k}(x_{1},x_{2}, \cdots,x_{k})}{k!q^{jk}(1-xq^{j})^{m-k}}.\]
_where_
\[x_{l}=m(l-1)!\sum_{i=0,i\neq j}^{n}\frac{q^{jl}}{(1-q^{j-i})^{l}}\]
**Corollary 2.10**.: _Let \(Q(x)\) be a polynomial such that \(\deg(Q)<n+1\). Then we have_
\[\frac{(q;q)_{n}Q(x)}{(x;q)_{n+1}}=\sum_{j=0}^{n}(-1)^{j}\genfrac{[}{]}{0.0pt}{}{n}{j}q^{\binom{j+1}{2}}\frac{Q(q^{-j})}{(1-xq^{j})}.\]
The limiting case of Theorem (2.2) is the following
**Theorem 2.11**.: _Let \(m\) and \(n\) be two positive integers. Let \(Q(x)\) be a polynomial of degree \(l\) with leading coefficient \(a_{l}\). Then we have the following curious identity:_
\[\sum_{j=0}^{n}\genfrac{[}{]}{0.0pt}{}{n}{j}^{m}q^{m\binom{j}{2}}\sum_{i=0}^{m- 1}\frac{(-1)^{mj+i+1}\mathbf{B}_{m-1-i}(x_{1},x_{2},\cdots,x_{m-1-i})Q^{(i)}(q ^{-j})}{i!(m-1-i)!}=\]
\[\begin{cases}0&\text{if}\quad 0\leq l<m(n+1)-1,\\ (-1)^{m(n+1)}(q;q)_{n}^{m}q^{-m\binom{n+1}{2}}a_{l}&\text{if}\quad l=m(n+1)-1. \end{cases}\]
_where_
\[x_{l}=m(l-1)!\sum_{i=0,i\neq j}^{n}\frac{q^{jl}}{(1-q^{j-i})^{l}}\]
When \(m=1\), the formula of Theorem (2.11) reads explicitly as
**Corollary 2.12**.: _Let \(n\) be a positive integer and \(Q(x)\) be a polynomial of degree \(l\) with leading coefficient \(a_{l}\). Then we have the following identity:_
\[\sum_{j=0}^{n}\genfrac{[}{]}{0.0pt}{}{n}{j}q^{\binom{j}{2}}(-1)^{j}Q(q^{-j})= \begin{cases}0&\text{if}\quad 0\leq l<n,\\ (-1)^{n}(q;q)_{n}q^{-\binom{n+1}{2}}a_{l}&\text{if}\quad l=n.\end{cases}\]
**Identity 2.13**.: _Letting \(Q(x)=(x-1)^{l}\) in the last corollary, we obtain, after some simple calculations, that_
\[\sum_{j=0}^{n}\genfrac{[}{]}{0.0pt}{}{n}{j}q^{\binom{j}{2}-jl}(-1)^{j}[j]^{l}= \begin{cases}0&\text{if}\quad 0\leq l<n,\\ (-1)^{n}\frac{(q;q)_{n}}{(1-q)^{n}}q^{-\binom{n+1}{2}}&\text{if}\quad l=n.\end{cases}\]
_We remark that in the limiting case \(q\to 1\) of this identity, we obtain the famous Euler formula [2, 10, 11, 15]. Therefore, this identity is a q-analog of Euler's formula:_
\[\sum_{j=0}^{n}\binom{n}{j}(-1)^{j}j^{l}=\begin{cases}0&\text{if}\quad 0\leq l<n,\\ (-1)^{n}n!&\text{if}\quad l=n.\end{cases}\]
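As an aside (assuming SymPy, not part of the original text), the q-analog above can be checked symbolically for a small \(n\), covering both the vanishing cases and the case \(l=n\):

```python
# Symbolic check of the q-analog of Euler's formula for n = 3.
import sympy as sp

q = sp.symbols('q')

def qbinom(n, k):
    num = sp.prod([1 - q**(n - i) for i in range(k)])
    den = sp.prod([1 - q**(i + 1) for i in range(k)])
    return sp.cancel(num / den)

def qint(j):
    """q-integer [j] = (1 - q**j)/(1 - q)."""
    return sp.cancel((1 - q**j) / (1 - q))

def lhs(n, l):
    terms = [qbinom(n, j) * q**(sp.binomial(j, 2) - j * l) * (-1)**j *
             (qint(j)**l if l else 1) for j in range(n + 1)]
    return sp.simplify(sum(terms))

n = 3
for l in range(n):                                  # 0 <= l < n: the sum vanishes
    assert lhs(n, l) == 0
qpoch_n = sp.prod([1 - q**(i + 1) for i in range(n)])   # (q;q)_n
rhs = (-1)**n * qpoch_n / (1 - q)**n * q**(-sp.binomial(n + 1, 2))
assert sp.simplify(lhs(n, n) - rhs) == 0
print("q-analog of Euler's formula checked for n = 3")
```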
Choosing \(m=2\) in Theorem (2.11), we obtain the following important result
**Corollary 2.14**.: _Let \(n\) be a positive integer and \(Q(x)\) be a polynomial of degree \(l\) with leading coefficient \(a_{l}\). Then the following identity holds:_
\[\sum_{j=0}^{n}\genfrac{[}{]}{0.0pt}{}{n}{j}^{2}q^{j(j-1)}\bigg{(}Q^{{}^{\prime}}( q^{-j})-\frac{2q^{j}Q(q^{-j})}{1-q}\bigg{(}\mathcal{H}_{j}(q)-q\mathcal{H}_{n-j}(q ^{-1})\bigg{)}\bigg{)}=\begin{cases}0&\text{if}\quad 0\leq l<2n+1,\\ (q;q)_{n}^{2}q^{-n(n+1)}a_{l}&\text{if}\quad l=2n+1.\end{cases}\]
**Identity 2.15**.: _If \(Q(x)=1\), then the formula of Corollary (2.14) gives_
\[\sum_{j=0}^{n}\genfrac{[}{]}{0.0pt}{}{n}{j}^{2}q^{j^{2}}\bigg{(}\mathcal{H}_{j}(q)-q\mathcal{H}_{n-j}(q^{-1})\bigg{)}=0\]
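A short symbolic check of this vanishing sum for small \(n\) (an illustrative aside assuming SymPy):

```python
# Check that the sum in Identity 2.15 vanishes for n = 1..4.
import sympy as sp

q = sp.symbols('q')

def qbinom(n, k):
    num = sp.prod([1 - q**(n - i) for i in range(k)])
    den = sp.prod([1 - q**(i + 1) for i in range(k)])
    return sp.cancel(num / den)

def Hq(n, base):
    """q-harmonic number: sum of 1/[k]_base for k = 1..n."""
    return sum((1 - base) / (1 - base**k) for k in range(1, n + 1))

for n in range(1, 5):
    total = sum(qbinom(n, j)**2 * q**(j**2) *
                (Hq(j, q) - q * Hq(n - j, 1 / q)) for j in range(n + 1))
    assert sp.simplify(total) == 0
print("Identity 2.15 verified for n = 1..4")
```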
**Identity 2.16**.: _Choosing \(Q(x)=(1-x)(x-q)^{2}\cdots(x-q^{n})^{2}\) in Corollary (2.14), we obtain_
\[\sum_{j=0}^{n}\genfrac{[}{]}{0.0pt}{}{n}{j}^{2}\genfrac{[}{]}{0.0pt}{}{n+j}{j}^{2}q^{j(j-1)-2nj}\bigg{(}1+2[j]\mathcal{H}_{n+j}(q)-4[j]\mathcal{H}_{j}(q)+2q[j]\mathcal{H}_{n-j}(q^{-1})\bigg{)}=q^{-n(n+1)}\]
We now establish some further identities by means of Corollary (2.10).
**Theorem 2.17**.: _Let \(n\) be a positive integer and \(y\) any complex number. Then the following identity holds true:_
\[\sum_{j=0}^{n}(-1)^{j}\genfrac{[}{]}{0.0pt}{}{n}{j}q^{\binom{j+1}{2}-jn}\frac{ (y;q)_{n+j}}{(y;q)_{j}(1-xq^{j})}=\frac{(q;q)_{n}(y/x;q)_{n}x^{n}}{(x;q)_{n+1}}.\]
Proof.: Let \(Q(x)\) be the polynomial
\[Q(x)=(x-y)(x-qy)\cdots(x-yq^{n-1})=(y/x;q)_{n}x^{n}\]
Since
\[Q(q^{-j})=(yq^{j};q)_{n}q^{-jn}\ \ \text{and}\ \ (yq^{j};q)_{n}=\frac{(y;q)_{n +j}}{(y;q)_{j}}\]
we get from Corollary (2.10) the desired formula.
Letting \(y=q\) in the last theorem, we obtain the following result
**Theorem 2.18**.: _Let \(n\) be a positive integer. Then the following identities hold true:_
\[\sum_{j=0}^{n}(-1)^{j}\genfrac{[}{]}{0.0pt}{}{n}{j}\genfrac{[}{]}{0.0pt}{}{n+j }{j}\frac{q^{\binom{j+1}{2}-jn}}{(1-xq^{j})}=\frac{x^{n}(q/x;q)_{n}}{(x;q)_{n+1}}. \tag{1}\]
\[\sum_{j=0}^{n}(-1)^{j}\genfrac{[}{]}{0.0pt}{}{n}{j}\genfrac{[}{]}{0.0pt}{}{n+j }{j}\frac{q^{\binom{j+1}{2}-jn}}{(1-q^{j+l})}=0. \tag{2}\]
_for \(l=1,2,\ldots,n\)._
\[\sum_{j=0}^{n-1}(-1)^{n+1+j}\genfrac{[}{]}{0.0pt}{}{n}{j}\genfrac{[}{]}{0.0pt}{}{n+j}{j}\frac{(1-q^{n+l})}{(1-q^{j+l})}q^{\binom{j+1}{2}+n^{2}-jn}=\genfrac{[}{]}{0.0pt}{}{2n}{n}q^{\binom{n+1}{2}}. \tag{3}\]
_for \(l=1,2,\ldots,n\)._
\[q^{\binom{n+1}{2}}=\sum_{j=0}^{n}(-1)^{n-j}\genfrac{[}{]}{0.0pt}{}{n}{j} \genfrac{[}{]}{0.0pt}{}{n+j}{j}q^{\binom{j+1}{2}-jn}. \tag{4}\]
Setting \(x=a\) in the first statement of Theorem (2.18), we recover Theorem 2 of [12].
The following result is an extension of Theorem 5 of [19].
**Theorem 2.19**.: _Let \(n\) be a positive integer. For \(m=1,2,\ldots,n\), we have_
\[\sum_{j=0}^{n}(-1)^{j}\genfrac{[}{]}{0.0pt}{}{n}{j}\genfrac{[}{]}{0.0pt}{}{n+j}{j}q^{\binom{j}{2}-jn}(\mathcal{H}_{m+j,q}(x)-\mathcal{H}_{j,q}(x))=(-1)^{n}q^{\binom{n+1}{2}}\sum_{i=1}^{m}q^{i}\frac{(xq^{i-n};q)_{n}}{(xq^{i};q)_{n+1}}.\]
_where_
\[\mathcal{H}_{n,q}(x)=\sum_{i=1}^{n}\frac{q^{i}}{1-xq^{i}}\]
Proof.: By the first identity of Theorem (2.18), it is easy to check
\[\sum_{j=0}^{n}(-1)^{j}\genfrac{[}{]}{0.0pt}{}{n}{j}\genfrac{[}{]}{0.0pt}{}{n+j }{j}q^{\binom{j}{2}-jn}(\mathcal{H}_{m+j,q}(x)-\mathcal{H}_{j,q}(x))=\sum_{i= 1}^{m}q^{i}\frac{(xq^{i})^{n}(q/xq^{i};q)_{n}}{(xq^{i};q)_{n+1}}.\]
By means of
\[(q/xq^{i};q)_{n}=(-1)^{n}q^{n-in}x^{-n}q^{\binom{n}{2}}(xq^{i-n};q)_{n}\]
we deduce the result.
## Declarations
### Ethical Approval
Not applicable.
### Competing interests
No potential conflict of interest was reported by the authors.
### Authors' contributions
The authors contributed equally.
### Funding
This research received no funding.
### Availability of data and materials
Not applicable.
|
2306.17586 | Reconstructing the mid-infrared environment in the stellar merger
remnant V838 Monocerotis | V838 Mon is a stellar merger remnant that erupted in 2002 in a luminous red
novae event. Although it is well studied in the optical, near infrared and
submillimeter regimes, its structure in the mid-infrared wavelengths remains
elusive. We observed V838 Mon with the MATISSE (LMN bands) and GRAVITY (K band)
instruments at the VLTI and also the MIRCX/MYSTIC (HK bands) instruments at the
CHARA array. We geometrically modelled the squared visibilities and the closure
phases in each of the bands to obtain constraints on physical parameters.
Furthermore, we constructed high resolution images of V838 Mon in the HK bands,
using the MIRA and SQUEEZE algorithms to study the immediate surroundings of
the star. Lastly, we also modelled the spectral features seen in the K and M
bands at various temperatures. The image reconstructions show a bipolar
structure that surrounds the central star in the post merger remnant. In the K
band, the super resolved images show an extended structure (uniform disk
diameter $\sim 1.94$ mas) with a clumpy morphology that is aligned along a
north-west position angle (PA) of $-40^\circ$. Whereas in the H band, the
extended structure (uniform disk diameter $\sim 1.18$ mas) lies roughly along
the same PA. However, the northern lobe is slightly misaligned with respect to
the southern lobe, which results in the closure phase deviations. The VLTI and
CHARA imaging results show that V838 Mon is surrounded by features that
resemble jets that are intrinsically asymmetric. This is also confirmed by the
closure phase modelling. Further observations with VLTI can help to determine
whether this structure shows any variation over time, and also if such bipolar
structures are commonly formed in other stellar merger remnants. | Muhammad Zain Mobeen, Tomasz Kamiński, Alexis Matter, Markus Wittkowski, John D. Monnier, Stefan Kraus, Jean-Baptiste Le Bouquin, Narsireddy Anugu, Theo Ten Brummelaar, Claire L. Davies, Jacob Ennis, Tyler Gardner, Aaron Labdon, Cyprien Lanthermann, Gail H. Schaefer, Benjamin R. Setterholm, Nour Ibrahim, Steve B. Howell | 2023-06-30T12:06:42Z | http://arxiv.org/abs/2306.17586v1 | # Reconstructing the mid-infrared environment in the stellar merger remnant V838 Monocerotis
###### Abstract
Context: V838 Mon is a stellar merger remnant that erupted in 2002 in a luminous red nova event. Although it is well studied in the optical, near-infrared, and submillimeter regimes, its structure at mid-infrared wavelengths remains elusive. Over the past two decades only a handful of mid-infrared interferometric studies have been carried out, and they suggest the presence of an elongated structure at multiple wavelengths. However, given the limited nature of these observations, the true morphology of the source could not be determined.
Aims: By performing image reconstruction using observations taken at the VLTI and CHARA, we aim to map out the circumstellar environment in V838 Mon.
Methods: We observed V838 Mon with the MATISSE (\(LMN\) bands) and GRAVITY (\(K\) band) instruments at the VLTI, and also with the MIRCX/MYSTIC (\(HK\) bands) instruments at the CHARA array. We geometrically modelled the squared visibilities and the closure phases in each of the bands to obtain constraints on physical parameters. Furthermore, we constructed high-resolution images of V838 Mon in the \(HK\) bands using the MIRA and SQUEEZE algorithms to study the immediate surroundings of the star. Lastly, we also modelled the spectral features seen in the \(K\) and \(M\) bands at various temperatures.
Results: The image reconstructions show a bipolar structure that surrounds the central star in the post-merger remnant. In the \(K\) band, the super-resolved images show an extended structure (uniform disk diameter \(\sim\) 1.94 mas) with a clumpy morphology that is aligned along a north-west position angle (PA) of \(-40^{\circ}\). In the \(H\) band, the extended structure (uniform disk diameter \(\sim\) 1.18 mas) lies roughly along the same PA; however, the northern lobe is slightly misaligned with respect to the southern lobe, which results in the closure phase deviations.
Conclusions: The VLTI and CHARA imaging results show that V838 Mon is surrounded by features that resemble jets and that are intrinsically asymmetric. This is further confirmed by the closure phase modelling. Further observations with VLTI can help to determine whether this structure shows any variation over time and also whether such bipolar structures are commonly formed in other stellar merger remnants.
## 1 Introduction
At the start of 2002 V838 Monocerotis erupted in a luminous red nova event (Munari et al. 2002b; Tylenda 2005) and in a few weeks brightened by almost two orders of magnitude, finally reaching a peak luminosity of \(10^{6}L_{\odot}\)(Tylenda 2005; Sparks et al. 2008; Bond et al. 2003). The event is thought to have been the result of a stellar merger. According to the scenario proposed in Tylenda & Soker (2006), an \(8~{}M_{\odot}\) B-type main sequence star coalesced with a \(0.4~{}M_{\odot}\) young stellar object. The outburst was soon followed by a gradual decrease in temperature, and its spectra soon evolved to that of a late M-type supergiant (Evans et al. 2003; Loebman et al. 2015). Spectra taken in the 2000s revealed the presence of various molecules in V838 Mon, including water and transition-metal oxides (Banerjee & Ashok 2002; Kaminski et al. 2009). Dust was also observed to be produced in the post merger environment (Wisniewski et al. 2008; Kaminski et al. 2021). Additionally, a B-type companion was observed in the vicinity of the central merger remnant, which thus suggests that the merger had taken place in a hierarchical triple system (Munari et al. 2002a; Kaminski et al. 2021). The companion was obscured by dust formed in the aftermath of the 2002 eruption (Tylenda et al. 2009). V838 Mon is the best studied luminous red nova in the Milky Way, although, many others have also been
found within the Galaxy as well as elsewhere in the Local Group (Pastorello et al., 2019).
As the merger remnant in V838 Mon is enshrouded by dust, it is therefore an ideal target for mid-infrared interferometric studies. The first of these studies was conducted by Lane et al. (2005) in which they observed V838 Mon using the Palomar Testbed Interferometer (PTI). By modelling the squared visibilities in the \(K\)-band at \(2.2\,\mu\)m they were able to measure the size of the merger remnant of \(1.83\pm 0.06\) mas. There were also hints of asymmetries in the object, but due to scarce measurements these could not be confirmed. Chesneau et al. (2014) followed up these measurements between 2011 and 2014, using the Very Large Telescope Interferometer (VLTI) instruments: Astronomical Multi-BEam combineR (AMBER; Petrov et al., 2007) in \(H\) and \(K\) bands, and the MID-infrared Interferometric instrument (MIDI; Leinert et al., 2003) in \(N\) band. Fitting uniform disk models to the AMBER measurements gave an angular diameter of \(1.15\pm 0.2\) mas, which -according to the authors - indicates that the photosphere in V838 Mon had contracted by about 40% over the course of a decade. Also, their modeling of the AMBER data suggests that an extended component was present in the system, with a lower limit on the full width at half-maximum (FWHM) of \(\sim\)20 mas. Modeling of the MIDI measurements seems to point towards the presence of a dusty elongated structure whose major axis varies as a function of wavelength between 25 and 70 mas in \(N\) band. Submillimeter observations obtained with the Atacama Large Millimeter/sub millimeter Array (ALMA) in continuum revealed the presence of a flattened structure with a FWHM of 17.6\(\times\)17.6 mas surrounding V838 Mon (Kaminski et al., 2021). Recent \(L\) band measurements by Mobeen et al. (2021) also seem to paint a similar picture. Mobeen et al. (2021) geometrically modelled the squared visibilities and closure phases in the \(L\)-band, obtained using the Multi AperTure mid-Infrared Spectroscopic Experiment instrument (MATISSE) at the VLTI in 2020. They found that the structure in the \(L\)-band is well represented by an elliptical disk tilted at an angle of \(-40^{\circ}\). Furthermore, the closure phases showed small but non-zero deviations, which suggest the presence of asymmetries in the system. The interferometric measurements span across the wavebands (from 2.2 \(\mu\)m to 1.3 mm) and trace a dusty structure oriented roughly along the same direction, with PA in the range \(-10^{\circ}\) (MIDI) to \(-50^{\circ}\) (ALMA). This might indicate either a single overarching structure in the post-merger remnant, or multiple similarly aligned structures. Simulations of stellar merger events also suggest the presence of a disk like structure in the post-merger remnant, which is thought to be a reservoir for the pre-merger binary angular momentum (e.g. Webbink, 1976; Pejcha et al., 2017).
V838 Mon serves as an excellent source to advance our understanding of the post-merger environment decades after the luminous red nova event. Thus, it provides us with crucial insights into the physical processes at play in these merger events and their final products in the long term. In this paper, we analyze and interpret recent interferometric observations obtained with a variety of instruments that span many near to mid infrared wavelengths.
The format of the paper is as follows. In Sect. 2 we present all of our VLTI and CHARA observations and outline the main steps of the data reduction. We also analyze and interpret recent optical speckle interferometric observations obtained at 562 nm and 832 nm. In Sect. 3 we mainly present the results of geometrically modelling the interferometric observables (squared visibilities and closure phases) observed with the MATISSE and GRAVITY instruments at VLTI and with MIRCX/MYSTIC at CHARA. Sect. 4 centers on our image reconstruction attempts for the VLTI and CHARA datasets using two distinct image reconstruction algorithms. The modelling and imaging results are discussed in depth in Sect. 5, followed by Sect. 6, in which we present the main conclusions of this study.
## 2 Observations and data reduction
V838 Mon was observed with the VLTI located at Paranal observatory in Chile, in 2021 and 2022. Observations were carried out using the 1.8 m Auxiliary Telescopes (ATs) and two instruments, MATISSE (Lopez et al., 2022) and GRAVITY (GRAVITY Collaboration et al., 2017). MATISSE is a four telescope beam combiner which covers the \(L\) (2.8-4.2 \(\mu\)m), \(M\) (4.5-5 \(\mu\)m) and \(N\) (8-13 \(\mu\)m) bands, while GRAVITY combines light in the \(K\) (1.9-2.4 \(\mu\)m) band. For both instruments, we intended to get 18 observing blocks (OBs) to perform image reconstruction. However, we were only able to obtain three OBs for MATISSE and fifteen OBs for GRAVITY. The technical details of our VLTI observations are presented in Tables 1-2. In the case of MATISSE, only the large and small configurations were employed, while for GRAVITY, observations using all three configurations were carried out. In particular, intermediate configurations were also used to sample better the UV plane. The UV coverages obtained for the MATISSE and GRAVITY observations are shown in Figs. 1 & 2, respectively.
MATISSE observations were carried out in low spectral resolution (R\(\sim 30\)) using the GRA4MAT mode, in which the GRAVITY fringe tracker is used to stabilize fringes for MATISSE (Lopez et al., 2022). Each MATISSE observation for V838 Mon consisted of a CAL-SCI-CAL observing sequence in which two calibrator stars, 20 Mon (spectral type K III) and HD 52666 (spectral type K5 III) were observed to calibrate the \(LMN\) bands. The source and calibrator fluxes in the \(LMN\) bands are given in Table 4.
The GRAVITY observations were carried out in medium spectral resolution (R\(\sim\)500) using its single-field mode in which the light is split equally between the fringe tracker channel and the science channel. We adopted a CAL-SCI-CAL sequence for each GRAVITY run. However, one calibrator was observed here, HD 54990 (spectral type K0/1 III).
Interferometric observations of V838 Mon were also obtained at the Center for High Angular Resolution Astronomy (CHARA) Array on UT 2022 Mar 2, 3, and 9. The CHARA
Figure 1: UV tracks for MATISSE observing runs in 2021–2022.
Array is operated by Georgia State University and is located at Mount Wilson Observatory in southern California. The array consists of six 1-meter telescopes arranged in a Y-configuration with baselines ranging in length from 34 m to 331 m (ten Brummelaar et al. 2005). We combined the light from 4 to 5 of the telescopes using the Michigan InfraRed Combiner-eXeter (MIRC-X) beam combiner in the \(H\)-band using the low spectral resolution (R=50) prism that disperses light over 8 spectral channels (Anugu et al. 2020). On the last night, additional simultaneous observations were obtained using the Michigan Young STar Imager (MYSTIC) combiner (Setterholm et al. 2022) in the \(K\)-band in low spectral resolution (R=49) mode (Monnier et al. 2018). The CHARA MIRC-X and MYSTIC observation log is given in Table 3. On each night, we alternated between observations of V838 Mon and calibrator stars to calibrate the interferometric transfer function. The \(H\) band fluxes are presented in Table 6, while the uniform disk diameters of the calibrators in the \(HK\) bands were adopted from Bourges et al. (2014) and Bourges et al. (2017) and are listed in Table 7.
### MATISSE data reduction
The \(LMN\)-band MATISSE data were reduced using version 1.7.5 of the ESO data reduction pipeline. The final products were OIFITS files (version 2) comprising uncalibrated interferometric observables, including six dispersed squared visibilities and three independent closure phase measurements per exposure. The observations of V838 Mon were then calibrated using our measurements of the calibrator stars listed in Table 4, whose diameters and fluxes were taken from the Mid-infrared stellar Diameters and Fluxes compilation Catalog1(MDFC; Cruzalebes et al. 2019, see Table 2). The calibration of squared visibilities was performed by dividing the raw squared visibilities of V838 Mon by those of the calibrators, corrected for their diameter (which is called the interferometric transfer function). The calibrated \(LMN\) spectra of V838 Mon were obtained by multiplying the ratio between the target and calibrator raw fluxes measured by MATISSE at each wavelength by a model of the absolute flux of the calibrator. This model was taken from the PHOENIX stellar spectra grid (Husser et al. 2013).
Footnote 1: MDFC is available through VizieR service at [https://vizier.u-strasbg.fr/viz-bin/VizieR](https://vizier.u-strasbg.fr/viz-bin/VizieR)
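To make the visibility calibration step described above concrete, the following is a minimal, illustrative sketch of the transfer-function division for a single squared-visibility point. It is our own simplified stand-in rather than the ESO pipeline code, and the function name and all numerical values are assumptions chosen only for the example.

```python
# Illustrative calibration of a squared visibility:
# V2_sci(cal) = V2_sci(raw) / [ V2_cal(raw) / V2_cal(model) ].
import numpy as np
from scipy.special import j1

MAS_TO_RAD = np.pi / (180.0 * 3600.0 * 1000.0)

def uniform_disk_v2(baseline_m, wavelength_m, diameter_mas):
    """Squared visibility of a uniform disk (nonzero baseline assumed)."""
    x = np.pi * baseline_m / wavelength_m * diameter_mas * MAS_TO_RAD
    return (2.0 * j1(x) / x) ** 2

# Assumed example numbers (not actual measurements).
baseline, wavelength = 130.0, 3.5e-6        # m, m (L band)
cal_diameter = 1.9                          # mas, calibrator diameter from a catalog
v2_sci_raw, v2_cal_raw = 0.21, 0.55         # raw squared visibilities

transfer_function = v2_cal_raw / uniform_disk_v2(baseline, wavelength, cal_diameter)
print(f"calibrated V^2 = {v2_sci_raw / transfer_function:.3f}")
```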
Upon completing the data reduction and inspecting the resultant products, we found that the \(N\) band squared visibilities were of poor quality. This was due to the fact that the average \(N\)-band total flux of V838 Mon (\(\sim\)15 Jy) lies very close to the photometric sensitivity limit of MATISSE in the \(N\) band. Furthermore, due to the increased thermal background effects beyond 11 \(\mu\)m, the quality of the photometric data became even worse. This rendered the \(N\)-band photometry, and thus the absolute visibilities, unusable, and they were discarded. However, we were able to make use of the correlated fluxes (\(\sim\)5 Jy), which turn out to be slightly greater than the sensitivity limit of the GRA4MAT mode in \(N\)-band. Those observables allowed us to model geometrically the source in the \(N\) band and put constraints on the extent of the structure in this band.
### GRAVITY data reduction
GRAVITY data reduction was performed with the ESO GRAVITY data reduction pipeline version 1.4.1. This was carried out in the ESO Reflex workflow environment. For the calibration of squared visibilities we used the visibility calibration workflow in Reflex. Furthermore, the reduced \(K\) band spectra were flux calibrated similarly to the \(L\) band spectra obtained from MATISSE. The flux calibration routine from the MATISSE consortium was used for this purpose and, just like in the case of the \(M\) band, the \(K\) band spectra were also similarly normalized and later compared to synthetic spectra.
### CHARA data reduction
CHARA data was reduced and calibrated by the support astronomers at the CHARA array. The data were reduced using the standard MIRC-X and MYSTIC pipeline (version 1.3.5) written in python2 and described by Anugu et al. (2020). The pipeline produces calibrated visibility amplitudes for each pair of telescopes and closure phases for each combination of three telescopes. To assess the data quality, the calibrators were checked against each other on each night. The calibrators showed no evidence for binarity based on a visual inspection of the data, and the diameters derived from the measured visibilities were con
Figure 3: UV coverage for CHARA observing runs in the \(HK\) bands.
Figure 2: UV coverage for GRAVITY observing runs.
sistent with the expected values within uncertainties. The calibrated OIFITS files will be available in the Optical Interferometry Database3 and the CHARA Data Archive4.
Footnote 3: [https://oidb.jmmc.fr/index.html](https://oidb.jmmc.fr/index.html)
Footnote 4: [https://www.chara.gsu.edu/observers/database](https://www.chara.gsu.edu/observers/database)
### Gemini South observation
V838 Mon was also observed twice with the Gemini South 8-m telescope using the high-resolution Zorro instrument (Scott et al., 2021; Howell & Furlan, 2022). Zorro provides simultaneous speckle imaging in two optical bands, here centered at 562 nm and 832 nm. V838 Mon was observed on 25 Feb 2021 UT and, about one year later, on 20 March 2022 UT. The speckle imaging on each night consisted of a number of 60 ms frames taken in a row, the February 2021 observation taking 12,000 frames and the March 2022 observation taking 16,000 frames. The February observations occurred during a night of average seeing (0\(\aas@@fstack{\prime\prime}\)6) while the longer March 2022 observations occurred during good seeing (0\(\aas@@fstack{\prime\prime}\)45). The data were reduced using a standard speckle imaging pipeline with reduced output data products including reconstructed images and 5\(\sigma\) contrast limits (Howell et al., 2011). The two sets of observations agreed well with the March 2022 data, providing a higher S/N result.
Fig. 4 shows the resultant contrast curves for both filters and our 832 nm reconstructed image from March 2022. Fig. 5 shows a similar plot but for February 2021 along with image reconstructions at 832 nm and at 562 nm. There are no close (\(<\)1\(\aas@@fstack{\prime\prime}\)2) stellar companions detected within the angular and contrast limits achieved. However, as seen in detail in Fig. 6, the image at 832 nm is extended beyond just a point source with a slight elongation in the north-south direction and to the east by about 0\(\aas@@fstack{\prime\prime}\)03. Similarly, at 562 nm, we also see a noticeable elongation in the north-western direction, as shown in Fig. 7. The 562 nm elongation is in very good agreement with what we observe in the mid-infrared bands (see Fig. 30). While the elongation varies significantly at 832 nm, we note that the orientation (north-south) of this particular structure is remarkably similar to the PA (\(\sim\)3\({}^{\circ}\)) we obtain by fitting the \(N\) band visibility amplitudes with an elliptical disk model. Only with future extensive observations at Gemini South will it be possible to reliably constrain the orientation of the elongation of V838 Mon in the \(V\) and \(I\) bands.
\begin{table}
\begin{tabular}{c c c} \hline Date & Configuration & Seeing \\ \hline
01/04/2022 & A0-G1-J2-J3 & 0\(\aas@@fstack{\prime\prime}\)42 \\
02/03/2022 & A0-B2-C1-D0 & 0\(\aas@@fstack{\prime\prime}\)86 \\
02/03/2022 & A0-B2-C1-D0 & 0\(\aas@@fstack{\prime\prime}\)82 \\
02/04/2022 & A0-B2-C1-D0 & 0\(\aas@@fstack{\prime\prime}\)90 \\
02/04/2022 & A0-B2-C1-D0 & 1\(\aas@@fstack{\prime\prime}\)00 \\
02/04/2022 & A0-B2-C1-D0 & 0\(\aas@@fstack{\prime\prime}\)62 \\
02/05/2022 & A0-D0-G2-J3 & 0\(\aas@@fstack{\prime\prime}\)57 \\
02/06/2022 & D0-G2-J3-K0 & 0\(\aas@@fstack{\prime\prime}\)50 \\
02/06/2022 & D0-G2-J3-K0 & 1\(\aas@@fstack{\prime\prime}\)04 \\
02/10/2022 & D0-G2-J3-K0 & 0\(\aas@@fstack{\prime\prime}\)56 \\
02/11/2022 & A0-G1-J2-J3 & 0\(\aas@@fstack{\prime\prime}\)69 \\
02/13/2022 & A0-G1-J2-J3 & 0\(\aas@@fstack{\prime\prime}\)66 \\
03/11/2022 & A0-G1-J2-J3 & 0\(\aas@@fstack{\prime\prime}\)55 \\
03/25/2022 & A0-G1-J2-J3 & 0\(\aas@@fstack{\prime\prime}\)41 \\
03/25/2022 & A0-G1-J2-J3 & 0\(\aas@@fstack{\prime\prime}\)40 \\ \hline \end{tabular}
\end{table}
Table 1: VLTI/GRAVITY observation log.
Figure 4: Contrast (in magnitudes) curves for both filters from March 2022. The speckle image reconstruction at 832 nm is shown in the top right corner.
Figure 5: Same as Fig. 4 but for February 2021. The speckle image reconstruction at 562 nm is also shown.
Figure 6: Speckle image reconstruction from March 2022 at 832 nm. A noticeable north-south elongation is visible. The image field of view (FoV) is 0\(\aas@@fstack{\prime\prime}\)85 by 0\(\aas@@fstack{\prime\prime}\)55. North is up, east is left.
## 3 Geometrical modelling
The resultant data products for the above-mentioned VLTI and CHARA instruments are the interferometric observables, i.e. squared visibilities (\(V^{2}\)) and the closure phases. The squared visibilities represent the fringe contrast, thus an object is said to be completely resolved in the case that the value for the squared visibility is zero, and completely unresolved in the case the value is one. The squared visibilities can be used to constrain the size of the source. Closure phases are the sum of the individual phase measurements by telescopes within a particular triangular configuration in the array. This results in the atmospheric phase cancelling out, thereby only leaving the sum of intrinsic phases of the object (Jennison, 1958). Closure phases are a probe for asymmetries, so deviations from values of 0\({}^{\circ}\) or 180\({}^{\circ}\) would indicate some deviation from centro-symmetry of the source.
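As an illustration of how the squared visibilities constrain the angular size of a source, the sketch below fits a uniform-disk diameter to synthetic data. It is a simplified stand-in (assuming NumPy/SciPy) for the dedicated model-fitting tools used in the following subsections, and every number in it is invented for the example.

```python
# Fit a uniform-disk angular diameter to synthetic squared visibilities.
import numpy as np
from scipy.special import j1
from scipy.optimize import curve_fit

MAS_TO_RAD = np.pi / (180.0 * 3600.0 * 1000.0)

def uniform_disk_v2(spatial_freq, diameter_mas):
    """V^2 of a uniform disk; spatial_freq = baseline/wavelength in rad^-1."""
    x = np.pi * spatial_freq * diameter_mas * MAS_TO_RAD
    return (2.0 * j1(x) / x) ** 2

# Synthetic 'observations': a 3.5 mas disk on 30-140 m baselines at 3.5 microns.
rng = np.random.default_rng(1)
baselines = np.linspace(30.0, 140.0, 25)            # m
sfreq = baselines / 3.5e-6                          # rad^-1
v2_obs = uniform_disk_v2(sfreq, 3.5) + rng.normal(0.0, 0.01, sfreq.size)

popt, pcov = curve_fit(uniform_disk_v2, sfreq, v2_obs, p0=[2.0])
print(f"best-fit diameter: {popt[0]:.2f} +/- {np.sqrt(pcov[0, 0]):.2f} mas")

# An elliptical disk can be modelled the same way after rotating the (u, v)
# baseline components by the position angle and stretching one axis by the
# axis ratio before forming the effective spatial frequency.
```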
### \(L\)-band geometrical modelling (MATISSE)
Previous geometrical modelling and preliminary imaging results in the \(L\) band seem to suggest the presence of an elongated and tilted structure that enshrouls the central merger remnant (Mobeen et al., 2021). The previous VLTI observations in 2020 were obtained using only the large configuration. In the current study, V838 Mon was observed with the small (maximal baseline is \(\approx\) 30 m) and large (minimal baseline \(\approx\) 140 m) configurations in the \(L\)-band. A quick look at the squared visibilities as a function of spatial frequency suggests that the source is mostly resolved at the longer baselines (see Fig. 8). However, even with the most extended array configuration, V838 Mon is not completely resolved as the squared visibilities drop to a minimum of about 0.3. At the shortest baselines, V838 Mon is mostly unresolved, with the squared visibilities being at around 0.9. As a function of wavelength, though, it is clear that the squared visibilities do not show much variation, which suggests that no major chromatic effects are in play. At the very edge of the wavebands, the data was much noisier. We excluded it from our analysis and restricted ourselves to the wavelength range of 3-4 \(\mu\)m in the \(L\) band.
A qualitative look at the closure phases (see Fig. 8) indicates that they are mostly close to zero, with some very small deviations of about a few degrees (maximum \(\sim\)3\({}^{\circ}\)). This suggests a slight asymmetry in the system. Again, similar to the squared visibilities we do not see much variation in the closure phases as a function of wavelength, which suggests that the shape of the object is not changing over the covered wavelength range.
To interpret the MATISSE data, we employed geometrical modelling, because the UV coverage of the MATISSE
\begin{table}
\begin{tabular}{c c c} \hline Object & \(H\) band & Reference \\ & [mag] & \\ \hline V838 Mon & 5.86 & 1 \\ HD 54990 & 3.89 & 2 \\ HD 61039 & 5.54 & 2 \\ HD 59230 & 4.16 & 2 \\ \hline \end{tabular}
\end{table}
Table 6: Total \(H\)-band fluxes of V838 Mon and the calibrator.
\begin{table}
\begin{tabular}{c c c c} \hline \multicolumn{1}{c}{Date} & Combiner & Mode & Configuration \\ \hline
03/02/2022 & MIRC-X & H-Prism50 & S1-S2-E2-W1-W2 \\
03/03/2022 & MIRC-X & H-Prism50 & S1-S2-E2-W1-W2 \\
03/09/2022 & MIRC-X & H-Prism50 & S1-S2-E2-W1-W2 \\
03/09/2022 & MYSTIC & K-Prism49 & S1-S2-E2-W1-W2 \\ \hline \end{tabular}
\end{table}
Table 3: CHARA observation log.
\begin{table}
\begin{tabular}{c c c} \hline Object & \(L\) band & \(M\) band & \(N\) band & Reference \\ & [Jy] & [Jy] & [Jy] & \\ \hline V838 Mon & 5 & 4 & 30 & 1 \\ HD 52666 & 132 & 67 & 18 & 2 \\
20 Mon Mon & 30.3 & 19.2 & 4.31 & 2 \\ \hline \end{tabular}
\end{table}
Table 4: Total \(LMN\)-band fluxes of V838 Mon and the calibrators.
\begin{table}
\begin{tabular}{c c c} \hline Object & \(K\) band & Reference \\ & [mag] & \\ \hline V838 Mon & 5.08 & 1 \\ HD 54990 & 3.77 & 2 \\ \hline \end{tabular}
\end{table}
Table 5: Total \(K\)-band fluxes of V838 Mon and the calibrator.
\begin{table}
\begin{tabular}{c c c} \hline Object & \(H\) band & Reference \\ & [mag] & \\ \hline V838 Mon & 5.86 & 1 \\ HD 54990 & 3.89 & 2 \\ HD 61039 & 5.54 & 2 \\ HD 59230 & 4.16 & 2 \\ \hline \end{tabular}
\end{table}
Table 6: Total \(H\)-band fluxes of V838 Mon and the calibrator.
\begin{table}
\begin{tabular}{c c c} \hline Object & \(H\) band & Reference \\ & [mag] & \\ \hline V838 Mon & 5.86 & 1 \\ HD 54990 & 3.89 & 2 \\ HD 61039 & 5.54 & 2 \\ HD 59230 & 4.16 & 2 \\ \hline \end{tabular}
\end{table}
Table 6: Total \(H\)-band fluxes of V838 Mon and the calibrator.
\begin{table}
\begin{tabular}{c c c} \hline Calibrator & UDD, \(H\) & UDD, \(K\) & e\({}_{\text{UDD}}\) \\ & [mas] & [mas] & [mas] \\ \hline HD 54990 & 0.863 & 0.867 & 0.079 \\ HD 61039 & 0.377 & 0.379 & 0.009 \\ HD 59230 & 0.749 & 0.753 & 0.072 \\ \hline \end{tabular}
\end{table}
Table 7: \(H\)-band calibrator uniform disk diameters.
Figure 7: Speckle image reconstruction from February 2021 at 562 nm. A prominent north-west elongation is visible. The image FoV is 0\(\aas@@fstack{\prime\prime}\)21 by 0\(\aas@@fstack{\prime\prime}\)14. North is up, east is left.
\begin{table}
\begin{tabular}{c c c} \hline Object & \(K\) band & Reference \\ & [mag] & \\ \hline V838 Mon & 5.08 & 1 \\ HD 54990 & 3.77 & 2 \\ \hline \end{tabular}
\end{table}
Table 5: Total \(K\)-band fluxes of V838 Mon and the calibrator.
2021/2022 data alone is insufficient for comprehensive imaging. We used the modelling software LITPRO provided by Jean-Marie Mariotti Center5 (JMMC), which allows the user to fit simple geometrical models such as disks, point sources and ellipses to the observed visibilities and closure phases. LITPRO uses a \(\chi^{2}\) minimization scheme to compute the model parameters and their error bars. For the purpose of fitting the V838 Mon data, we tried four models: a uniform circular disk (CD), an elliptical disk (ED), a circular Gaussian, and an elliptical Gaussian. With a reduced \(\chi^{2}\) of 6, we find the elliptical models to better represent the data. The plots of the modelled and observed visibilities are shown in Fig. 8. Additionally, we are able to obtain precise estimates for the size and PA for the various models. These are listed in the Table 6. The obtained parameters for size and orientation are in good agreement with what previous MATISSE observations had indicated (Mobeen et al. 2021). In Mobeen et al. (2021), the elliptical disk model yielded a semi-major axis PA of \(-40^{\circ}\pm 6^{\circ}\), an angular diameter of \(3.28\pm 0.18\) mas, and a stretch ratio of \(1.40\pm 0.1\). In this study, we get values of \(-41\aas@@fstack{\circ}34\pm 0\aas@@fstack{\circ}49\), \(3.69\pm 0.02\) mas, and \(1.56\pm 0.01\), respectively. It is evident that these matters have remained largely the same between Jan/Mar 2020 and Oct 2021/Mar 2022. This means that the feature in the \(L\) band is non-transient on a timescale of years. The absence of any change in PA also suggests that no dynamical variations have occurred in the post merger environment. Thus, it would seem that the disk-like feature is stable and long-lived in the \(L\) band. This allowed us to combine the two datasets and attempt imaging (see Sect. 4.1).
Footnote 5: [https://www.jmmc.fr/english/tools/data-analysis/oifits-explorer/](https://www.jmmc.fr/english/tools/data-analysis/oifits-explorer/)
### \(N\)-band geometrical modelling (MATISSE)
As stated previously, despite the generally poor quality of the \(N\) band data due to thermal background and sensitivity limits, we were still able to obtain the correlated fluxes for V838 Mon for two of the three nights. Being measurements of the fringe amplitude, correlated fluxes are a proxy for fringe contrast and allow the determination of the source size. We modelled the \(N\) band visibility amplitudes similar to the geometrical modelling we did for the squared visibilities in other bands. Given the scarce number of data points, we used simple geometries to model the observables. In the \(N\) band, our goal was to obtain a size measurement for the structure and also constraints on its ellipticity. We used two models, a uniform disk and an elliptical disk. The uniform disk model fit yields an angular diameter of \(87.73\pm 0.01\) mas, while the elliptical disk model yields an angular diameter of \(48.70\pm 2.40\) mas, a PA of \(5\aas@@fstack{\circ}3\pm 0\aas@@fstack{\circ}70\), and a stretch ratio of \(5.68\pm 0.31\). These fits are shown in Fig. 9. If we consider the size estimate obtained from the elliptical disk model (semi-major axis of \(\sim 300\) mas), then it would seem that the most extended structure surrounding V838 Mon exists in the \(N\) band. The PA points to a north-south elongation in the structure, which is drastically different from that seen in other bands. This means that the \(N\) band structure is a distinct feature that may not be dynamically related to structures seen in the \(HKL\) bands.
### \(M\)-band geometrical modelling (MATISSE)
For the first time, we were able to observe V838 Mon in the \(M\) band using MATISSE. However, due to the scarce data in this band, we did not attempt any image reconstruction. As can be seen in Fig. 10, the shape of the squared visibilities as a function of spatial frequency shows a sinusoidal-like modulation. It is prominent across all spatial frequencies, but becomes more noticeable at longer baselines. The amplitude variation lies in the range 0.3-0.4, and the frequency of these modulations increases at lower spatial frequencies. The \(M\)-band closure phases are mostly close to zero. However, at larger spatial frequencies they tend to deviate by a few degrees (maximum \(\sim 5^{\circ}\)), thereby hinting towards asymmetries similar to what we see in the \(L\)-band.
The visibilities vary also as a function of wavelength and show a broad depression near \(4.7\,\mu\)m. This is illustrated in Fig. 11 for visibilities averaged over all baselines. A minimum is reached at \(4.7\,\mu\)m which is close to the fundamental band of CO (Fig. 12). The presence of the feature explains the sinusoidal variations seen in Fig. 10. Closure phases do not seem to show much wavelength dependence throughout this particular waveband. Our flux calibrated spectra, however, readily show an absorption feature at around \(4.6\)\(\mu\)m, which we identify as the
Figure 8: Observed (blue) and best fits for the \(L\) band squared visibilities and closure phases. The models ED and CD refer to elliptical disk and circular disk, respectively.
Figure 9: Observed (blue) and best fits for the \(N\) band visibility amplitudes. The models ED and CD refer to elliptical disk and circular disk, respectively.
fundamental band of CO. In Fig. 12, we compare the entire calibrated \(LM\) spectrum to a simulation of CO \(\Delta v=\pm 1\) absorption from an optically thin slab at thermal equilibrium and at a gas temperature of 30 K. The match in wavelength is not perfect, but can be explained by the complex pseudo-continuum baseline on both sides of the feature. The observed feature may not be pure absorption. Also, telluric absorption of water makes this part of the band particularly difficult to observe and calibrate. Our identification of the CO band is strengthened by observations of the same band in V838 Mon 2005 at a much higher spectral resolution (Geballe et al. 2007), and its presence in other red novae, for instance in V4332 Sgr (Banerjee et al. 2004).
Although the \(L\)-band spectrum shows clear signatures of circumstellar molecular features, limited by the small number of measured visibilities, we are currently able to make basic size measurement only by combining data in the entire observed band. We used a wide number of achromatic models, which include a circular disk (CD), circular Gaussian (CG), elliptical disk (ED), elliptical Gaussian (EG) and a two point source model (2P). The fits to these models and their details are given in Table 1, while the models of the visibilities and closure phases are also shown in Fig. 10. None of the models are able to reproduce exactly the modulation shown by the visibilities. Disk-like models with a size of 4-6 mas fit the data much better than a double source. The derived size most likely represents the dominant circumstellar component.
### \(K\)-band geometrical modelling (GRAVITY and MYSTIC)
The GRAVITY \(K\) band observations for V838 Mon are the most extensive of all the VLTI observations that we have obtained so far. Observations on all three array configurations, including the intermediate configurations, were successfully executed, which resulted in good sampling of the UV plane and allowed us to perform image reconstruction (see Sec. 4.2). We obtained 15 out of the requested 18 observing runs. Results are shown in Fig. 13. The source is well resolved at the longest baselines, as the squared visibilities fall to a minimum of 0.1. The longer baselines also reveal smaller peak-like features, which point to a complex morphology in the \(K\) band that cannot be explained by just a simple disk or a Gaussian model (shown more clearly in Fig. 14). The squared visibilities do not change considerably with wavelength. This suggests that the shape of the structure is not wavelength dependent throughout the entire \(K\) band. We do see, however, subtle spectral effects near 2 and 2.3 \(\mu\)m which we assign to water and CO absorption (see Figs. 16 and 17). The closure phases seem to scatter around zero by a few degrees and reach a maximum of \(\sim\)5\({}^{\circ}\)). The root-mean-square (RMS) scatter is 2\({}^{\circ}\) which suggests that the observed deviations in the closure phases can be attributed mostly to observational noise (within 4\(\sigma\)) and maybe some very small asymmetries present in V838 Mon. They also do not seem to vary with wavelength, thus indicating a non-variable morphology in the \(K\) band.
Although the \(K\) band data were used extensively for imaging, we nevertheless thought it prudent to also model the visibilities directly using simple geometrical models in order to obtain constraints on the size of the structure. The best fit parameters for all models are shown in Table 1. The size of the source lies in the range 1.2-2.6 mas, depending on the choice of model. The
Figure 11: Observed squared visibilities as a function of wavelength near the 4.7 \(\mu\)m dip. Data for all baselines were averaged.
Figure 12: Observed \(LM\)-band spectrum (solid line) is compared to a simple simulation (dashed line) of absorption at the CO fundamental band near 4.6 \(\mu\)m and at a gas temperature of 30 K. The simulated spectrum was smoothed to the spectral resolution of MATISSE.
Figure 10: Observations (blue) and best fit models for the \(M\) band squared visibilities and closure phases. Models ED, CD and 2P refer to elliptical disk, circular disk, and two point sources, respectively.
value of the PA in the \(K\) band is similar to measurements in the other bands.
We also were able to obtain \(K\) band spectra with a medium resolution (R\(\sim\)500). The spectrum was calibrated using the star HD 54990. This resulted in the removal of the major telluric feature caused by H\({}_{2}\)O near 2 \(\mu\)m, leaving us with absorption bands in the range 2.3 - 2.35 \(\mu\)m and less intense absorption features near 2 \(\mu\)m. We identified these as CO and H\({}_{2}\)O features, respectively. To ensure that this identification was accurate, we modelled the absorption at various temperatures using pgopher (Western, 2017) and spectroscopic data from the EXOMOL database (Tennyson et al., 2020). The models represent a simple slab of gas in local thermodynamic equilibrium (i.e., characterized by a single temperature) and optically thin absorption. Observed spectra are compared to the simple simulations in Figs. 16, 17, and 18. After experimenting with a wide variety of temperatures for both species, we find that a model with a CO temperature of 3000 K and an H\({}_{2}\)O temperature of 1000 K best represent the data, but due to the poor spectral resolution these are only very rough constraints. The absorption may arise in the photosphere or very close to it in the circumstellar gas (cf. Geballe et al., 2007). We use geometrical modelling to determine the sizes of the source when the visibilities are limited to spectral regions where molecular absorption is strongest. For CO, we restrict the GRAVITY data to 2.29-2.37 \(\mu\)m, while for water 1.97-2.07 \(\mu\)m was selected. By fitting circular disk models to each of these datasets, we obtain angular diameters of \(2.1\pm 0.1\) mas and \(2.0\pm 0.1\) mas, respectively. With the uncertainties, these are consistent with the size obtained for the entire band.
We also obtained a single night of MYSTIC \(K\) band observations and geometrically modelled the visibilities with a uniform disk and an elliptical disk model. The results are shown in Fig. 15. The uniform disk model yields an angular diameter of \(1.57\pm 0.01\) mas while the elliptical disk model yields an angular diameter of \(1.72\pm 0.01\) mas, a PA of \(-41.6\pm 1.5^{\circ}\), and a stretch ratio of \(1.28\pm 0.01\). Due to the longer CHARA baselines, V838 Mon is much better resolved, which yields a smaller angular diameter than that obtained with just the GRAVITY observations. Additionally, we also combined both the GRAVITY and CHARA squared visibilities and performed a combined uniform disk fit to the dataset, which gave an angular diameter estimate of \(1.94\pm 0.01\) mas. This value is essentially the same as the one obtained from fitting only the GRAVITY squared visibilities, which suggests that combining them with the MYSTIC squared visibilities does not result in a significant change in the size estimate for V838 Mon.
The mean of the MYSTIC closure phases is \(\sim\)2\({}^{\circ}\) with an RMS of \(\sim\)11\({}^{\circ}\), which strongly hints towards the presence of asymmetries. Unlike the GRAVITY dataset, the MYSTIC observations lasted only a single night because of which we could only geometrically model the closure phases. The results of this modelling are presented in Sect. 4.5.
### \(H\)-band geometrical modelling
CHARA MIRC-X observations allowed us to observe V838 Mon in the \(H\) band as well. Previous measurements in the
Figure 14: Zoomed-in section on some of the ‘bump’ features in the squared visibilities from Fig. 13.
Figure 13: \(K\) band squared visibilities and closure phases measured with GRAVITY (gray). Best-fit geometrical models for elliptical disk (ED) and circular disk (CD), are also shown (both have zero closure phases).
Figure 15: \(K\) band squared visibilities as measured with MYSTIC (blue). Best-fit geometrical models for elliptical disk (ED) and circular disk (CD) are also shown.
band were taken in 2012 using the AMBER instrument, and prior to that in 2004 (Lane et al. 2005) using the Palomar Testbed Interferometer (PTI). These recent CHARA measurements serve as a direct follow-up to the previous \(H\) band observations. The squared visibilities, shown in Fig. 19, suggest that the source is resolved at the longest CHARA baselines (\(\lesssim\)331 m), as the visibilities fall to a minimum of 0.1. The closure phases split into two groups. One is scattered around null phase, and the other, more populous group, is centered at phases \(\sim\)6deg, thereby hinting towards deviations from centro-symmetry in some parts of the remnant. This pattern is observed at all spatial frequencies and requires a more complex source structure, which we discuss in Sect. 4.3.
## 4 Image reconstruction
### \(L\) band image reconstruction
For the purpose of image reconstruction, we combined the 2021/2022 MATISSE \(L\) band data set with the one from 2020 used by Mobeen et al. (2021), since no significant variability occurred in the source over the course of the year. This is further confirmed by our modelling of the recent visibilities and phases presented in Sect. 3.1. Since both of the datasets, except for one, consist of observations made with the large configuration, any reconstructed image will be sensitive to compact features in V838 Mon. The image was constructed using the image reconstruction algorithm Multi-aperture Image Reconstruction Algorithm MIRA (Thiebaut 2008) which uses a likelihood cost function minimization method. For the \(L\) band image, we set a pixel size of 0.1 mas, which is smaller than the theoretical best resolution (\(\lambda_{L}/2B_{\rm max}\approx 2\) mas). This was done as an attempt to reveal any finer substructure in V838 Mon at this band. A FoV of 5 mas was chosen, taking into consideration the size of the \(L\) band structure inferred from geometrical modelling (see
Figure 16: Synthetic spectra generated for H\({}_{2}\)O at 1000 K and for CO at 3000 K are compared to the observed \(K\) band spectrum from GRAVITY over the entire observed band. The synthetic spectra are arbitrary scaled. The simulation does not include high-opacity effects, which may result in the poor fit to the observed saturated water band near 2 \(\mu\)m. The synthetic spectra were convolved to the spectral resolution of the GRAVITY resolution.
Figure 19: Observations (dark blue) and best fit models for the \(H\) band squared visibilities and closure phases. Circular (CD) and elliptical (ED) disk models are shown.
Figure 17: Synthetic spectra of water generated at various temperatures are compared to the observed spectrum (blue).
Figure 18: Synthetic spectra generated for the first-overtone CO bands at various temperatures. The solid blue line is the observed spectrum.
Table 1). The regularization scheme we employed was the hyperbolic regularization with a regularization parameter of \(10^{6}\). These regularization settings resulted in images with the lowest \(\chi^{2}\) values.
The resulting image displayed in Fig. 20 shows an asymmetrical, elliptical feature. The geometric orientation is along the PA that we obtained from our modelling presented in the prior sections.
### \(K\) band image reconstructions
Given the comprehensive nature of our GRAVITY observations, we were able to perform detailed image reconstruction of V838 Mon from the measured interferometric observables in the \(K\) band. For this purpose, we used two separate image reconstruction algorithms, SQUEEZE (Baron et al. 2010) and MIRA (Thiebaut 2008). Both these algorithms attempt to solve for the best image given the data and a model of the image. They employ different methodologies and minimization techniques from one another. Thus, by using multiple algorithms to reconstruct an image and comparing them to one another, one can distinguish between authentic features present in a source and artifacts that could have resulted from the different algorithms. For the purpose of our image reconstruction, we limited our FoV to 10 mas and set the pixel size to 0.1 mas, thus making the image 100 by 100 pixels. This choice for FoV and pixel size was adopted in order to super resolve any potential sub-mas features. With MIRA, just like in the \(L\) band, we used the hyperbolic regularization and found that the \(\chi^{2}\) was minimized with a regularization parameter of \(10^{6}\). While with the SQUEEZE algorithm, we used total variation and found the \(\chi^{2}\) to be minimized with a parameter value of \(10^{3}\). Total variation with a high enough regularization parameter value is identical to the hyperbolic regularization scheme. The values for the regularization parameters were determined by reconstructing images for a wide range of parameter values and by noting the value at which no further reduction in \(\chi^{2}\) takes place. This parameter is significant since it sets the weight for the image model against the data when the minimization scheme is implemented. We combined all the GRAVITY OIFITS files and merged them into a single file by using OI-Tools6. This yielded a single OIFITS file that was then used as the input file in the image algorithms. The number of iterations was set at 500. Finally, a randomly generated image was used as the starting image for all of our reconstruction attempts. The reconstruction results with both MIRA and SQUEEZE are shown in Fig. 21 and 22 respectively. We performed image reconstruction using a different strategy as well, whereby the combined GRAVITY data OIFITS file was binned into smaller wavelength increments of 0.05 \(\mu\)m. The image in the first wavelength bin in this process was constructed with a random starting image. However, in subsequent reconstructions for the other bins, the starting image used would be the image obtained for the preceding bin. Using this chain imaging method helped us to significantly decrease the value of the reduced \(\chi^{2}\), i.e. from \(\approx 150\) to \(\approx 8\) in the final image. Thus, the final image is at the endpoint of the waveband. The result of this strategy is displayed in Fig. 23.
Footnote 6: [https://github.com/fabienbaron/OITOOLS.jl/tree/main/demos](https://github.com/fabienbaron/OITOOLS.jl/tree/main/demos)
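The chain imaging strategy described above is conceptually a simple loop. Below is a minimal, illustrative sketch in Python; `reconstruct_image` is a hypothetical placeholder for a single MIRA or SQUEEZE run, and the wavelength range, bin width, and regularization weight are assumed values rather than the exact settings used here.

```python
import numpy as np

def chain_imaging(oifits_file, reconstruct_image, wl_min=2.0, wl_max=2.4,
                  bin_width=0.05, mu=1e6, npix=100, seed=0):
    """Chain imaging: reconstruct each wavelength bin, warm-starting from the
    image of the preceding bin. `reconstruct_image` is a hypothetical wrapper
    around a single MIRA/SQUEEZE run: (file, wave_range, start_image, mu) -> image."""
    rng = np.random.default_rng(seed)
    image = rng.random((npix, npix))       # random starting image for the first bin
    images = []
    for lo in np.arange(wl_min, wl_max, bin_width):
        image = reconstruct_image(oifits_file, wave_range=(lo, lo + bin_width),
                                  start_image=image, regularization=mu)
        images.append(image)
    return images                          # the last entry is the final image
```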
The resulting images (Figs. 21 and 22) from both the algorithms show a similar morphology in the \(K\) band. At 0.1 mas, multiple features are immediately noticeable. These include the clump-like features (labelled as A, B, D, E, and G in Fig. 21) that are roughly distributed along an axis at the PA value (\(\sim\) 40\({}^{\circ}\)) which we have obtained from geometrical modelling and which is similar to the PA found for the disk-like feature found in V838 Mon in other bands. Furthermore, we also see two narrow linear features (labelled C and F in Fig. 21) stretching along the northeast direction, which appear to be nearly perpendicular to the axis defining the brighter clumps. When we look at the reconstructed image from SQUEEZE, we see that despite being generally similar to its MIRA counterpart, the structure in the SQUEEZE image is a lot more continuous, somewhat resembling a clumpy disjointed ring seen at an intermediate inclination. The features that correspond to regions A, B, D, E, and G in Fig. 21, are present in Fig. 22, however, features C and F are not clearly distinguishable in the SQUEEZE image. This could mean that the linear features are imaging artifacts.
Our highest-resolution (0.1 mas) \(K\)-band images reveal a very clumpy morphology that surrounds the central elliptical
Figure 21: Reconstructed \(K\) band image of V838 Mon obtained using the MIRA imaging algorithm. Flux level is logarithmic, and the units are arbitrary.
Figure 20: Image reconstruction of V838 Mon done in the \(L\) band using the combined 2020–2022 MATISSE data. Flux is in a logarithmic scale in arbitrary units.
structure (see Fig. 21). The major clumps seem to almost trace out a ring with a size of about 5 mas, while the inner source spans 2 mas. It is worth noting that at the adopted pixel size of 0.1 mas, the source is already super-resolved compared to the resolution actually achievable with the longest baseline at the VLTI (i.e. \(\sim\)1 mas). Thus, the finer details such as the clumps and linear features could simply be image artefacts arising from the use of too high a resolution. To further explore the effects of super-resolution, we performed image reconstruction using larger pixel sizes of 0.5 and 0.8 mas. These results can be seen in Figs. 15-16. The resulting less super-resolved images show only a single extended feature, and the inner feature seen in the more resolved images ceases to be distinguishable. Furthermore, the clumps which were prominent in the 0.1 mas image also seem to have vanished; however, the extended structure in the 0.5 mas and 0.8 mas images suggests that the circumstellar environment is asymmetrical, with a morphology that could be described as bipolar. The orientation of this bipolar structure is along the same axis defined by the PA from the geometrical model fitting. It is likely that the outer structure is indeed an example of the outflows that are expected to be produced in post-merger environments, similar to what is seen in the sub-mm regime in the stellar merger remnant V4332 Sgr (Kaminski et al. 2018).
### \(H\) band image reconstruction
As for the \(K\) band data, we employed the same imaging algorithms and attempted image reconstruction for the CHARA observations in the \(H\) band. The image FoV was set to 10 mas, a pixel size of 0.1 mas was chosen, and the starting image was randomly generated. The regularization parameter was set to 1000 for both MIRA and SQUEEZE imaging. The resulting images (Figs. 24 and 25) show an elongated structure oriented at a PA of \(-40^{\circ}\). The physical extent of this feature is about 2 mas, thus allowing us to probe the innermost parts of the circumstellar environment. Although the images provide a very good fit to the data (reduced \(\chi^{2}=1\)), some artifacts are also present. These might have occurred because we were only able to use 4-5 telescopes on any given night, rather than the full six-telescope configuration of CHARA.
It would seem that in the \(H\) band too we are seeing a similar bipolar feature but at much smaller spatial scales, placing it closer to the central merger remnant. The reconstructed images in the \(H\) band show a structure that is similar in morphology to its counterparts in other wavelength regimes. The images indicate the presence of asymmetrical structures in the south-east and north-west regions of V838 Mon. The asymmetry is most noticeable in the northern part of the bipolar structure, which seems to be misaligned with respect to the southern feature. A crucial difference in the \(H\) band structure is the relatively high closure phase deviations and the apparent asymmetrical shape of the northern lobe visible in the image reconstructions. The orientation of the CHARA feature is in good agreement with what is seen in the \(L\), \(M\), and \(K\) bands. This can be seen in the sketch of the mid-infrared structures in Fig. 30.
Figure 23: Reconstructed \(K\) band image of V838 Mon obtained using the chain imaging method with the MIRA imaging algorithm.
Figure 22: Reconstructed \(K\) band image of V838 Mon obtained using the SQUEEZE imaging algorithm.
Figure 24: Reconstructed \(H\) band image of V838 Mon obtained using the MIRA imaging algorithm.
### \(H\) band closure phase modelling
As mentioned previously, we were unable to explain \(H\) band closure phases with simple geometrical models (circular and elliptical disks, see Sect. 3.5). Given the magnitude of these closure phase deviations, it is apparent that the circumstellar structure in V838 Mon possesses significant asymmetries. This is further corroborated by our imaging results presented in Sect. 4.3. Similar to the \(K\) band images, the \(H\) band ones seem to show a bipolar structure which is roughly oriented along the same PA as all the other structures that encase the merger remnant. Given our imaging results in the \(H\) band, we again attempt to reproduce the closure phase deviations, this time using a more complicated multi-component model that better represents the structure as revealed by the \(H\) band image. A sketch of this model is shown in Fig. 26. The model includes a central 1 mas disk that represents the central star. In addition to that, we also include two elliptical disks ("clumps"). The semi-major axis of the lower ellipse is oriented roughly along the PA that the structures in other bands follow, while that of the upper ellipse is nearly horizontal. The modelled and observed closure phases are plotted as a function of spatial frequency in Fig. 27. We see that the more complex model inspired by the image reconstructions is able to explain the closure phase deviations quite satisfactorily.
### MYSTIC \(K\) band closure phase modelling
As stated before, the \(K\) band closure phases as observed by MYSTIC do show significant non-zero deviations. Therefore, we decided to model these closure phases geometrically to see if the observations can be well explained with an asymmetric intensity distribution. We used the interferometric modelling software PMOIRED (Merand, 2022) for this purpose. Using PMOIRED, we experimented with various combinations of geometrical building blocks (disks, Gaussians, etc.) in order to reproduce the closure phases as accurately as possible. Ultimately, we found the best fit model (\(\chi^{2}\) \(\sim\) 1.5) which consists of an elongated primary disk with a size of 1 mas and a smaller (0.5 mas) less intense secondary disk that is separated from the primary by \(\sim\)1.7 mas. The geometry of this model is shown in Fig. 28.
The closure phases produced by the model and their fit to the observations are shown in Fig. 29. We can see from the figure that the non-zero closure phases are explained quite satisfactorily by our model at all baselines. The orientation of the primary disk seems to agree well with the general orientation of V838 Mon as seen in the \(HK\) band image reconstructions. Furthermore, the PA of the smaller disk component with respect to the primary also closely resembles the general orientation of V838 Mon in the near to mid infrared bands. The modelling results prove that the closure phase deviations are well explained with the addition of an off-centre "clump".
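To make the geometric modelling above more concrete, the sketch below computes model closure phases for a two-component brightness distribution (an elongated primary disk plus a small off-centre 'clump'), broadly matching the best-fit geometry of Fig. 28. This is a simplified illustration and not the actual PMOIRED setup: uniform disks are assumed for both components, the PA and offset sign conventions are arbitrary, and the parameter values (1 mas primary at PA \(-40^{\circ}\), 0.5 mas clump at 1.7 mas separation, \(\sim\)1% of the flux) are taken from the text only as illustrative inputs.

```python
import numpy as np
from scipy.special import j1

MAS = np.pi / (180 * 3600e3)              # 1 milliarcsecond in radians

def v_disk(u, v, theta_mas):
    """Visibility of a circular uniform disk; (u, v) are spatial frequencies in rad^-1."""
    x = np.pi * theta_mas * MAS * np.hypot(u, v)
    return np.where(x > 0, 2.0 * j1(x) / np.where(x > 0, x, 1.0), 1.0)

def v_model(u, v, theta1=1.0, ratio=0.7, pa_deg=-40.0,
            theta2=0.5, sep_mas=1.7, sep_pa_deg=-40.0, f2=0.01):
    """Elongated primary disk plus an off-centre circular 'clump' with flux fraction f2."""
    pa = np.radians(pa_deg)
    up = u * np.cos(pa) - v * np.sin(pa)                   # rotate the (u, v) plane
    vp = (u * np.sin(pa) + v * np.cos(pa)) * ratio         # squash -> elliptical disk
    dra = sep_mas * MAS * np.sin(np.radians(sep_pa_deg))   # clump offset in radians
    ddec = sep_mas * MAS * np.cos(np.radians(sep_pa_deg))
    v1 = v_disk(up, vp, theta1)
    v2 = v_disk(u, v, theta2) * np.exp(-2j * np.pi * (u * dra + v * ddec))
    return (1.0 - f2) * v1 + f2 * v2

def closure_phase_deg(u12, v12, u23, v23, **kw):
    """Closure phase of a baseline triangle, with (u31, v31) = -(u12 + u23, v12 + v23)."""
    t = (v_model(u12, v12, **kw) * v_model(u23, v23, **kw)
         * v_model(-(u12 + u23), -(v12 + v23), **kw))
    return np.degrees(np.angle(t))

# e.g. a ~300 m and a ~200 m CHARA baseline at 2.2 microns
print(closure_phase_deg(1.4e8, 0.0, 0.0, 9.0e7))
```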
## 5 Discussion
### Jets
Lane et al. (2005) suggested in their study that V838 Mon could potentially possess an asymmetric structure, however due to their limited observations and absence of closure phase measurements they were unable to prove this conclusively. In Mobeen et al. (2021), the \(L\) band closure phases seemed to be hinting towards
Figure 27: Simulated and observed \(H\) band closure phases as a function of spatial frequency. The simulated closure phases are represented by red triangles, while the observations are represented by the blue circles. Simulations correspond to the structure shown in Fig. 26 and match the observations better than the simpler models described in Sect. 3.5.
Figure 25: Reconstructed \(H\) band image of V838 Mon obtained using the SQUEEZE imaging algorithm.
Figure 26: Model we used for explaining the \(H\) band closure phases. The red central component represents the star, while the blue ellipses may represent bipolar outflows hinted by the \(H\) band image reconstructions (cf. Figs. 24 and 25). About 95% of the model flux is contained within the central component.
minor asymmetries, as indicated by the small but non-zero deviations. However, we interpreted them with great caution given their small magnitude and the limited data. In this study, we have conclusively shown the existence of these asymmetries via geometrical modelling and image reconstruction in the \(HK\) bands. The GRAVITY and CHARA images (see Figs. 21 and 24) show an inhomogeneous bipolar structure. This inhomogeneous structure is more pronounced in the \(H\) band image, as we can see that the northern lobe is oriented at a different PA than its southern counterpart. The asymmetric lobes model is further corroborated by \(H\) band closure phases that are satisfactorily explained when we assume such a morphology for our geometrical model (see Figs. 26 and 27).
These outflows resemble jets, which are strongly believed to be present in Intermediate Luminosity Optical Transients (ILOT) such as Luminous Red Novae (LRN) and are thought to play an important role by providing the extra energy needed to unbind the common envelope. Soker (2016) shows that energy from the jets is transferred to the primary envelope during the grazing envelope evolution stage, which means that the jets could be the mechanism responsible for the outbursts seen in LRNs including V838 Mon. In a subsequent study, (Soker 2020) it was shown that the luminosity and total energy output of V838 Mon could be adequately explained if jets present in the system collide with a slow moving shell of material. The author concluded that this was a plausible scenario for V838 Mon and V4332 Sgr. In the case of V4332 Sgr it was shown using sub millimetre observations carried out at ALMA (Kaminski et al. 2018) that the object does indeed possess bipolar outflows consisting of molecular material. This means that jets were a main component in the LRN event in V4332 Sgr. If jets are the reason for the LRN in V838 Mon, then that necessitates that we should be able to observe jet like features in the source. Since we see a clear bipolar structure in the image reconstructions for V838 Mon presented in Sects. 4.2 and 4.3 this means that jets played a role in the energy output of the outburst in 2002 which resulted in V838 Mon brightening by several orders of magnitude. Soker (2023) shows that only jets can satisfactorily explain the rapid rise in luminosity that is seen in ILOTS such as V838 Mon and V1309 Sco just prior to the outbursts. As of now, no jets have been observed in V1309 Sco, but the clear detection of jets in V838 Mon, V4332 Sgr and CK Vulpeculae suggests that such bipolar features are perhaps universal amongst many other stellar merger remnants. Recently the Blue Ring Nebula was found to possess bipolar conical outflows which led Hoadley et al. (2020) to conclude that the object is a result of a stellar merger. Our interferometric observations of V838 Mon seem to be giving credence to the
Figure 28: Image for the best-fit geometric model comprising two components in the \(K\) band. The primary component is the elliptical disk with a size of \(\sim\)1 mas and elongated at a PA of \(-40\degr\). The secondary component is a smaller circular disk with a diameter of 0.5 mas and separated from the primary by \(\sim\)1.7 mas. The bulk of the model flux (\(\sim\) 99%) is contained in the primary component.
Figure 30: A sketch of the best fitting elliptical models for V838 Mon in the \(HKL\) bands as observed by MIRC-X, GRAVITY and MATISSE.
Figure 29: MYSTIC \(K\)-band closure phases (top) for the observed (points) and modelled (lines) visibilities representing the model in Fig. 28. The bottom panel shows the respective residuals.
idea that jets are an intrinsic feature of LRNs and persist even in the post-merger environment. In V838 Mon it seems that the jets are only prominent in the mid-infrared \(HK\) bands since the ALMA maps show the merger ejecta to be spherically symmetric at a spatial scale of \(\sim\) 25 mas and any jet-like feature seems to be absent (Kaminski et al., 2021). This contrasts with the ALMA maps for V4332 Sgr which show a very clear signature of a bipolar outflow (Kaminski et al., 2018). With extensive future VLTI observations of V4332 Sgr we will be able to determine the extent to which the milliarcsecond scale environment differs from the large scale structures as revealed by ALMA. It could be that the nature of every LRN jet is unique and is determined by a number of other factors such as the mass and stellar type of the progenitor stars, the surrounding medium and duration of the outburst, etc.
### Clumps
The super resolved GRAVITY \(K\) band image (see Fig. 21) points to the possible presence of clumps in the immediate circumstellar vicinity in V838 Mon. The GRAVITY closure phases alone do not seem to give us much insight, as the deviations are much too small (see lower panel in Fig. 13). However, since the CHARA baselines are longer by a factor of about three, the closure phases measured using MYSTIC show much higher deviations from 0\({}^{\circ}\) (discussed in Sect. 4.5). These MYSTIC closure phases, as evidenced by the modelling, suggest the presence of a small clump separated from the central star by \(\sim\) 1.7 mas. The size of the clump is 0.5 mas, which makes it too small to be detected by the VLTI but large enough to appear in our CHARA observations.
The clump lies approximately along the PA traced by the bipolar structure in V838 Mon, in fact its orientation seems to be quite similar to that of the northern lobe seen in the \(H\) band. This clump is the first ever detection of a sub milliarcsecond feature in V838 Mon. The exact nature of the clump, whether stable or transient as of now, remains elusive. ALMA maps of V838 Mon by Kaminski et al. (2021) also showed the presence of prominent clumps in the wind component of the merger remnant in the submillimeter regime. The formation of dust clumps is a prevalent phenomenon amongst red supergiants (RSGs) and asymptotic giant branch stars. VX Sagittarri which is a variable supergiant star has been found to possess an inhomogeneous atmosphere which contains spots (Chiavassa et al., 2010). It was shown by Chiavassa et al. (2009) using radiative hydrodynamical simulations that the interferometric observations for the RSG \(\alpha\) Ori could be explained using limb darkening models, which suggested the presence of asymmetric surface features. In addition to this, Kaminski (2019) showed using ALMA observations that multiple dust clumps had formed in the RSG VY Canis Majoris (VY CMa). A dust clump is also thought to have formed in the (RSG) Betelgeuse during the Great Dimming event of 2019-2020 (Montarges et al., 2021). Follow-up VLTI/MATISSE observations in 2020 were obtained by Cannon et al. (2023) which showed a complex closure phase signal in Betelgeuse. The authors constructed multiple radiative transfer models, such as those consisting of clumps and spots. Although they could not reproduce the closure phase signal exactly due to their use of only rudimentary models, they concluded that either clumps or other surface features were the main source of the closure phase deviations. In the case of V838 Mon, the closure phases are not as high as those in Betelgeuse observed by Cannon et al. (2023), but, as shown in Sect. 4.5, the MYSTIC closure phases can only be explained by the smaller clump (see Fig. 28).
Since the spectrum of V838 Mon resembles that of a late M-type supergiant (Evans et al., 2003; Loebman et al., 2015), we expect the circumstellar environment of V838 Mon to be somewhat similar to that of other evolved stars with clumpy surroundings. The particular clump that we have been able to constrain with MYSTIC could in fact be a smaller feature within the lobe/jet-like structure that is observed at longer wavelengths. This would mean that the lobes themselves could be clumpy and inhomogeneous. These clumps could be the result of the jet interacting with the surrounding, less dense material in V838 Mon during its common envelope phase. Soker (2019) proposes that this interaction between the jet and the envelope results in the formation of jet-inflated low-density bubbles. As a consequence, the surrounding medium is prone to clumping due to Rayleigh-Taylor instabilities. This would result in the jet departing from a purely axisymmetric morphology and becoming variable in direction and brightness. It could be the case that the observed \(H\) band clump was the result of such jet-inflated bubbles.
If the clump is a transient feature, then multiple follow-up observations with MIRCX/MYSTIC at CHARA could help to trace out its trajectory with respect to the central merger product. These observations could also help determine the time evolution of the size and brightness of the clump, which can help to determine whether the jet-inflated bubble formation scenario is plausible.
### Clump ejection scenario
The outburst in 2002 showed a complex multi-peaked light curve that was explained as the result of multiple shell ejection events in V838 Mon. These short-duration bursts of energy are thought to coincide with the ejection of the shells and material outflows (Tylenda, 2005; Soker & Tylenda, 2007). Recent \(V\) band photometry of V838 Mon also shows that the source has been gradually increasing in brightness over the past decade, especially in the \(UBV\) bands (Liimets et al., 2023). The photometry also shows amplitude variations of \(\sim\)0.4 mag, which occur with an approximate period of \(\sim\)300 days7.
Footnote 7: [https://asas-sn.osu.edu](https://asas-sn.osu.edu)
If the observed clump is a mass ejection event or part of the bipolar outflows seen in the \(H\) band images (see Fig. 24), then it could very likely be the case that the formation of such features correlates with the small brightness increases that are seen in the light curve modulations for V838 Mon. If true, then this would mean that the post-merger circumstellar environment is quite dynamic and is evolving over time scales of about a year. This is in stark contrast to the \(L\) band structure that appears to be stable and non-transient over timescales of years. Future extensive observations of V838 Mon with MYSTIC/MIRC-X could help to determine if the observed amplitude variations correlate with the appearance of 'clumps', which can help to constrain the source of the brightness fluctuations observed in V838 Mon.
### Contraction controversy
Previously, it was claimed that V838 Mon was gradually shrinking due to the geometrical thinning of the surrounding ejecta (Chesneau et al., 2014). Our recent size measurement in the \(K\) band shows that contraction has apparently "ceased" and the size of V838 Mon at 2 \(\mu\)m is now greater than that measured by Lane et al. (2005) just a few years after the outburst. This would imply that V838 Mon instead has now started expanding since the previous size measurement by Chesneau et al. (2014). Previous
interferometric observations by Lane et al. (2005) and Chesneau et al. (2014) were limited in scope due to a wide variety of factors.
For example, the observations by Lane et al. (2005) were done using the PTI which had only two baselines with a maximum baseline length of 85 m. As a result of this, V838 Mon was not properly resolved, as is evidenced by the fact that the lowest squared visibility measured by Lane et al. (2005) was about 0.6. Similarly, the VLTI-AMBER squared visibilities at 2 \(\mu\)m fitted by Chesneau et al. (2014) reached a minimum of about 0.2, which is a significant improvement over the previous measurement, but a small fraction of the source flux remained unresolved. This also explains why the size measurement obtained by Chesneau et al. (2014) is smaller than that estimated by Lane et al. (2005). Their fit yielded a size of 1.15 \(\pm\) 0.20 mas, which is about 40% less than the value measured by Lane et al. (2005) (1.83 \(\pm\) 0.06 mas), which seemed to have indicated that a contraction had taken place over the course of a decade.
Our GRAVITY and MYSTIC \(K\) band observations, on the other hand, were able to better resolve V838 Mon thanks to the longer baselines (maximum baselines of \(\sim\) 130 m and \(\sim\) 300 m on the VLTI and CHARA, respectively) and greater instrument sensitivity. The smallest observed value of \(V^{2}\) with the VLTI was about 0.1, while with CHARA it was \(\sim\) 0. Our size estimates for V838 Mon are therefore based on observations that almost fully resolve the source, which was not the case in the above-mentioned previous studies. For example, the uniform disk diameter measured with CHARA-MYSTIC, as stated earlier, is 1.57 \(\pm\) 0.01 mas, while with VLTI-GRAVITY it is 1.939 \(\pm\) 0.001 mas (see Table A.1); since the CHARA baselines are longer than the VLTI ones by about a factor of three, we conclude that the CHARA size estimate is the most accurate. If the initial size estimate by Lane et al. (2005) is assumed to be accurate, then our results imply that V838 Mon has indeed begun to expand. However, as we have discussed above, the 2005 measurements were limited by baseline length, and therefore it is highly likely that the angular diameter of V838 Mon was overestimated. This would explain the apparent contraction that was believed to be occurring in V838 Mon. Future observations with CHARA can help to determine the actual evolution of the size of V838 Mon by comparing them to the previous measurement from 2013 and the ones presented in this study.
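The argument above is easy to illustrate numerically. The sketch below fits a uniform-disk model to synthetic squared visibilities (the diameter, baselines, wavelength, and noise level are made-up, illustrative numbers, not our data) and shows how restricting the fit to short, poorly resolving baselines with \(V^{2}\gtrsim 0.6\), as in the PTI data, inflates the uncertainty on the recovered angular diameter.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import j1

MAS = np.pi / (180 * 3600e3)                    # 1 mas in radians

def v2_ud(sf, theta_mas):
    """Squared visibility of a uniform disk vs spatial frequency sf = B/lambda [rad^-1]."""
    x = np.pi * theta_mas * MAS * sf
    return (2.0 * j1(x) / x) ** 2

# Synthetic data: a 1.6 mas disk observed on 50-300 m baselines at 2.2 microns
rng = np.random.default_rng(1)
baselines = np.linspace(50.0, 300.0, 25)        # metres (CHARA-like coverage)
sf = baselines / 2.2e-6                          # spatial frequencies in rad^-1
v2_obs = v2_ud(sf, 1.6) + rng.normal(0.0, 0.01, sf.size)

theta, cov = curve_fit(v2_ud, sf, v2_obs, p0=[1.0])
print(f"full coverage:        {theta[0]:.2f} +/- {np.sqrt(cov[0, 0]):.2f} mas")

# Poorly resolving data only (V^2 >~ 0.6, as in the PTI observations)
short = v2_obs > 0.6
theta_s, cov_s = curve_fit(v2_ud, sf[short], v2_obs[short], p0=[1.0])
print(f"short baselines only: {theta_s[0]:.2f} +/- {np.sqrt(cov_s[0, 0]):.2f} mas")
```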
### Polarization in V838 Mon
The Gemini speckle image reconstructions in the \(V\) and \(I\) band speckle images show a morphology that is consistent with the interferometric imaging and modelling results in the mid-infrared. Both images (see Figs. 6 and 7) seem to suggest that the innermost vicinity in V838 Mon is quite noticeably bright at 562 nm and 832 nm. If this emission results from scattered light, then it is likely to produce a net polarization. It was shown by Wisniewski et al. (2003b) that a low degree of polarization (1%) was present in 2003 in the \(V\) band, with a measured PA of 150\({}^{\circ}\). The authors suggested that this polarization signal was pointing to asymmetries in the surroundings of V838 Mon. A follow-up measurement, six months later, showed that the PA of the polarisation vector flipped by 90\({}^{\circ}\)(Wisniewski et al. 2003a). This change over the course of six months seemed to suggest the dynamic, evolving nature of the dusty environment in V838 Mon. The Gemini observations reveal a complex circumstellar medium in V838 Mon that might still be dynamic and could show signs of varying intrinsic polarization and PA, especially in the \(V\) band. The above-mentioned observations by Wisniewski et al. (2003b) and Wisniewski et al. (2003a) were carried out over two decades ago, and since the dramatic change in PA occurred over a period of only six months, we expect the environment in V838 Mon to have significantly changed since then. Future polarimetric measurements could help to determine if linear polarization in V838 Mon has increased significantly in the last two decades. As we discuss earlier in the section, it could be possible that the observed jet like features in the \(K\) band could be transient phenomenon, in which case it would be important to study how the PA and degree of polarization change alongside the jets. We also note that the clumping scenario that we have discussed before could also be probed with further polarimetric observations. For instance, if the clump like feature that we see in our MYSTIC \(K\) band modelling results is indeed a mass ejection event, then it could cause significant changes in the degree of polarization and the orientation of the PA as it propagates through the dusty circumstellar environment.
These polarimetric observations can be combined alongside our interferometric results in this paper for radiative transfer modelling of the dusty environment in V838 Mon. Synthetic images at the \(HK\) bands can be produced from these simulations, which can then be compared to the reconstructed images that we presented in this study.
## 6 Conclusions
In this study, we were able to perform a multi-wavelength interferometric analysis of the stellar merger remnant V838 Mon in the \(HKLM\) bands using the VLTI instruments MATISSE and GRAVITY as well as the CHARA instrument MIRC-X/MYSTIC. In the \(HK\) bands we imaged the circumstellar environment in the immediate vicinity of the central merger remnant, while in the other bands we resorted to geometrical modelling to obtain constraints on the geometry and orientation of the post-merger environment. We were also able to compare the infrared image reconstructions with \(I\) and \(V\) band speckle imaging from Gemini South. We also modelled the spectral features present in the \(K\) and \(M\) bands and were able to place constraints on the temperatures. Our main findings are as follows.
* Geometrical modelling suggests that the \(L\) band structure has not varied at all since previous observations presented in Mobeen et al. (2021).
* Image reconstruction in the \(HK\) bands reveals a bipolar morphology that resembles jets. These 'jets' are present in both bands, while in the \(H\) band they appear to be slightly asymmetric as well.
* The super resolved GRAVITY \(K\) band images at 0.1 mas reveal clumpy outflows that surround the inner feature.
* The MYSTIC \(K\) band closure phases are well explained by a small (0.5 mas), off-centre circular feature that we identify as a potential clump.
* The orientation of the extended circumstellar feature (i.e. the outflows) tends to be along the same general direction i.e. north-west, with a PA that varies from -30\({}^{\circ}\) to -50\({}^{\circ}\) and an ellipticity in the range 1.1-1.6, depending on the wavelength, which is in agreement with prior studies (Chesneau et al. 2014; Kaminski et al. 2021). The \(N\) and \(I\) band structure however is oriented along the north-south direction
* 3000 K, thus hinting at their photospheric origin. We were also able to measure the sizes of the CO and water emitting regions, which are 2.1 mas and 1.985 mas, respectively. In the \(M\) band, we observe CO at 4.6 \(\mu\)m for the first time using interferometry.
###### Acknowledgements.
T. K. and M. Z. M. acknowledge funding from grant no. 2018/30/E/ST9/00398 from the Polish National Science Center. Based on observations made with ESO telescopes at Paranal Observatory under program IDs 0104.D-0101(C) and 0108.D-0628(D). This research has benefited from the help of SUV, the VLTI support service of the Jean-Marie Mariotti Center ([http://www.jmc.fr/sur.htm](http://www.jmc.fr/sur.htm)). This research has also made use of the JMMC SearchCal, LITpro, OIFitsExplorer, and Aspro services. This work is based upon observations obtained with the Georgia State University Center for High Angular Resolution Astronomy Array at Mount Wilson Observatory. The CHARA Array is supported by the National Science Foundation under Grant No. AST-1636624 and AST-203436. Institutional support has been provided by the GSU College of Arts and Sciences and the GSU Office of the Vice President for Research and Economic Development. Time at the CHARA Array was granted through the NOIRLab community access program (NOIRLab PropID: 2022A-426176; PI: M. Mobeen). SK acknowledges funding for MIRC-X from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Starting Grant No. 639898 and Consolidator Grant No. 101003096). JDM acknowledges funding for the development of MIRC-X (NASA-XP NNKDA0436, NSF-AST 1099165) and MYSTIC (NSF-AST 150564, NSF-AST 1999165). Some of the observations in this paper made use of the High-Resolution Imaging instrument Zorro, obtained under Gemini Proposal Number GNS-2021A-D1-D5. Zorro was funded by the NASA Exoplanet Exploration Program and built at the NASA Ames Research Center by Steve B. Howell, Nic Scott, Elliott P. Horch, and Emmett Quigley. Zorro was mounted on the Gemini South telescope of the international Gemini Observatory, a program of NSF's NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigacion y Desarrollo (Chile), Ministerio de Ciencia, Tecnologia e Innovacion (Argentina), Ministerio da Ciencia, Tecnologia, Inovacoes e Comunicacoes (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea).
2309.11014 | Ensembling Multilingual Pre-Trained Models for Predicting Multi-Label Regression Emotion Share from Speech | Speech emotion recognition has evolved from research to practical applications. Previous studies of emotion recognition from speech have focused on developing models on certain datasets like IEMOCAP. The lack of data in the domain of emotion modeling emerges as a challenge to evaluate models in the other dataset, as well as to evaluate speech emotion recognition models that work in a multilingual setting. This paper proposes an ensemble learning to fuse results of pre-trained models for emotion share recognition from speech. The models were chosen to accommodate multilingual data from English and Spanish. The results show that ensemble learning can improve the performance of the baseline model with a single model and the previous best model from the late fusion. The performance is measured using the Spearman rank correlation coefficient since the task is a regression problem with ranking values. A Spearman rank correlation coefficient of 0.537 is reported for the test set, while for the development set, the score is 0.524. These scores are higher than the previous study of a fusion method from monolingual data, which achieved scores of 0.476 for the test and 0.470 for the development. | Bagus Tris Atmaja, Akira Sasou | 2023-09-20T02:28:00Z | http://arxiv.org/abs/2309.11014v1 | Ensembling Multilingual Pre-Trained Models for Predicting Multi-Label Regression Emotion Share from Speech
###### Abstract
Speech emotion recognition has evolved from research to practical applications. Previous studies of emotion recognition from speech have focused on developing models on certain datasets like IEMOCAP. The lack of data in the domain of emotion modeling emerges as a challenge to evaluate models in the other dataset, as well as to evaluate speech emotion recognition models that work in a multilingual setting. This paper proposes an ensemble learning to fuse results of pre-trained models for emotion share recognition from speech. The models were chosen to accommodate multilingual data from English and Spanish. The results show that ensemble learning can improve the performance of the baseline model with a single model and the previous best model from the late fusion. The performance is measured using the Spearman rank correlation coefficient since the task is a regression problem with ranking values. A Spearman rank correlation coefficient of 0.537 is reported for the test set, while for the development set, the score is 0.524. These scores are higher than the previous study of a fusion method from monolingual data, which achieved scores of 0.476 for the test and 0.470 for the development.
speech emotion recognition, ensemble learning, multilingual speech processing, pre-trained model, emotion share
## I Introduction
Research on speech emotion recognition (SER) has focused mainly on recognizing categorical emotions like happy, angry, sad, or neutral. There are attempts to predict dimensional emotions like valence, arousal, and dominance; however, their application is still limited. The reason is that dimensional emotions are not as straightforward to interpret as categorical emotions. Moreover, previous studies on categorical emotion recognition are designed to choose a single class of emotions in a multiclass setting. In contrast, emotions, particularly those conveyed by vocal bursts, are bridged by smooth gradients (instead of a clear separation between discrete emotions) with continuously varying meanings [1]. In that study, the authors found 24 categorical emotions, which may not be limited to vocal bursts only but may also apply to general emotion perception or speech perception (since vocal bursts are commonly found within speech).
An attempt to analyze fine-grained emotion (i.e., as a regression task) instead of coarse-grained emotion (i.e., as a classification task) has been conducted from a linguistic point of view [2]. The authors mapped the degrees of valence and arousal of a given sentence onto six basic categorical emotions via several lexicons and corpora. The top model (using the WKB lexicon) obtained correlation scores of 0.65 for dimensional emotions (valence, arousal, dominance) and 0.416 for categorical emotions. Besides relying only on linguistic information, the evaluation was conducted on English data only.
Recently, multilingual approaches to tackling the limitations of speech technology have been proposed for the parent tasks of speech processing: automatic speech recognition, speech synthesis, and speech translation. Models like Whisper [3] and MMS [4] can precisely recognize speech in multiple languages. Spurred by this direction, research on speech emotion recognition has also been directed at tackling its limitations under multilingual settings. Assuming that emotion is a universal language (i.e., the pattern of angry speech is the same across languages or cultures), the multilingual approach is expected to work for speech emotion recognition with diverse language samples as well.
Multilingual SER attempts to recognize emotion regardless of the language. One option is to build a separate model for each language (possibly trained on multilingual data) with language detection as a pre-processing step; another is to build a single model for all languages. This study is aimed at the latter approach. Previous studies utilized transfer learning for cross-lingual SER [5], distribution adaptation [6], and implicit distribution alignment [7]. The last two studies were developed for cross-corpus SER with different languages.
The use of multiple models for SER has been shown to be effective for monolingual data. Ensemble techniques have usually been adopted to fuse several models built from different modalities and/or features. Examples are acoustic and linguistic information for dimensional SER [8, 9], acoustic and facial expression [10], and multi-feature fusion [11]. In a previous study [12], ensemble learning was also utilized to tackle the limitation of skewed training data in monolingual SER by taking the maximum, average, or log average of the output probabilities of the component classifiers.
This paper contributes to the evaluation of ensemble learning, combining several models via late fusion methods for predicting emotion share from speech in multilingual settings. The fusion was performed by averaging predictions of different
models. We evaluated eight multilingual pre-trained models since the dataset contains multilingual data from English and Spanish. The pre-trained models are chosen from those languages. We showed that by combining the models via ensemble learning with the late fusion method, we could improve the performance of the baseline model with a single model. The performance is measured using the Spearman rank correlation coefficient since the task is a regression problem with ranking values.
## II Dataset and Task
The dataset used in this study is obtained from the dataset of the ACM Multimedia 2023 Computational Paralinguistics Challenge (ComParE) Emotion Share Sub-Challenge [13], which is provided by Hume AI [14]. There are nine categorical emotions in the dataset: 'Anger', 'Boredom', 'Calmness', 'Concentration', 'Determination', 'Excitement', 'Interest', 'Sadness', and 'Tiredness'. The choice of these nine emotions is based on their more balanced distribution in the valence-arousal space. The basis for the dataset itself is more than 5,000 'seed' samples.
Seeds consist of various emotional expressions, which were gathered from openly available datasets, including MELD [15] and VENEC [16]. The seed samples were mimicked by speakers recruited via Amazon Mechanical Turk by the provider of the dataset [14]. The dataset consists of 51,881 'mimic' samples (a total of 41:48:55 h of data, mean 2.9 s, range 1.2 - 7.98 s) from 1,004 speakers aged from 20 to 66 years old. The data were collected from 3 countries with broadly differing cultures: the United States (English/EN), South Africa (English/EN), and Venezuela (Spanish/SP). Each seed sample was rated by the individual who imitated it using a 'select-all-that-apply' method. Seeds were assigned a mean of 2.03 emotions per rater (max: 7.11, min: 1.00), with a standard deviation of 1.33 emotions. The proportion of times a given seed sample was rated with each emotion was then applied to all mimics of that seed sample. This method results in a per-emotion share assigned by the speakers. The labels are then normalized by dividing all (floating point) scores by their highest score per sample; hence, each sample has a maximum score of 1.0 for one emotion category and scores of less than 1.0 for the other emotion categories. The baseline results for this dataset are a Spearman score of 0.500 for the development set and 0.514 for the test set (using wav2vec finetuned on an affective speech dataset with an SVM method). The labels of the test set are not open to the public; hence, only performance on the development/validation data can be calculated directly.
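A minimal sketch of this per-sample normalization (with made-up share values, not actual data from the corpus):

```python
import numpy as np

# Each row holds the per-emotion share of one sample (9 categories); dividing
# by the row maximum gives a top score of 1.0 and values below 1.0 elsewhere.
shares = np.array([[0.10, 0.40, 0.20, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05],
                   [0.00, 0.10, 0.60, 0.10, 0.05, 0.05, 0.05, 0.03, 0.02]])
labels = shares / shares.max(axis=1, keepdims=True)
print(labels.max(axis=1))   # -> [1. 1.]
```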
Table I shows the characteristics of the dataset. The training/test split follows common machine/deep learning practice, i.e., 80/20. About 18.3% of the samples are allocated to the test set (without labels), while the rest are for training (including validation/development). The female/male ratio is about 63.5/36.5, with more female samples than male. The training/test split follows speaker-independent criteria with a total of 1,004 speakers; 20% of them are allocated to the test set while the rest are used for training and development.
## III Pre-Trained Models
We evaluated nine pre-trained models as acoustic embedding feature extractors and fused the results of these models for ensemble learning. Ensemble learning leverages the advantages of multiple models to achieve a better performance score. In this study, we evaluated a single late fusion method by averaging the emotion share predictions of the different models. The baseline model is the robust version of wav2vec 2.0 [17] finetuned on an affective speech dataset [18]. For multilingual pre-trained models, we employed XLS-R 53 [19] and the XLSR variants [20] (XLSR-300M, XLSR-1B, and XLSR-2B). For XLS-R 53 and XLSR-1B, we also evaluated versions finetuned on English (EN) and Spanish (SP) datasets. Finetuned versions for these languages are not available for the other XLSR variants. The complete list of pre-trained models is given in Table II.
## IV Classifier and Hyperparameter Search Space
We employed a support vector machine (SVM) for regression (SVR, support vector regression). The kernel for the SVR is linear (LinearSVR in the scikit-learn toolkit). The optimal parameters for this SVR were searched using the grid search algorithm, namely the regularization parameter C and the choice of solving either the 'dual' or 'primal' optimization problem. Other settings, like the scoring function, were also searched; the scoring uses either the negative mean absolute error (NMAE) or the negative mean squared error (NMSE). The maximum number of iterations is fixed at 5000. The value ranges for these parameters are shown in Table III.
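A minimal sketch of this setup with scikit-learn is shown below. The grid values, the use of MultiOutputRegressor to handle the nine emotion targets, and the placeholder data are our assumptions for illustration (the actual grids are those of Table III); the loss is fixed to 'squared_epsilon_insensitive' so that both the dual and primal solvers are valid, and the scoring can be switched to 'neg_mean_squared_error'.

```python
import numpy as np
from sklearn.svm import LinearSVR
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import GridSearchCV

# X: pooled acoustic embeddings from one pre-trained model; y: nine emotion shares.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 1024)), rng.random((200, 9))    # placeholder data

# One LinearSVR per emotion category; loss fixed so both solvers are valid.
base = MultiOutputRegressor(
    LinearSVR(loss="squared_epsilon_insensitive", max_iter=5000))
param_grid = {
    "estimator__C": [1e-4, 1e-3, 1e-2, 1e-1, 1.0],   # assumed values, see Table III
    "estimator__dual": [True, False],                 # dual vs. primal problem
}
search = GridSearchCV(base, param_grid, scoring="neg_mean_absolute_error", cv=3)
search.fit(X, y)
print(search.best_params_)
dev_predictions = search.predict(X)                   # per-emotion predictions
```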
For the fusion of all nine models, we employed the average of the predicted values (arithmetic mean). The predicted values are the continuous values, in either the development or test set, for each of the nine emotion categories. These predicted values are then used (as ranked data) to calculate the performance metric; the final score is the average over the nine emotion categories.
The performance of the models is measured using the Spearman correlation coefficient (\(\rho\)) [21]. Spearman's \(\rho\) is similar to the Pearson correlation coefficient but is computed on ranked data. This metric is chosen since the annotation of the dataset is ordinal (rank-based) and for consistency with the baseline model [13]. The Spearman correlation coefficient is calculated as follows:
\[\rho=\frac{\text{cov}(\text{R}(X),\text{R}(Y))}{\sigma_{\text{R}(X)}\sigma_{ \text{R}(Y)}}, \tag{1}\]
where \(\text{cov}(\text{R}(X),\text{R}(Y))\) denotes the covariance of the ranked data \(\text{R}(X)\) and \(\text{R}(Y)\), and \(\sigma_{\text{R}(X)}\) and \(\sigma_{\text{R}(Y)}\) denote the standard deviation of the ranked data \(\text{R}(X)\) and \(\text{R}(Y)\), respectively.
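The late fusion and scoring described above can be sketched as follows; scipy's `spearmanr` computes the rank correlation defined above per emotion category, and the final score is the mean over the nine categories. The toy data stand in for the per-model predictions and are not actual results.

```python
import numpy as np
from scipy.stats import spearmanr

def fused_score(model_predictions, y_true):
    """Late fusion by arithmetic mean of the per-model predictions, scored with
    the mean Spearman rho over the nine emotion categories."""
    fused = np.mean(model_predictions, axis=0)              # (n_samples, 9)
    rhos = [spearmanr(fused[:, k], y_true[:, k])[0]
            for k in range(y_true.shape[1])]
    return float(np.mean(rhos)), rhos

# toy example with three "models" predicting noisy versions of the truth
rng = np.random.default_rng(0)
y_true = rng.random((100, 9))
preds = np.stack([y_true + rng.normal(0.0, 0.3, y_true.shape) for _ in range(3)])
print(fused_score(preds, y_true)[0])
```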
A repository is created to ensure the reproducibility of the experiments 1. The repository contains the experiment settings, including the hyperparameter search space, and the results of the experiments. The requirements to run the experiments are also provided in the repository with the exact version of the libraries used at the time of the experiment.
Footnote 1: [https://github.com/bagustris/ComParE2023](https://github.com/bagustris/ComParE2023)
## V Experiment Results and Discussions
We divided our results into validation/development results (Table IV) and test results (Table V). In the development stage, we ensured that the result of ensemble learning ('Fusion all' in Table IV) is higher than the baseline results, from either a single model or late fusion. We then submitted our test-set predictions to the provider of the dataset [13] to obtain the performance on the test set.
Table IV shows our results in the development stage using the nine pre-trained models and a fusion of all these models. In single-model evaluations, pre-trained models (such as XLS-R 53 and the XLSR variants) achieve higher scores than conventional acoustic feature extractors like auDeep and ComParE. For instance, XLS-R 53 (the smallest multilingual model) achieved a Spearman score of 0.4328, while ComParE achieved 0.359. Among the fusion models, our fusion of all nine pre-trained models also outperforms the previous late fusion of wav2vec2, auDeep, DeepSpectrum, and ComParE.
Given the good results in the development stage, we evaluated the same models on the test set. Table V shows the results on the test set. The fusion of all models achieves the highest Spearman score of 0.537 (compared to the 0.476 of the late fusion from the previous study [13]). This result is also higher than the previous best result of 0.500 from wav2vec2 [18]. We conjecture that ensemble learning works by leveraging and pooling complementary information across different languages for multilingual SER.
Similar trends are observed between the test and development sets, with the performance on the test set slightly higher than on the development set. The only exception is the DeepSpectrum result, where the test result is 0.004 lower than on the development set. This trend indicates that the evaluated method (including the baseline methods that use a similar SVM classifier) generalizes to unseen data and suggests that the distribution of the test set is similar to that of the development set.
Finally, Table VI breaks down the average Spearman's \(\rho\) in the test set (\(\rho=0.537\)) into each emotion category. The result is from 'Fusion all' of nine pre-trained models on the test set. The result shows that the model performs best in the 'Calmness' emotion category (0.6061) and worst in the 'Interest' emotion category (0.4238). The result is similar to the previous study [13] where the 'Calmness' emotion category is the highest (0.559), and the 'Anger' emotion category is the lowest (0.428). The previous study's result was achieved with wav2vec 2.0 but with a different setup from this study (particularly on the scoring method).
## VI Conclusions
In this study, we evaluated nine pre-trained speech models and a fusion of all these nine models for emotion share recognition from speech. The idea was to collect multilingual models from different languages and apply the fusion of these models to multilingual SER data. A dataset from three countries with English and Spanish languages was selected. First, the fusion
of nine pre-trained models outperforms the previously reported best result of a single model (whose performance was itself higher than the late fusion of four methods). Second, the results of the single models in this study are also generally higher than those of the single models/methods in the previous study. Third, there is a trend in which the performance on the test set is slightly higher than on the development set. Future studies could explore the dataset further and improve the performance using more advanced methods with more recent speech embeddings.
## Acknowledgment
This paper is partly based on results obtained from a project, JPNP20006, commissioned by the New Energy and Industrial Technology Development Organization (NEDO), Japan.
2309.05061 | First constraints on compact binary environments from LIGO-Virgo data | The LIGO-Virgo analyses of signals from compact binary mergers observed so far have assumed isolated binary systems in a vacuum, neglecting the potential presence of astrophysical environments. We present here the first investigation of environmental effects on each of the events of GWTC-1 and two low-mass events from GWTC-2. We find no evidence for the presence of environmental effects. Most of the events decisively exclude the scenario of dynamical fragmentation of massive stars as their formation channel. GW170817 results in the most stringent upper bound on the medium density ($\lesssim 21\,\mathrm{g/cm^3}$). We find that environmental effects can substantially bias the recovered parameters in the vacuum model, even when these effects are not detectable. We forecast that the Einstein Telescope and B-DECIGO will be able to probe the environmental effects of accretion disks and superradiant boson clouds on compact binaries. | Giada Caneva Santoro, Soumen Roy, Rodrigo Vicente, Maria Haney, Ornella Juliana Piccinni, Walter Del Pozzo, Mario Martinez | 2023-09-10T16:05:21Z | http://arxiv.org/abs/2309.05061v3 | # First constraints on compact binary environments from LIGO-Virgo data
###### Abstract
The LIGO-Virgo analyses of signals from compact binary mergers observed so far have assumed isolated binary systems in vacuum, neglecting the potential presence of astrophysical environments. We present here the first investigation of environmental effects on the events of GWTC-1. We find no evidence for the presence of environmental effects. Most of the events decisively exclude the scenario of dynamical fragmentation of massive stars as their formation channel. GW170817 results in the most stringent upper bound on the medium density (\(\lesssim 21\)g/cm\({}^{3}\)). We find that environmental effects can substantially bias the recovered parameters in the vacuum model, even when these effects are not detectable. We forecast that the Einstein Telescope and B-DECIGO will be able to probe the environmental effects of accretion disks and superradiant boson clouds on compact binaries.
_Introduction._ In-depth analyses of gravitational-wave (GW) data have been routinely conducted by the LIGO-Virgo-KAGRA (LVK) collaboration, through parameter estimation, population studies, cosmology, and tests of general relativity (GR) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. These analyses relied on the assumption that the sources of GWs are in a _vacuum_ environment. However, there is a growing interest in exploring the potential effects of astrophysical environments on these observations, particularly with regard to binary black hole (BBH) formation in dense regions, such as star clusters [15, 16, 17, 18, 19, 20], active galactic nuclei (AGN) accretion disks [21, 22, 23], or through the dynamical fragmentation of very massive stars [24]. The physical processes that take place in environments that harbour black holes (BHs) significantly impact their formation, dynamics and evolution, and it is therefore crucial to characterise them. These effects are also relevant to multi-messenger astronomy, as electromagnetic counterparts are expected to accompany GWs for BBH mergers in non-vacuum environments [24, 25, 26, 27].
Most studies of environmental effects on GW signals (e.g., [28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]) have focused on sources relevant to the frequency bandwidth of the LISA experiment [48], like intermediate and extreme mass ratio inspirals. These works forecast that LISA will be sensitive to the imprints of astrophysical environments, potentially being able to identify the type of environment and reconstruct its model parameters [41]. The reason for the generalized focus on LISA sources is mainly twofold: (i) the long observation window of the inspiral stage, which allows for the accumulation of usually small environmental effects, and (ii) LISA's sensitivity to low frequencies, which are the most affected by environmental dephasing. LISA is expected to become operational only by 2037 [49] and so it is important to quantify the _current_ ability of LVK to "see" environments.
Here, for the first time, a systematic Bayesian analysis is performed in the framework of the existing LIGO-Virgo GWTC-1 data, testing for the presence of environmental effects, trying to constrain the environmental properties, and determining whether the inclusion of environmental effects in waveforms can bias the estimations of the GW source parameters. A previous study using LIGO-Virgo data focused on the first observed event GW150914 by comparing numerical relativity waveforms in vacuum versus environment [50], which is computationally challenging and out of scope for low-mass binaries. We perform, for the first time, a parameterized post-Newtonian (PN) test of environmental effects with LIGO-Virgo data. The test allows for modifications to the in-vacuum GW phase of the binary that could be induced by environments during the inspiral stage. Similar types of parameterized tests, considering waveform deformations at PN orders higher than the ones induced by environmental effects, are routinely performed by LVK to test GR [11, 12, 13]. The same framework can be used to detect non-GR signals [51]. However, these methods do not specifically test scenarios of a binary immersed in an astrophysical medium.
We use geometrized units with \(G=c=1\). For reference, a supermassive BH (SMBH) with mass \(M_{\rm SMBH}\) can exist in environments with densities as large as: \(\sim 10^{-8}(10^{5}M_{\odot}/M_{\rm SMBH})\,\)g/cm\({}^{3}\) for thick accretion disks
and \(\sim 0.1(10^{5}M_{\odot}/M_{\rm SMBH})^{\frac{7}{10}}\,{\rm g/cm^{3}}\) for thin ones [30], \(\sim 10^{-6}\,{\rm g/cm^{3}}\) for cold dark matter spikes [52; 53], and \(\sim 0.1(10^{5}M_{\odot}/M_{\rm SMBH})^{2}\,{\rm g/cm^{3}}\) for superradiant clouds of light bosons [54]. Compact binaries formed through the dynamical fragmentation of a massive star [24] are expected to subsequently merge in a gaseous environment with density \(\gtrsim 10^{7}\,{\rm g/cm^{3}}\)[50].
_Environmental effects._ When in astrophysical environments, the phase evolution of compact binaries is expected to be slightly modified with respect to the vacuum "chirp" by effects like accretion [55; 56; 57] or dynamical friction (DF) [58; 59; 60; 61; 62] (for details, see Ref. [63]). While the specifics of these effects and their relative impact on waveforms depend on the particular environment and binary source and can only be completely captured by numerical simulations, there are generic (agnostic) features that can be well captured by semi-analytic expressions.
While environmental effects may lead to an increase in the BBH eccentricity [64; 23], for small enough densities the GW radiation reaction dominates leading to the circularization of these binaries [65]. For quasi-circular orbits DF from the gravitational wake in the medium results in an energy loss
\[\dot{E}_{\rm DF}\approx\frac{4\pi\rho M^{2}\mathcal{I}(v,\eta)}{v}\Big{(} \frac{1-3\eta}{\eta}\Big{)}, \tag{1}\]
with \(v\coloneqq(\pi M_{z}f)^{\frac{1}{3}}\), where \(f/2\) denotes the orbital frequency and we define \(M_{z}\coloneqq(1+z)M\), with \(M\) the binary's total mass (in the source frame) and \(z\) the cosmological redshift to the source. The symmetric mass ratio is defined as \(\eta\coloneqq m_{1}m_{2}/M^{2}\), \(\rho\) is the (local) average mass density of the environment, and \(\mathcal{I}\) is a function of \(v\) and \(\eta\) which depends on the type of environment; sources in the LVK mass range in a gaseous media with asymptotic speed of sound \(c_{\rm s}\) have \(\mathcal{I}\sim O\big{(}c_{\rm s}/10^{-3}\big{)}\), and so we treat \(\mathcal{I}\) as a constant in our analysis. Analytic expressions for \(\mathcal{I}\) can be found, e.g., in [66] for gaseous media, or [67; 68; 69; 70; 71] for (ultra-light) dark matter. In this work we assume a static homogeneous density profile, neglecting the feedback of the BBH on the environment (see, e.g., [72]).
Accretion causes the masses of the binary components to change through the inspiral, which also affects the evolution of the orbital phase [73]. In collisional media, numerical simulations (e.g., [74; 75; 76; 77; 78]) show that binary accretion can be well approximated by Bondi-Hoyle-Lyttleton accretion (BHLA), with the usual fudge parameter \(\lambda\sim O(1)\)[79]. In media constituted by particles with larger mean free path (like particle dark matter overdensities [80; 30], or plasmas around BHs [81]), accretion is less efficient and it is better described by collisionless accretion (CA) [82; 83].
The environment can alter the evolution of the GW phase also through other effects (like the medium's gravitational potential [84; 80; 85], acceleration of the binary's center of mass [86; 33], planet-like migration [28; 29; 39], and others), but for LVK binaries the effects imparted by DF and accretion are expected to be the most important [30; 34]. We consider diluted environments causing only small corrections to the dominant "chirp" due to vacuum radiation-reaction. As shown in _Supplemental Material_ (see also Ref. [30]), at _linear order_ in \(\rho M^{2}\), the GW phase in environments differs from the vacuum one by additive terms \((3/128\eta)\widehat{\delta\Phi}_{k}v^{k-5}\), where \(k=-9\) (\(-4.5\)PN) for CA and \(k=-11\) (\(-5.5\)PN) for DF and BHLA, with the coefficients given in terms of physical parameters in Tab. 1. The same \(-5.5\)PN effect on the GW phase has also been observed in numerical relativity simulations of BBHs in gaseous media (see Fig. 2 in Ref. [50]). We have also checked that, up to a \(O(1)\) factor, the coefficients \(\beta_{k}\) of Tab. 1 can reproduce the environment effect on the chirp observed in those simulations.
_Analysis Setup._ We consider an agnostic dephasing parameter \(\delta\Phi_{k}\), such that, in frequency domain, the GW phase with a specific environmental effect is
\[\phi^{\rm env}=\phi^{\rm vac}+\frac{3}{128\eta}\delta\Phi_{k}v^{k-5}, \tag{2}\]
where \(k=-9\) for CA, and \(k=-11\) for BHLA or DF. To generate the waveforms with the environmental correction, we followed the model-agnostic framework of parameterized tests of GR [87; 88; 89]. Throughout this work, we use the phenomenological inspiral-merger-ringdown models IMRPhenomPv2 for binary black hole (BBH) [90; 91; 92; 93] and IMRPhenomPv2_NRTidalv2 for binary neutron star (BNS) systems [94], which incorporate spin-induced precession effects.
We carry out a Bayesian parameter estimation analysis to measure the dephasing parameters \(\delta\Phi_{k}\) and quantify the evidence for the presence of an environment. We compute the Bayes factor \(\mathcal{B}^{\rm env}_{\rm vac}\) to compare the two hypotheses: (i) the data \(d\) is described by the environmental model \(\mathcal{H}_{\rm env}\) that allows nonzero values of the dephasing parameter \(\delta\Phi_{k}\), versus (ii) \(d\) is described by the vacuum model \(\mathcal{H}_{\rm vac}\) where no additional parameters are required.
\begin{table}
\begin{tabular}{c c c} \hline Effect & \(\beta_{k}\) & \(k\) \\ \hline CA & \(\frac{125\pi(1-3\eta)}{357\eta^{2}}\) & \(-9\) \\ BHLA & \(\frac{125\pi[1-5\eta(1-\eta)]}{1824\eta^{4}}\) & \(-11\) \\ DF & \(\frac{25\pi(1-3\eta)}{304\eta^{3}}\) & \(-11\) \\ \hline \end{tabular}
\end{table}
Table 1: Dependence of environmental dephasing coefficients on the physical parameters (see _Supplemental Material_). We write \(\widehat{\delta\Phi}_{k}=-\beta_{k}\tilde{\rho}M^{2}\), with \(\tilde{\rho}\) denoting \(\rho\) for CA, \(\lambda\rho\) for BHLA, and \(\mathcal{I}\rho\) for DF. The effects enter at \(k/2\)-th PN order in the GW phase.
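For concreteness, the short sketch below evaluates the coefficients of Tab. 1 and the resulting dephasing \(\widehat{\delta\Phi}_{k}=-\beta_{k}\tilde{\rho}M^{2}\). It only transcribes Tab. 1 and Eq. (2); the unit conversion (taking \(\tilde{\rho}M^{2}\) to be dimensionless in geometrized units, \(G=c=1\)) and the GW170817-like example numbers are our own illustrative assumptions.

```python
import numpy as np

G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30     # SI constants

def beta_k(eta, effect):
    """Dephasing coefficients of Tab. 1 (dimensionless functions of eta)."""
    if effect == "CA":    # collisionless accretion, k = -9
        return 125 * np.pi * (1 - 3 * eta) / (357 * eta**2)
    if effect == "BHLA":  # Bondi-Hoyle-Lyttleton accretion, k = -11
        return 125 * np.pi * (1 - 5 * eta * (1 - eta)) / (1824 * eta**4)
    if effect == "DF":    # dynamical friction, k = -11
        return 25 * np.pi * (1 - 3 * eta) / (304 * eta**3)
    raise ValueError(effect)

def dephasing(rho_cgs, m1_msun, m2_msun, effect):
    """delta-Phi_k = -beta_k * rho * M^2, with rho and M in geometrized units."""
    M = (m1_msun + m2_msun) * M_SUN * G / c**2   # total mass in metres
    eta = m1_msun * m2_msun / (m1_msun + m2_msun)**2
    rho = rho_cgs * 1e3 * G / c**2               # g/cm^3 -> 1/m^2
    return -beta_k(eta, effect) * rho * M**2

# Illustrative only: GW170817-like component masses in a 10 g/cm^3 medium
for eff in ("CA", "BHLA", "DF"):
    print(eff, dephasing(10.0, 1.46, 1.27, eff))
```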
We estimate the magnitude of the environmental effects by computing the marginalized posterior probability distribution of the \(\delta\Phi_{k}\) parameter within \(\mathcal{H}_{\text{env}}\). The posterior distribution is obtained using Bayes' rule for a given prior distribution of the model parameters. The hypotheses share the same prior distribution for their common parameters, and we consider an (uninformative) zero-centred uniform prior distribution for \(\delta\Phi_{k}\) in \(\mathcal{H}_{\text{env}}\). As discussed in the _Supplemental Material_, the mismatch (\(\mathcal{MM}\)) between the vacuum waveform (\(h_{\text{vac}}\)) and the environment waveform (\(h_{\text{env}}\)) shows that the sensitivity to the dephasing parameters is significantly higher for low-mass systems than for high-mass ones--a suitable prior range of \(\delta\Phi_{k}\) is therefore crucial to ensure that the sampler converges to the global maximum. Throughout this work, we choose to vary a single phase parameter at a time to address specific environmental effects leading to phase deformation.
_Density constraints from GWTC-1._ We analyze individually each of the eleven events of GWTC-1 to measure the evidence for environmental effects in the data [95, 96, 97] from the LIGO [98] and Virgo [99] detectors. Table 2 shows the values of \(\log_{10}\mathcal{B}_{\text{vac}}^{\text{env}}\) for environmental effects with \(k=\{-9,-11\}\) relative to the vacuum model. We find negative values of the log Bayes factor for all events, except for GW151012. Since this event was observed with a low statistical significance [100, 1] and the Bayes factor that we found is relatively small (\(\log_{10}\mathcal{B}_{\text{vac}}^{\text{env}}<1\)), this result is indecisive. Therefore, we do not find any support for environmental effects in the data of GWTC-1. On the other hand, we cannot derive any evidence for the vacuum model either. Environmental corrections are more effective in the early inspiral phase of the coalescence, where LIGO-like detectors might be less sensitive. Even if a binary evolves in a medium, the detection of the environmental effect might be challenging because of noise domination. Nevertheless, we can use the results of this analysis to estimate upper limits on the environment density using the posterior samples of \(\delta\Phi_{k}\), together with those of the binary masses.
The environmental effects are responsible for negative values of \(\delta\Phi_{k}\), but the sampler is not limited to those as we also intend to conduct a model-independent (agnostic) vacuum GR test (see _Supplemental Material_). To put upper bounds on the density, we select only the samples with negative \(\delta\Phi_{k}\) values, and calculate the density for the individual environmental effects using \(\widehat{\delta\Phi}_{k}=\delta\Phi_{k}\) and the expressions in Tab. 1 [101]. Figure 1 shows the 90% upper bound on the density posterior obtained considering specific environmental effects. Our results for the events GW150914, GW151226, GW170104, GW170608, GW170814, and GW170817 show the density constraint \(\tilde{\rho}\lesssim(21\text{-}2\times 10^{6})\,\text{g/cm}^{3}\), which decisively rules out the binary formation scenario of dynamical fragmentation [50, 24]. The remaining events have a low inspiral SNR and fewer inspiral cycles in band, which leads to poor density constraints that are inconclusive for that scenario. We find the notable bound \(\tilde{\rho}\lesssim 21\,\text{g/cm}^{3}\) from GW170817, which is the most stringent constraint from GWTC-1 (roughly the density of gold at room temperature on Earth).
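Schematically, the post-processing just described amounts to the following sketch (the array names `dphi`, `m1`, `m2` are placeholders for posterior samples, masses are assumed to be expressed in geometrized units, and `beta_fn` is one of the Tab. 1 coefficient functions):

```python
import numpy as np

def density_upper_bound(dphi, m1, m2, beta_fn, quantile=0.90):
    """90% upper bound on rho-tilde from posterior samples of delta-Phi_k."""
    eta = m1 * m2 / (m1 + m2)**2
    keep = dphi < 0                     # keep only samples compatible with an environment
    rho = -dphi[keep] / (beta_fn(eta[keep]) * ((m1 + m2)[keep])**2)
    return np.quantile(rho, quantile)   # density in geometrized units (1/length^2)
```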
_Forecasts for future observations._ To understand the detectability of environmental effects with future detectors, we carry out a set of injection analyses mimicking events like GW150914, GW170608, and GW170817 using the design sensitivity of advanced LIGO (ALIGO) [102], and perform a mismatch-based prediction for Einstein
Figure 1: 90% upper bounds on the environmental density obtained considering the effect of CA (blue triangle), BHLA (red cross), and DF (green diamond).
Telescope (ET) [103] and B-DECIGO [104; 105], with the corresponding lower cutoff frequencies set to \(15\,\mathrm{Hz}\), \(5\,\mathrm{Hz}\), and \(0.1\,\mathrm{Hz}\), respectively. We select the maximum likelihood sample from the standard vacuum model analysis on real data from the Bilby GWTC-1 rerun [106; 107]. Then, we add a set of \(\delta\Phi_{k}\) values to mimic the individual environmental effects with a monotonically increasing density parameter.
For ALIGO, we conduct a Bayesian analysis with a zero-noise realization, considering a fixed injection SNR (\(\rho_{\mathrm{inj}}\)) of 25, and compare the environment versus vacuum models. Figure 2 shows the values of \(\log_{10}\mathcal{B}_{\mathrm{vac}}^{\mathrm{env}}\) for a range of medium densities \(0.1\)-\(10^{8}\,\mathrm{g/cm^{3}}\). The effect of DF in GW170817-, GW170608-, and GW150914-like events is detectable when \(\tilde{\rho}\gtrsim(10,\,4\times 10^{4},\,10^{6})\,\mathrm{g/cm^{3}}\), respectively. The effect of CA is detectable only for media \(\sim 10\)-\(100\) times denser. When the environment density is set to half of the (threshold) detectable value, it can still be measured even though it is not formally detectable. This is because, in comparing the environment versus vacuum hypotheses, the environment hypothesis is penalized by an Occam's razor factor due to the extra dephasing parameter in \(\mathcal{H}_{\mathrm{env}}\). That also explains why almost all \(\log_{10}\mathcal{B}_{\mathrm{vac}}^{\mathrm{env}}\) values in Tab. 2 are negative. Figure 3 shows that, for the case of the non-vacuum GW170817-like injection, the vacuum model analysis leads to an overestimation of the chirp mass and effective spin, while it underestimates the mass ratio. A recent study [108] also demonstrated the detectability of environmental effects and the biases on the inferred parameters of the binary through the study of the (multi-modal) ringdown phase. The use of environmental waveform models in future parameter estimation analyses could potentially mitigate this bias.
To study the observability of environments with future detectors like ET and B-DECIGO, we use an alternative approach. The Bayes factor between two competitive models can be approximated as \(\ln\mathcal{B}_{\mathrm{vac}}^{\mathrm{env}}\approx\rho_{\mathrm{inj}}^{2}(1-\mathcal{F}\mathcal{F}^{2})/2\) when \(\mathcal{H}_{\mathrm{env}}\) is true [109], where \(\mathcal{F}\mathcal{F}\) refers to the fitting factor of the injected waveform (\(h_{\mathrm{inj}}\in\mathcal{H}_{\mathrm{env}}\)) with the vacuum waveform, such that the match between \(h_{\mathrm{vac}}\) and \(h_{\mathrm{inj}}\) is maximized over all the parameters in \(\mathcal{H}_{\mathrm{vac}}\). We assume \(\mathcal{F}\mathcal{F}\approx 1-\mathcal{M}\mathcal{M}\), ignoring the correlation between \(\delta\Phi_{k}\) and the physical parameters in \(\mathcal{H}_{\mathrm{vac}}\). Finally, \(\mathcal{M}\mathcal{M}\) is a function only of the dephasing parameter \(\delta\Phi_{k}\), which in turn is determined by the environment density and the binary parameters through the expressions in Tab. 1. Figure 4 shows the curves of injected SNR versus environment density necessary to achieve \(\log_{10}\mathcal{B}_{\mathrm{vac}}^{\mathrm{env}}=3\) for events like GW170817 and GW170608. Our results suggest that ET will be sensitive to the effect of DF in a GW170817-like event for an environment with \(\tilde{\rho}\gtrsim 10^{-3}\,\mathrm{g/cm^{3}}\) and to CA effects in a medium \(\sim 10^{3}\) times denser. As might be anticipated, due to its better low-frequency coverage, B-DECIGO will be sensitive to much lower densities; DF effects in a GW170817-like event will be detectable for
Figure 3: Posterior distribution of the chirp mass (\(\mathcal{M}_{c}\)), mass ratio (\(q\)), and effective spin (\(\chi_{\mathrm{eff}}\)) using the models \(\mathcal{H}_{\mathrm{vac}}\) and \(\mathcal{H}_{\mathrm{env}}\) with nonzero \(\delta\Phi_{-11}\). We injected GW170817-like waveforms deformed by the effect of DF for a set of environment densities (shown in bottom \(x\)-axis), using the ALIGO sensitivity curve. The top \(x\)-axis shows the logarithmic Bayes factor values for \(\mathcal{H}_{\mathrm{vac}}\) versus \(\mathcal{H}_{\mathrm{env}}\). Small red marks indicate the injection parameter values.
Figure 2: Logarithmic Bayes factor as function of the medium density. This shows the detectability of different environmental effects using the ALIGO sensitivity curve, for GW150914, GW170608, and GW170817-like injections in zero noise realization. We omit the BHLA curve since it follows very closely the DF one.
an environment with \(\tilde{\rho}\gtrsim 10^{-12}\,\mathrm{g/cm^{3}}\) and CA effects for a density \(\sim 10^{4}\) times larger.
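The mismatch-based forecast above can be sketched in a few lines: given the mismatch \(\mathcal{MM}\) between the injected and the best-fitting vacuum waveform (whose computation, requiring a detector noise curve and a waveform model, is not shown), the SNR needed to reach a target Bayes factor follows from \(\ln\mathcal{B}_{\rm vac}^{\rm env}\approx\rho_{\rm inj}^{2}(1-\mathcal{FF}^{2})/2\) with \(\mathcal{FF}\approx 1-\mathcal{MM}\). This is our own illustration of that estimate, not the production analysis code.

```python
import numpy as np

def required_snr(mismatch, log10_bayes=3.0):
    """SNR at which ln B ~ rho^2 (1 - FF^2) / 2 reaches the target, with FF ~ 1 - MM."""
    ff = 1.0 - mismatch
    ln_bayes = log10_bayes * np.log(10.0)
    return np.sqrt(2.0 * ln_bayes / (1.0 - ff**2))

print(required_snr(1e-4))   # a mismatch of 1e-4 requires an SNR of roughly 260
```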
A forecast based on the Fisher information matrix was previously performed in Ref. [34]. Overall, our results are more pessimistic (by a few orders of magnitude) than the Fisher analysis: we find a threshold density for detectability larger by roughly 2 orders of magnitude for ALIGO and ET; the difference is larger for DECIGO, because we consider the 2030s B-DECIGO.
_Discussion and Conclusions._ We have designed a model-agnostic Bayesian analysis for detecting the existence of environments surrounding compact binaries. For that, we modelled the waveform considering environmental corrections at \(-4.5\)PN and \(-5.5\)PN associated with the leading effects of accretion and DF at linear order in \(\epsilon\coloneqq\tilde{\rho}M_{z}^{2}\ll 1\). We checked that higher order terms in \(\epsilon\), which affect the phasing at more negative PN orders, are indeed sub-leading and do not alter our results for the systems we studied (see _Supplemental Material_). We analyzed individually each event in GWTC-1 and found _no_ evidence for the presence of environments. In addition, we used our Bayesian analysis to derive upper bounds on the densities of GWTC-1 binary surroundings. Our results for GW150914, GW151226, GW170104, GW170608, GW170814, and GW170817 decisively rule out the scenario of dynamical fragmentation of massive stars as a possible formation channel for these events [24]. Our results for GW150914 are also in excellent agreement with the findings of Ref. [50], which used numerical relativity waveforms in gaseous environments.
Even if a compact binary is found evolving in a (low-density) medium, the detection of environmental effects with LIGO-like detectors is challenging, since such detectors are not very sensitive to the early inspiral stage, and the extra dephasing parameter incurs the Occam's razor penalty, resulting in less evidence for the environment model. We found that a medium density can actually be measurable, even when its value is half of the threshold for detectability (\(\log_{10}\mathcal{B}_{\mathrm{vac}}^{\mathrm{env}}\gtrsim 0\)). We also show that environmental effects can substantially bias the recovered parameters of the vacuum model, even when they are not detectable. Finally, our zero-noise injection analyses indicate that LIGO-like detectors will be capable of probing environments (at least) as dilute as \(\tilde{\rho}\sim 10\,\mathrm{g/cm^{3}}\)--roughly the density of lead at room temperature on Earth. While they may in the future definitively exclude the dynamical fragmentation scenario, environments like accretion disks and dark matter overdensities seem to be out of reach.
We have also analysed the prospects for future (2030s) detectors like ET and B-DECIGO. We demonstrated that ET will be sensitive to the effects of DF and BHLA for media as dilute as \(\tilde{\rho}\sim 10^{-3}\,\mathrm{g/cm^{3}}\), implying that it may be capable of detecting environmental effects on a compact binary merging within dense (thin) accretion disks or superradiant clouds hosted by a SMBH. Our results are even more promising for B-DECIGO, indicating that it will be sensitive to DF and BHLA effects from environments as dilute as \(\tilde{\rho}\sim 10^{-12}\,\mathrm{g/cm^{3}}\), which covers, e.g., most accretion disk densities, superradiant clouds of ultra-light bosons, or cold dark matter spikes.
In this work we performed a model-agnostic analysis, since our goal was to assess the overall capability of LIGO-like and near-future 2030s detectors in probing environments and to derive the first (order of magnitude) constraints on environments from current observations. This also justifies our (unphysical) choice of considering particular environmental effects separately. In future work, we plan to focus on specific environments (like accretion disks and superradiant clouds), considering simultaneously several environmental effects entering at different PN orders, and to study the distinguishability between different environments with compact binaries (see, e.g., the study of Ref. [41] for EMRIs). We also plan to extend our modelling to asymmetric binaries. As the events in GWTC-1 are nearly equal-mass binaries, we analyzed them using a quadrupolar waveform model. To analyze the events in GWTC-2/3, which contain many asymmetric binaries, we need to include the contribution of higher multipoles, which become more relevant with increasing mass ratio. In a follow-up work, we will focus on building an environmental waveform model including higher harmonics.
We gratefully acknowledge comments and feedback from Nathan Johnson-McDaniel and Stefano Rinaldi.
Figure 4: Curves of required SNR for a given density value to achieve \(\log_{10}\mathcal{B}_{\mathrm{vac}}^{\mathrm{env}}=3\) for a specific environmental effect, in the configuration of the third-generation detector ET (cyan shade) and the Japanese space-detector B-DECIGO (gray shade). The dots represent the expected SNR if we replace the LIGO-Hanford detector with those future detectors. We omit the BHLA curve since it follows very closely the DF one.
We are grateful to Marc Andres-Carcasona for useful discussions throughout the preparation of the paper. This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gwosc.org), a service of the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation, as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. KAGRA is supported by Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan Society for the Promotion of Science (JSPS) in Japan; National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea; Academia Sinica (AS) and National Science and Technology Council (NSTC) in Taiwan. The authors are grateful for computational resources provided by the LIGO Laboratory and supported by National Science Foundation Grants PHY-0757058 and PHY-0823459. S.R. and M.H. were supported by the research program of the Netherlands Organization for Scientific Research (NWO). R.V. is supported by grant no. FJC2021-046551-I funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTRR. R.V. also acknowledges support by grant no. CERN/FIS-PAR/0023/2019. This paper has been given LIGO DCC number LIGO-P2300301. This work is partially supported by the Spanish MCIN/AEI/ 10.13039/501100011033 under the grants SEV-2016-0588, PGC2018-101858-B-I00, and PID2020-113701GB-I00 some of which include ERDF funds from the European Union. IFAE is partially funded by the CERCA program of the Generalitat de Catalunya. To generate the waveforms, we have used the LALSimulation package of the LIGO Algorithms Library (LAL) software suite [110]. To perform the Bayesian parameter estimation analyses, we have used the nested sampling algorithm [111, 112, 113] implemented in the LALinference[114] and Bilby[115] packages to evaluate the integral over the model parameter space and calculate the Bayes factor.
|
2309.13603 | The Eshelby problem in amorphous solids | The ``Eshelby problem" refers to the response of a 2-dimensional elastic
sheet to cutting away a circle, deforming it into an ellipse, and pushing it
back. The resulting response is dominated by the so-called ``Eshelby Kernel"
which was derived for purely elastic (infinite) material, but has been employed
extensively to model the redistribution of stress after plastic events in
amorphous solids with finite boundaries. Here we discuss and solve the Eshelby
problem directly for amorphous solids, taking into account possible screening
effects and realistic boundary conditions. We find major modifications compared
to the classical Eshelby solution. These modifications are needed for modeling
correctly the spatial responses to plastic events in amorphous solids. | H. George E. Hentschel, Avanish Kumar, Itamar Procaccia, Saikat Roy | 2023-09-24T10:33:48Z | http://arxiv.org/abs/2309.13603v1 | # The Eshelby problem in amorphous solids
###### Abstract
The "Eshelby problem" refers to the response of a 2-dimensional elastic sheet to cutting away a circle, deforming it into an ellipse, and pushing it back. The resulting response is dominated by the so-called "Eshelby Kernel" which was derived for purely elastic (infinite) material, but has been employed extensively to model the redistribution of stress after plastic events in amorphous solids with finite boundaries. Here we discuss and solve the Eshelby problem directly for amorphous solids, taking into account possible screening effects and realistic boundary conditions. We find major modifications compared to the classical Eshelby solution. These modifications are needed for correctly modeling the spatial responses to plastic events in amorphous solids.
**Introduction:** The "Eshelby problem" consists of computing the displacement field resulting from cutting out a circle from an elastic sheet, deforming it into an ellipse and pushing it back [1]. Surprisingly, it turned out that this seemingly artificial problem is intimately connected to the physics of plastic events in strained amorphous solids, cf. Fig. 7 in [2]. Similarly to electrostatic theory that conserves charges (monopoles), but allows dipoles (dielectrics), elasticity theory conserves monopoles _and_ dipoles, but allows quadrupoles. Thus the "cheapest" plastic events in amorphous solids that can release stress locally are quadrupolar in nature, and this agrees with the geometry of the Eshelby problem. Accordingly, the "Eshelby problem" has become popular and a frequently employed theory to discuss the redistribution of stress after plastic events. In particular, the Eshelby kernel was often used in the context of "elasto-plastic" models which purport to describe the mechanical response of amorphous solids to external strains, up to mechanical yield by shear banding [3; 4; 5].
In this Letter we stress that the application of the Eshelby theory to amorphous solids, in which plastic events appear at any amount of strain, is fraught with difficulties. To underline this fact, we present here a new analytic solution of the Eshelby problem in an amorphous solid with plastic events and realistic boundary conditions in a finite domain. We show that the resulting displacement field changes qualitatively from the Eshelby solution. The difference in physics between amorphous materials and perfect elastic sheets dictates a reassessment of the Eshelby kernel that is used in studying the response to external strains. The more ductile the material is, the more severe is the deviation from the classical Eshelby solution.
**Definition of the problem:** We consider the Eshelby problem in circular geometry, see Fig. 1. Initially the amorphous material is confined between an inner circular cavity of radius \(r_{\rm in}\) and an outer circle of radius \(r_{\rm out}\). The inner circle is deformed to an ellipse of the same area, with major semi-axis \(a\) and minor semi-axis \(b\), such that \(ab=r_{\rm in}^{2}\). The boundary of the ellipse is now
\[\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}=1. \tag{1}\]
Defining \(\delta\equiv a/r_{\rm in}\), the boundary of the ellipse \(r(\theta)\) is traced by
\[r(\theta)=\frac{r_{\rm in}}{\sqrt{\cos^{2}(\theta)/\delta^{2}+\delta^{2}\sin^{ 2}\theta}}\, \tag{2}\]
where \(\theta=\arctan(y/x)\). We are interested in the displacement field that responds to the change from circle to ellipse, with radial component \(d_{r}(r,\theta)\hat{r}\) and transverse component \(d_{\theta}(r,\theta)\hat{\theta}\), where \(\hat{r}\) and \(\hat{\theta}\) are unit vectors in the radial and the transverse directions.
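As a quick numerical check of the parametrization (2) (a sketch with an arbitrary choice of \(r_{\rm in}\) and \(\delta\)), the area enclosed by \(r(\theta)\) equals \(\pi r_{\rm in}^{2}=\pi ab\) for any aspect ratio, confirming that the deformation is area preserving.

```python
import numpy as np

r_in, delta = 1.0, 1.3   # cavity radius and aspect ratio a / r_in (illustrative values)
theta = np.linspace(0.0, 2.0 * np.pi, 200001)
r = r_in / np.sqrt(np.cos(theta)**2 / delta**2 + delta**2 * np.sin(theta)**2)

# enclosed area in polar coordinates: (1/2) * integral of r^2 dtheta
area = 0.5 * np.sum(r[:-1]**2) * (theta[1] - theta[0])
print(area, np.pi * r_in**2)   # both ~ pi: the ellipse has the same area as the circle
```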
**Equations to be solved:** In a purely elastic sheet, the displacement field that arises in response to a pertur
Figure 1: The geometry used: the amorphous solid is contained between the outer circle of radius \(r_{\rm out}\) and an inner circle of radius \(r_{\rm in}\), which is then distorted to an ellipse of the same area. We are interested in the displacement field as a result of this distortion.
2309.05075 | Secure Set-Based State Estimation for Linear Systems under Adversarial
Attacks on Sensors | Set-based state estimation plays a vital role in the safety verification of
dynamical systems, which becomes significantly challenging when the system's
sensors are susceptible to cyber-attacks. Existing methods often impose
limitations on the attacker's capabilities, restricting the number of attacked
sensors to be strictly less than half of the total number of sensors. This
paper proposes a Secure Set-Based State Estimation (S3E) algorithm that
addresses this limitation. The S3E algorithm guarantees that the true system
state is contained within the estimated set, provided the initialization set
encompasses the true initial state and the system is redundantly observable
from the set of uncompromised sensors. The algorithm gives the estimated set as
a collection of constrained zonotopes, which can be employed as robust
certificates for verifying whether the system adheres to safety constraints.
Furthermore, we demonstrate that the estimated set remains unaffected by attack
signals of sufficiently large magnitude and also establish sufficient conditions for
attack detection, identification, and filtering. This compels the attacker to
inject only stealthy signals of small magnitude to evade detection, thus
preserving the accuracy of the estimated set. When a few number of sensors
(less than half) can be compromised, we prove that the estimated set remains
bounded by a contracting set that converges to a ball whose radius is solely
determined by the noise magnitude and is independent of the attack signals. To
address the computational complexity of the algorithm, we offer several
strategies for complexity-performance trade-offs. The efficacy of the proposed
algorithm is illustrated through its application to a three-story building
model. | M. Umar B. Niazi, Michelle S. Chong, Amr Alanwar, Karl H. Johansson | 2023-09-10T16:48:50Z | http://arxiv.org/abs/2309.05075v2 | # Secure Set-Based State Estimation for Linear Systems under Adversarial Attacks on Sensors
###### Abstract
When a strategic adversary can attack multiple sensors of a system and freely choose a different set of sensors at different times, how can we ensure that the state estimate remains uncorrupted by the attacker? The existing literature addressing this problem mandates that the adversary can only corrupt less than half of the total number of sensors. This limitation is fundamental to all point-based secure state estimators because of their dependence on algorithms that rely on majority voting among sensors. However, in reality, an adversary with ample resources may not be limited to attacking less than half of the total number of sensors. This paper avoids the above-mentioned fundamental limitation by proposing a set-based approach that allows attacks on all but one sensor at any given time. We guarantee that the true state is always contained in the estimated set, which is represented by a collection of constrained zonotopes, provided that the system is bounded-input-bounded-state stable and redundantly observable via every combination of sensor subsets with size equal to the number of uncompromised sensors. Additionally, we show that the estimated set is secure and stable irrespective of the attack signals if the process and measurement noises are bounded. To detect the set of attacked sensors at each time, we propose a simple attack detection technique. However, we acknowledge that intelligently designed stealthy attacks may not be detected and, in the worst-case scenario, could even result in exponential growth in the algorithm's complexity. We alleviate this shortcoming by presenting a range of strategies that offer different levels of trade-offs between estimation performance and complexity. To illustrate the efficacy of our approach, we apply it to a vertically interconnected mechanical system that models a three-story building structure. Our results demonstrate that the proposed set-based method provides a robust and secure state estimation method that can handle a greater number of attacked sensors than existing point-based estimators.
performance. However, most methods proposed in the literature are point-based, i.e., the estimated state at any given time is a point in the state space. The estimation error bounds of point-based methods are not precise because of their dependence on comparison functions, which are known to be conservative. Moreover, the reliance of point-based methods on majority voting-type algorithms presents a fundamental limitation that only less than half the total number of sensors are allowed to be attacked at any given time. However, an adversary with ample resources would be capable of attacking more than half the number of sensors.
### Related literature
To overcome the limitations of point-based methods and obtain tighter robust guarantees, the set-based zonotopic filtering paradigm [19, 14, 19, 8, 4, 7, 31] is promising. It has found many real-world applications, including fault diagnosis in industrial systems [9], underwater robotics [17], vehicle localization [10], and leakage detection in water distribution networks [30]. Moreover, in safety-critical applications, guaranteed state inclusion in a bounded set is crucial to provably avoid unsafe regions in the state space. All these issues further motivate the need for set-based state estimation techniques that provide a set of all possible states under unknown disturbances and measurement errors belonging to known bounded sets.
Different from zonotopic estimators, other set-based estimators include interval observers [29, 37], which estimate a box at every time instant, guaranteeing the true state's inclusion. However, a box fails to capture the inter-dependencies between the state variables and is less accurate. In contrast, zonotopic filters are shown to not only provide an accurate set-based estimate but are also computationally efficient.
Despite the growing importance of secure estimation in the presence of adversarial attacks, the existing set-based methods have several shortcomings, including restrictive assumptions, limited robustness to stealthy attacks, and reliance on specific attack strategies. The reachability-based approach of [33] requires the full state vector to be measured by any subset of sensors with cardinality equal to the number of safe sensors. This is a highly restrictive assumption, and without it, the guarantees of estimation accuracy provided in [33] become excessively conservative. Recent works [21, 25, 11] employ standard interval-based or zonotopic filters by identifying and discarding the sensors that have been corrupted. However, this approach fails to exclude stealthy attacks on the sensors, which can easily evade the proposed attack detection procedure. As a result, stealthy attacks can significantly affect estimation accuracy by constantly injecting small signals in the sensor measurements, which can accumulate over time without being detected. Although this issue is addressed by [22, 23] and [35, 39, 40, 24], the proposed methods are either restricted to a particular attack strategy or rely on the assumption that the attack signal is bounded.
### Our Contribution
In this paper, we propose a new set-based state estimation method that neither requires a full state vector to be measured by any subset of sensors nor allows stealthy attacks to corrupt the estimation performance significantly. We also do not assume the boundedness of the attack signals or that the attacker resorts to a particular attack strategy. Moreover, to address the fundamental limitation of point-based secure estimators that strictly less than half the number of sensors can be attacked at any given time, we allow the attacker to corrupt all but one sensor under the assumption that the system remains observable from the remaining attack-free sensors. Subject to these assumptions, we present a zonotope-based state estimation algorithm for linear systems under time-varying sensor attacks and show that the estimated set is guaranteed to contain the true state.
We would like to point out that although the algorithm presented in this paper can also handle sensor faults, it was developed with _adversarial attacks_ in mind. Attacks differ from faults in that an attack is an intelligently designed strategy to inflict a maximal negative impact on the system and/or evade detection, or in other words, is stealthy. For example, an attacker may inject small signals to corrupt the sensor measurements, which cannot be detected at any single time instant but whose effect can accumulate over a long time horizon. Another example is the so-called zero dynamics attack [32], where a specially designed sensor attack signal based on the unstable zero dynamics of the plant can grow unbounded while the plant's state is driven away from the attack-free trajectory. Such an attack is directed close to the output null space such that the sensor measurement is close to zero (modulo noise). These intelligently designed attacks can cause conventional state estimation algorithms to provide inaccurate state estimates, where the estimation accuracy is dependent on the attack signals. We eliminate the undesirable effect of sensor attacks with our proposed secure set-based estimation algorithm.
Our algorithm operates in a series of steps at every time instant. Firstly, it calculates the time update by utilizing the model and the bounds on the process noise. Subsequently, subsets of state-space consistent with the sensor measurements are computed. These subsets, known as _measurement consistent sets_, are determined based on the bounds of the measurement noise. Essentially, the measurement consistent sets correspond to the possible states in the state space that could have generated the obtained sensor measurements, given that the exact realization of measurement noise can be anywhere within
the specified bounds described by a zonotope. In practical scenarios, the system might not be observable from every sensor, and an attacker could compromise a subset of sensors. To account for this, we create multiple agreement sets by intersecting various combinations of measurement consistent sets. By doing so, we can eliminate measurement consistent sets that correspond to attacked sensors. We establish that at least one agreement set contains the true system state. Lastly, the measurement update is computed by intersecting the time update with the agreement set, which further eliminates the agreement sets affected by non-stealthy attack signals. We also propose a simple procedure to identify the set of compromised sensors, or its subset if the attacks are stealthy, at any given time.
The main strength of our algorithm is that it can handle attacks on different subsets of sensors at any given time. In addition, our approach can accommodate any cardinality of attacked sensors as long as they are less than the total number of sensors. The reason for this is that we do not rely on a simple majority vote among the sensors. Instead, we use a combination of intersections between their measurement consistent sets, which lead to the agreement sets. These combinations are compared with the model-based time update set in such a way that we can verify the validity of multiple agreement sets if only one sensor is uncompromised.
A shortcoming of our algorithm is that its complexity may exponentially increase in the worst-case scenario when intelligent and stealthy attacks are executed. Despite this, we argue that it is challenging for attackers to achieve this worst-case scenario because it requires a complete understanding of both the system and the algorithm, as well as ample computational resources to calculate the optimal attack within a single time sample. Nonetheless, to tackle this complexity issue, we suggest various strategies to reduce complexity and facilitate the implementation of our zonotope-based secure state estimation algorithm.
Furthermore, when less than half of the sensors have been attacked, we integrate a point-based resilient state observer into our algorithm to prune the candidate sets. However, this strategy's practicality depends on an accurate approximation of the guaranteed estimation error provided by the resilient point-based state observer. The modified algorithm provides asymptotic convergence guarantees with an explicit bound that depends on known process and measurement noise, independent of the attack signals.
In summary, our contributions in this paper include a novel secure set-based state estimation algorithm2 that guarantees the following:
Footnote 2: The code is available online in our GitHub repository [https://github.com/aalanwar/Secure-Set-Based-Estimation](https://github.com/aalanwar/Secure-Set-Based-Estimation).
1. state inclusion, i.e., the true state is always guaranteed to be inside the estimated set;
2. secure estimation, i.e., large attack signals are automatically discarded, while the impact of stealthy attacks on the estimation accuracy is negligible;
3. stability guarantees, i.e., the estimated set remains bounded irrespective of the unbounded attack if the process and measurement noise signals are bounded;
4. attack detection scheme that identifies the set of attacked sensors; and
5. methods to address the algorithm's complexity with minimal compromise on the estimation accuracy.
To this end, we assume that the system is redundantly observable, i.e., observable from every possible combination of sensors with cardinality less than or equal to the number of safe sensors. This assumption is more realistic than the one in our previous work [27], which required observability from every sensor. Also, the analysis under this assumption turns out to be significantly different and non-trivial compared to [27]. Furthermore, we provide stability guarantees and an attack detection algorithm in this paper, which is missing in [27]. Finally, we demonstrate the effectiveness of our proposed method through an illustrative example and a more practical example of set-based state estimation of a three-story building structure [36] during an earthquake and when an adversary has compromised some of the sensors.
### Outline
The rest of the paper is organized as follows. Section 2 defines the notations and summarizes the required preliminaries on set representations. The main assumptions and the problem are stated in Section 3. Section 4 presents the secure zonotopic state estimation algorithm, provides the inclusion guarantees, and discusses methods to reduce the algorithm's complexity. Section 5 provides the stability analysis of the estimation algorithm. Section 6 presents the attack detection scheme and discusses the cases where it works. We also discuss the worst-case complexity of the proposed algorithm under stealthy attacks and propose methods to handle it effectively. Finally, Section 7 demonstrates the effectiveness of the proposed algorithm through simulation examples, and Section 8 concludes the paper.
## 2 Notations and Preliminaries
### Notations
The set of real numbers and integers are denoted by \(\mathbb{R}\) and \(\mathbb{Z}\), respectively. We let \(\mathbb{Z}_{\geqslant i}\doteq\{i,i+1,i+2,\dots\}\) and \(\mathbb{Z}_{[i,k]}\doteq\{i,i+1,i+2,\dots,k\}\) for \(k\geqslant i\). The maximum norm of a vector \(x\in\mathbb{R}^{n}\) is denoted as
\(\max_{i\in\{1,\ldots,n\}}|x_{i}|\). Given a signal \(v:\mathbb{Z}_{\geq 0}\rightarrow\mathbb{R}^{n}\), we denote its restriction to the interval \([0,k]\) by \(v_{[0,k]}\), for some \(k\in\mathbb{Z}_{\geq 0}\). For a set \(\mathcal{S}\), \(|\mathcal{S}|\) denotes its cardinality. Given multiple sets \(\mathcal{S}_{1},\ldots,\mathcal{S}_{n}\), we denote their collection as \(\mathcal{S}=\{\mathcal{S}_{i}\}_{i\in\mathbb{Z}_{[1,n]}}\). The notation \(\binom{n}{k}=\frac{n!}{k!(n-k)!}\) denotes the binomial coefficient, i.e., the number of possible combinations when \(k\) out of \(n\) elements are chosen. The Cartesian product is denoted by \(\times\).
### Set Representations
Given a center \(c_{z}\in\mathbb{R}^{n}\) and a generator matrix \(G_{z}\in\mathbb{R}^{n\times\xi_{z}}\), a _zonotope_\(\mathcal{Z}\subset\mathbb{R}^{n}\) is a set
\[\mathcal{Z}\doteq\{c_{z}+G_{z}\beta_{z}:\beta_{z}\in[-1,1]^{\xi_{z}}\}\]
where \(\xi_{z}\) is the number of generators of \(\mathcal{Z}\). Since a zonotope can be completely characterized by its center and generator matrix, the notation \(\mathcal{Z}=\langle c_{z},G_{z}\rangle\) is used throughout the paper for brevity.
A matrix \(L\in\mathbb{R}^{n^{\prime}\times n}\) multiplied with a zonotope \(\mathcal{Z}\) yields a linearly transformed zonotope \(L\mathcal{Z}=\langle Lc_{z},LG_{z}\rangle\). Given two zonotopes \(\mathcal{Z}_{1}=\langle c_{z_{1}},G_{z_{1}}\rangle\) and \(\mathcal{Z}_{2}=\langle c_{z_{2}},G_{z_{2}}\rangle\), each being a subset of \(\mathbb{R}^{n}\), their Minkowski sum is given by
\[\mathcal{Z}_{1}\oplus\mathcal{Z}_{2}=\langle c_{z_{1}}+c_{z_{2}},[\,G_{z_{1} }\,\,G_{z_{2}}\,]\rangle.\]
Similarly, the Cartesian product of two zonotopes is defined and computed as
\[\mathcal{Z}_{1}\times\mathcal{Z}_{2} \doteq\left\{\left[\begin{array}{c}z_{1}\\ z_{2}\end{array}\right]:z_{1}\in\mathcal{Z}_{1},z_{2}\in\mathcal{Z}_{2}\right\}\] \[=\left\langle\left[\begin{array}{c}c_{z_{1}}\\ c_{z_{2}}\end{array}\right],\left[\begin{array}{cc}G_{z_{1}}&0\\ 0&G_{z_{2}}\end{array}\right]\right\rangle.\]
A _constrained zonotope_ is a set
\[\mathcal{Z}\doteq\{c_{z}+G_{z}\beta_{z}:\beta_{z}\in[-1,1]^{\xi_{z}},A\beta_ {z}=b\}\]
where \(A\in\mathbb{R}^{n\times\xi_{z}}\) and \(b\in\mathbb{R}^{n}\) with \(n\in\mathbb{Z}_{>0}\). In other words, a zonotope is an affine transformation of the hypercube and a constrained zonotope is an affine transformation of the corresponding linearly constrained hypercube.
The radius of a zonotope, or a constrained zonotope, is computed as follows
\[\mathrm{rad}(\mathcal{Z})=\min\Delta\text{ subject to }\mathcal{Z}\subseteq \mathcal{H}(c_{z},\Delta),\]
which is the radius \(\Delta\) of a minimal hypercube \(\mathcal{H}(c_{z},\Delta)\) centered at \(c_{z}\) that inscribes \(\mathcal{Z}\).
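The set operations above translate almost directly into code. The following minimal sketch is our own illustration (not the authors' implementation; dedicated toolboxes such as CORA provide these primitives with many more features): a zonotope is stored as a center and a generator matrix, and the linear map, Minkowski sum, Cartesian product, and radius follow the formulas above.

```python
import numpy as np

class Zonotope:
    """Z = < c, G > = { c + G*beta : beta in [-1, 1]^xi }."""

    def __init__(self, c, G):
        self.c = np.asarray(c, dtype=float).reshape(-1)
        self.G = np.asarray(G, dtype=float)

    def linear_map(self, L):                  # L Z = < L c, L G >
        return Zonotope(L @ self.c, L @ self.G)

    def minkowski_sum(self, other):           # Z1 (+) Z2 = < c1 + c2, [G1 G2] >
        return Zonotope(self.c + other.c, np.hstack((self.G, other.G)))

    def cartesian_product(self, other):       # stacked centers, block-diagonal generators
        c = np.concatenate((self.c, other.c))
        G = np.block([[self.G, np.zeros((self.G.shape[0], other.G.shape[1]))],
                      [np.zeros((other.G.shape[0], self.G.shape[1])), other.G]])
        return Zonotope(c, G)

    def radius(self):                         # radius of the smallest enclosing hypercube
        return np.max(np.sum(np.abs(self.G), axis=1))

# Example: the unit box centered at (1, 0), summed with a small noise zonotope
W = Zonotope([0, 0], 0.1 * np.eye(2))
X = Zonotope([1, 0], np.eye(2)).minkowski_sum(W)
print(X.c, X.radius())
```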
## 3 Problem Definition
Consider an LTI system with \(p\in\mathbb{Z}_{>0}\) sensors, in discrete-time, for \(k\in\mathbb{Z}_{\geq 0}\)
\[x(k+1) =Ax(k)+Bu(k)+w(k) \tag{1a}\] \[y_{i}(k) =C_{i}x(k)+v_{i}(k)+a_{i}(k),\quad i\in\mathbb{Z}_{[1,p]} \tag{1b}\]
where \(x(k)\in\mathcal{X}\subset\mathbb{R}^{n_{x}}\) is the state, \(u(k)\in\mathcal{U}\subset\mathbb{R}^{n_{u}}\) is a known bounded input, and \(y_{i}(k)\in\mathbb{R}^{m_{i}}\) is the measured output of the \(i\)-th sensor with \(i\in\mathbb{Z}_{[1,p]}\). The vector \(w(k)\in\mathcal{W}\) represents the process noise, which is bounded and assumed to be contained in the zonotope \(\mathcal{W}=\langle c_{w},G_{w}\rangle\), and the vector \(v_{i}(k)\in\mathcal{V}_{i}\) represents the measurement noise of the \(i\)-th sensor, which is also bounded and assumed to be contained in the zonotope \(\mathcal{V}_{i}=\langle c_{v_{i}},G_{v_{i}}\rangle\), for every \(i\in\mathbb{Z}_{[1,p]}\). Finally, \(a_{i}(k)\in\mathbb{R}^{m_{i}}\) represents the attack signal injected by the attacker to corrupt the measurement of the \(i\)-th sensor, and it can be arbitrary and unbounded.
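As a self-contained illustration of the measurement model (a sketch, not part of the authors' toolchain: the `attack` callable and the way noise is sampled from the zonotopes are our own choices, and `Zonotope` refers to the class sketched in the preliminaries), one can simulate (1) with bounded noise drawn from \(\mathcal{W}\) and \(\mathcal{V}_{i}\) and an arbitrary attack policy as follows.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(A, B, C_list, x0, u_seq, W, V_list, attack):
    """Simulate (1): x(k+1) = A x + B u + w,  y_i(k) = C_i x + v_i + a_i."""
    x, X, Y = np.asarray(x0, dtype=float), [], []
    for k, u in enumerate(u_seq):
        # each sensor output: true part + bounded noise sampled from V_i + attack signal
        ys = [C @ x + V.c + V.G @ rng.uniform(-1, 1, V.G.shape[1]) + attack(k, i)
              for i, (C, V) in enumerate(zip(C_list, V_list))]
        X.append(x.copy())
        Y.append(ys)
        w = W.c + W.G @ rng.uniform(-1, 1, W.G.shape[1])   # process noise in W
        x = A @ x + B @ u + w
    return X, Y
```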
**Assumption 1**.: _We assume the following:_
(i) _Upper bound on the number of attacked sensors: The attacker can attack up to \(q\) sensors, where \(q\leq p-1\) is known a priori. However, the exact number and the particular set of attacked sensors need not be known._

(ii) _Redundant observability: For every sensor subset \(\mathsf{J}\subset\mathbb{Z}_{[1,p]}\) with cardinality \(|\mathsf{J}|=c_{\mathsf{J}}\), for some \(c_{\mathsf{J}}\leq p-q\), the pair \((A,C_{\mathsf{J}})\) is observable, where \(C_{\mathsf{J}}\) is obtained by stacking \(C_{j}\), for all \(j\in\mathsf{J}\), in row blocks._

(iii) _Bounded state space: For every time instant \(k\in\mathbb{Z}_{\geq 0}\) and input \(u_{[0,k]}\in\mathcal{U}\), each element \(i\in\mathbb{Z}_{[1,n_{x}]}\) of the state vector satisfies \(|x_{i}(k)|\leq\chi_{i}\), where \(\chi_{i}>0\) is known. That is, the state always remains bounded inside an \(n_{x}\)-dimensional box \(\mathcal{X}=\langle 0,D_{x}\rangle\), where \(D_{x}=\mathrm{diag}(\chi_{1},\ldots,\chi_{n_{x}})\in\mathbb{R}^{n_{x}\times n_{x}}\) describes the dimensions of the box in any orthant._
Assumption 1(i) is fundamental in this paper because it ensures that, at every time \(k\in\mathbb{Z}_{\geq 0}\), there exists a set of _uncompromised_ (or _safe_) sensors
\[\mathsf{S}_{k}\subset\mathbb{Z}_{[1,p]}\text{ with }|\mathsf{S}_{k}|\geq p-q\]
such that \(a_{i}(k)=0_{m_{i}}\) for every \(i\in\mathsf{S}_{k}\). This, along with the redundant observability (Assumption 1(ii)), allows us to ensure that the true state can be theoretically reconstructed from the set of uncompromised sensors under the absence of noise. In addition, the assumption entails that the attacker, even though omniscient about the system dynamics and noise bounds, has limited resources at hand and cannot attack all the sensors. We remark that this assumption is not restrictive because it neither restricts the set of attacked sensors to be static
with respect to time nor requires that \(q\) is less than half the number of sensors \(p\) -- an assumption that is fundamental in the secure state estimation literature. On the contrary, at any time instant, our problem setup allows the attacker to inject arbitrary signals to any subset of sensors with cardinality less than or equal to \(q\), where \(q\) is only required to be strictly less than \(p\) -- meaning that at least one sensor needs to be safe at any time instant.
Assumption 1(ii) is required to enable decentralized set-based operations for secure state estimation without violating the robustness guarantees. Moreover, because Assumption 1(i) allows the attacker to attack up to \(q\leq p-1\) sensors, it is necessary that the observability is guaranteed from the remaining safe sensors.
Assumption 1(iii) demarcates the class of systems considered in this paper and assumes bounded input bounded state (BIBS) stability. While this may appear to be restrictive in comparison to other secure state estimation schemes, we argue that the class of BIBS stable systems is not restrictive as it is a property all control systems strive to achieve via state or output feedback.
Under the standing assumptions stated above, we study the following problem:
**Problem Statement.** Given the uncertain system (1) subject to Assumption 1, we aim to estimate a set \(\hat{\mathcal{X}}_{k}\subset\mathcal{X}\) guaranteeing the inclusion \(x(k)\in\hat{\mathcal{X}}_{k}\) for every \(k\in\mathbb{Z}_{\geqslant 1}\), where \(x(k)\) is the true state of (1). Moreover, irrespective of the attack signals \(a_{i}(k)\), \(\hat{\mathcal{X}}_{k}\) must satisfy the following stability condition
\[\mathrm{rad}(\hat{\mathcal{X}}_{k})\leq\beta(\mathrm{rad}(\mathcal{X}),k)+ \gamma(\max\{\mathrm{rad}(\mathcal{W}),\mathrm{rad}(\mathcal{V})\}) \tag{2}\]
where \(\beta\) is a class-\(\mathcal{KL}\) function and \(\gamma\) is a class-\(\mathcal{K}\) function3.
Footnote 3: A continuous function \(\gamma:\mathbb{R}_{\geqslant 0}\to\mathbb{R}_{\geqslant 0}\) is a class \(\mathcal{K}\) function, if it is strictly increasing and \(\gamma(0)=0\). A continuous function \(\beta:\mathbb{R}_{\geqslant 0}\times\mathbb{R}_{\geqslant 0}\to\mathbb{R}_{ \geqslant 0}\) is a class \(\mathcal{KL}\) function, if: (i) \(\beta(.,s)\) is a class \(\mathcal{K}\) function for any \(s\geq 0\); (ii) \(\beta(r,.)\) is non-increasing and (iii) \(\beta(r,s)\to 0\) as \(s\to\infty\) for any \(r\geq 0\).
This paper addresses the aforementioned secure set-based state estimation problem via modifications to conventional zonotopic filtering [14], which will be developed in the forthcoming sections.
## 4 Secure Set-based State Estimation
This section presents our main algorithm for secure set-based state estimation. The algorithm comprises four steps that are summarized in the main loop of Algorithm 1. In the following subsections, we describe each step of the algorithm in detail, provide an error bound, and discuss the complexity issue.
### Time update set
The time update in our proposed algorithm is a one-step computation of the reachability set and is given by
\[\hat{\mathcal{X}}_{k|k-1}=A\hat{\mathcal{X}}_{k-1}\oplus Bu(k-1)\oplus \mathcal{W}. \tag{3}\]
Here, \(\hat{\mathcal{X}}_{k-1}\) is obtained in the previous time step through a measurement update, also known as the correction step, which is described in Section 4.3. Given \(x(k-1)\in\hat{\mathcal{X}}_{k-1}\), the time update set \(\hat{\mathcal{X}}_{k|k-1}\) at time \(k\in\mathbb{Z}_{\geqslant 0}\) is the set of states to which the system can evolve subject to the model \((A,B)\), the input \(u(k-1)\), and the noise zonotope \(\mathcal{W}\). Although the attacker cannot directly influence the time update (3), it can do so indirectly through the previous measurement update \(\hat{\mathcal{X}}_{k-1}\). Thus, the resilience against sensor attacks is achieved by carefully devising the measurement update.
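In terms of the zonotope primitives sketched earlier in the set-representation preliminaries, the time update (3) is a one-line composition of a linear map and Minkowski sums, with the known input entering as a degenerate (generator-free) zonotope. The function below is an illustrative sketch building on that `Zonotope` class, not the authors' Algorithm 1.

```python
import numpy as np
# assumes the Zonotope class from the set-representation sketch above

def time_update(X_prev, A, B, u, W):
    """One-step reachable set: X_{k|k-1} = A X_{k-1} (+) B u(k-1) (+) W."""
    Bu = Zonotope(B @ u, np.zeros((A.shape[0], 0)))   # known input as a point zonotope
    return X_prev.linear_map(A).minkowski_sum(Bu).minkowski_sum(W)
```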
### Measurement-consistent set
Before presenting the measurement update, it is crucial to estimate a region in the state space which is consistent with the sensor measurements. In other words, given the measurements \(y_{1}(k),\ldots,y_{p}(k)\), we estimate a set of points in \(\mathbb{R}^{n_{x}}\), called measurement-consistent set, that could have produced \(y_{1}(k),\ldots,y_{p}(k)\) up to the measurement noise bounds \(\mathcal{V}_{1},\ldots,\mathcal{V}_{p}\).
Given that the system (1) is observable via each subset of sensors with cardinality not more than \(p-q\) according to Assumption 1(ii), we find the region of the state space consistent with the measurement \(y_{\mathsf{J}}(k)\) provided by the sensor subset \(\mathsf{J}\subset\mathbb{Z}_{[1,p]}\), where \(y_{\mathsf{J}}(k)\) is obtained by stacking \(y_{i}(k)\), for all \(i\in\mathsf{J}\). For instance, if \(\mathsf{J}=\{i_{1},\ldots,i_{n_{\mathsf{J}}}\}\), then

\[\underbrace{\left[\begin{array}{c}y_{i_{1}}\\ \vdots\\ y_{i_{n_{\mathsf{J}}}}\end{array}\right]}_{y_{\mathsf{J}}(k)}=\underbrace{\left[\begin{array}{c}C_{i_{1}}\\ \vdots\\ C_{i_{n_{\mathsf{J}}}}\end{array}\right]}_{C_{\mathsf{J}}}x+\underbrace{\left[\begin{array}{c}v_{i_{1}}\\ \vdots\\ v_{i_{n_{\mathsf{J}}}}\end{array}\right]}_{v_{\mathsf{J}}}+\left[\begin{array}{c}a_{i_{1}}\\ \vdots\\ a_{i_{n_{\mathsf{J}}}}\end{array}\right].\]

Here, the stacked vector of measurement noise is

\[v_{\mathsf{J}}\in\mathcal{V}_{\mathsf{J}}=\mathop{\bigtimes}_{i\in\mathsf{J}}\mathcal{V}_{i}\doteq\langle c_{v_{\mathsf{J}}},G_{v_{\mathsf{J}}}\rangle\]
where \(\times\) denotes the Cartesian product of zonotopes.
Given the output equation (1b), output matrix \(C_{i}\), and the measurement noise zonotope \(\mathcal{V}_{i}\), a method to find a
subset of the state space \(\mathcal{X}\) consistent with each sensor \(i\)'s measurement \(y_{i}(k)\) is described in [2, 3]. In the following, we present its straightforward extension.
**Lemma 2**.: _Let Assumption 1(ii) and (iii) hold. Then, for every \(\mathsf{J}\subset\mathbb{Z}_{[1,p]}\) with \(|\mathsf{J}|=c_{\mathsf{J}}\), for some \(c_{\mathsf{J}}\leqslant p-q\), the region \(\mathcal{Y}_{k}^{\mathsf{J}}\subset\mathbb{R}^{n_{x}}\) consistent with the measurement \(y_{\mathsf{J}}(k)=C_{\mathsf{J}}x(k)+v_{\mathsf{J}}(k)+a_{\mathsf{J}}(k)\), is given by the measurement-consistent set_
\[\mathcal{Y}_{k}^{\mathsf{J}}=\langle c_{y_{\mathsf{J}}}(k),G_{y_{\mathsf{J}}}(k)\rangle,\;\text{where}\tag{4}\]
\[\left\{\begin{array}{rcl}c_{y_{\mathsf{J}}}(k)&=&C_{\mathsf{J}}^{\dagger}\big{(}y_{\mathsf{J}}(k)-c_{v_{\mathsf{J}}}\big{)}\\ G_{y_{\mathsf{J}}}(k)&=&\left[\,C_{\mathsf{J}}^{\dagger}G_{v_{\mathsf{J}}}\;\;(I_{n_{x}}-C_{\mathsf{J}}^{\dagger}C_{\mathsf{J}})D_{x}\,\right]\end{array}\right.\]
_where \(D_{x}\) is the generator matrix of the state space \(\mathcal{X}\) given by Assumption 1(iii)._
_Moreover, if \(\mathsf{J}\subseteq\mathsf{S}_{k}\), where \(\mathsf{S}_{k}\) is the index set of uncompromised sensors at time \(k\), then \(x(k)\in\mathcal{Y}_{k}^{\mathsf{J}}\) for every \(k\in\mathbb{Z}_{\geqslant 0}\)._
[Proof.] Consider a subset \(\mathsf{J}\subseteq\mathsf{S}_{k}\) with \(|\mathsf{J}|\leqslant p-q\). Then, \(a_{i}(k)=0\) for every \(i\in\mathsf{J}\). Since the pair \((A,C_{\mathsf{J}})\) is observable (Assumption 1(ii)), we have that
\[y_{\mathsf{J}}(k)=C_{\mathsf{J}}x(k)+v_{\mathsf{J}}(k)\]
has a non-trivial solution
\[x(k)=C_{\mathsf{J}}^{\dagger}(y_{\mathsf{J}}(k)-v_{\mathsf{J}}(k))+(I_{n_{x}} -C_{\mathsf{J}}^{\dagger}C_{\mathsf{J}})x(k).\]
Secondly, note that \(\mathrm{im}(I_{n_{x}}-C_{\mathsf{J}}^{\dagger}C_{\mathsf{J}})=\ker(C_{ \mathsf{J}})\). Finally, \(\mathcal{X}=\langle 0,D_{x}\rangle\) by Assumption 1(iii). Thus, by considering sets on the right hand side and using Minkowski sum operation, we obtain
\[x(k)\in C_{\mathsf{J}}^{\dagger}(y_{\mathsf{J}}(k)-\mathcal{V}_{\mathsf{J}}) \oplus(I_{n_{x}}-C_{\mathsf{J}}^{\dagger}C_{\mathsf{J}})\langle 0,D_{x} \rangle=\mathcal{Y}_{k}^{\mathsf{J}}\]
which completes the proof.
By Assumption 1(iii), the forward invariant reachable set of system (1) (i.e., the region of the state space in which the trajectories of system (1) reside for all \(k\in\mathbb{Z}_{\geqslant 0}\)) is given by the zonotope \(\mathcal{X}=\langle 0,D_{x}\rangle\). Subject to this assumption, (4) in the above lemma computes a subset of \(\mathcal{X}\) that is consistent with the measurements provided by the sensor subset \(\mathsf{J}\). Thus, if the subset \(\mathsf{J}\) of sensors is attack-free at time \(k\), Lemma 2 guarantees that the true state \(x(k)\) is inside the set \(\mathcal{Y}_{k}^{\mathsf{J}}\). However, the guarantee does not hold when the sensor subset \(\mathsf{J}\) is under attack at time \(k\). By taking the intersection of the consistent sets \(\mathcal{Y}_{k}^{\mathsf{J}}\), we discard all the sensors whose measurements are corrupted by large-valued attack signals. However, sensors that are corrupted by stealthy attack signals, i.e., signals within the noise bounds, remain undetected. Nonetheless, we can ensure that there is at least one subset of sensors with cardinality \(p-q\) whose measurement-consistent set contains the true state \(x(k)\).
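A direct transcription of (4) using the Moore-Penrose pseudoinverse is given below; it assumes the `Zonotope` class of the earlier sketch, with \(C_{\mathsf{J}}\), \(y_{\mathsf{J}}\) and the stacked noise zonotope \(\mathcal{V}_{\mathsf{J}}\) pre-assembled for the chosen sensor subset, and \(D_{x}\) taken from Assumption 1(iii). It is an illustrative sketch rather than the authors' code.

```python
import numpy as np
# assumes the Zonotope class from the set-representation sketch above

def measurement_consistent_set(y_J, C_J, V_J, D_x):
    """Y_k^J = < C_J^+(y_J - c_vJ), [ C_J^+ G_vJ, (I - C_J^+ C_J) D_x ] >  (Eq. (4))."""
    n_x = D_x.shape[0]
    C_pinv = np.linalg.pinv(C_J)
    center = C_pinv @ (y_J - V_J.c)
    G = np.hstack((C_pinv @ V_J.G, (np.eye(n_x) - C_pinv @ C_J) @ D_x))
    return Zonotope(center, G)
```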
**Theorem 3**.: _Let Assumption 1 hold. Then, there exist distinct sets \(\mathsf{J}_{1},\mathsf{J}_{2},\ldots,\mathsf{J}_{\eta}\subset\mathbb{Z}_{[1,p]}\), each with cardinality \(|\mathsf{J}_{1}|=\cdots=|\mathsf{J}_{\eta}|=c_{\mathsf{J}}\), for some \(c_{\mathsf{J}}\leqslant p-q\), such that the agreement set_
\[\mathcal{I}_{k}=\mathcal{Y}_{k}^{\mathsf{J}_{1}}\,\cap\,\cdots\cap\mathcal{Y} _{k}^{\mathsf{J}_{\eta}} \tag{5}\]
_is non-empty and contains the true state \(x(k)\), where the zonotopes \(\mathcal{Y}_{k}^{\mathsf{J}_{\alpha}}\) are given in (4) and_
\[\eta=\begin{pmatrix}p-q\\ c_{\mathsf{J}}\end{pmatrix}\geqslant 1. \tag{6}\]
[Proof.] Let \(k\in\mathbb{Z}_{\geqslant 0}\) and \(c_{\mathsf{J}}\leqslant p-q\). Notice that the measurement-consistent sets \(\mathcal{Y}_{k}^{\mathsf{J}_{\alpha}}\) for any \(\mathsf{J}_{\alpha}\subset\mathbb{Z}_{[1,p]}\) with cardinality \(|\mathsf{J}_{\alpha}|=c_{\mathsf{J}}>0\) and \(\alpha\in\mathbb{Z}_{[1,\eta]}\) are non-empty according to Lemma 2, because the generator matrices \(G_{y_{\mathsf{J}_{\alpha}}}(k)\) are non-zero. Moreover, the quantity \(\eta\) determines the total number of sensor subsets when \(c_{\mathsf{J}}\) sensors out of the \(p-q\) possibly safe sensors are chosen. When \(c_{\mathsf{J}}=p-q\), we have \(\eta=1\), and there exists at least one subset \(\mathsf{J}_{1}\subset\mathbb{Z}_{[1,p]}\) with cardinality \(p-q\) that contains only uncompromised sensors. That is, \(\mathsf{J}_{1}\subseteq\mathsf{S}_{k}\) because only \(q\) sensors can be attacked (Assumption 1(i)) and, by considering all combinations of sensor subsets of cardinality \(p-q\), there is at least one subset that does not contain any attacked sensor at time \(k\). Therefore, in this case, \(\mathcal{I}_{k}=\mathcal{Y}_{k}^{\mathsf{J}_{1}}\neq\emptyset\) and \(x(k)\in\mathcal{I}_{k}\) by Lemma 2. On the other hand, when \(c_{\mathsf{J}}<p-q\), we have \(\eta>1\), and there exist distinct \(\mathsf{J}_{1},\ldots,\mathsf{J}_{\eta}\subset\mathbb{Z}_{[1,p]}\), each with cardinality \(c_{\mathsf{J}}\) and containing only uncompromised sensors. That is, \(\mathsf{J}_{1},\ldots,\mathsf{J}_{\eta}\subset\mathsf{S}_{k}\) because we can choose at least \(\eta\) subsets of cardinality \(c_{\mathsf{J}}\) from the safe, uncompromised sensors \(\mathsf{S}_{k}\) at time \(k\). By Assumption 1(ii) and Lemma 2, we have that \(x(k)\in\mathcal{Y}_{k}^{\mathsf{J}_{1}}\), \(x(k)\in\mathcal{Y}_{k}^{\mathsf{J}_{2}}\), \(\ldots\), and \(x(k)\in\mathcal{Y}_{k}^{\mathsf{J}_{\eta}}\). Therefore, the intersection (5) yields a non-empty set containing \(x(k)\).
**Remark 4**.: _If the system (1) is redundantly observable for \(c_{\mathsf{J}}=p-q\), then \(\eta=1\). In this case, the agreement set \(\mathcal{I}_{k}\) in (5) will be equal to the measurement-consistent set \(\mathcal{Y}_{k}^{\mathsf{J}}\) for some \(\mathsf{J}\) with \(|\mathsf{J}|=p-q\). To have an agreement protocol (5) comparing multiple measurement-consistent sets, i.e., \(\eta\geqslant 2\), which refines the agreement set \(\mathcal{I}_{k}\) by a set-based voting mechanism, it is necessary that the redundant observability holds for \(c_{\mathsf{J}}<p-q\) and \(q<p-1\). For instance, if \(c_{\mathsf{J}}=p-q-1\) and \(q=p-2\), then \(\eta=p-q=2\). In this case, the agreement protocol (5) consists of a pairwise intersection between two measurement-consistent sets._
We have shown that the measurement-consistent sets \(\mathcal{Y}_{k}^{\mathsf{J}}\) formed out of attack-free sensors contain the true state
and, hence, their intersection contains the true state. In the proof of Theorem 3, we saw that by discounting the \(q\) possibly attacked sensors, i.e., by choosing \(c_{\mathsf{J}}\leqslant p-q\), we guarantee the existence of at least one index set \(\mathsf{J}\) with cardinality \(|\mathsf{J}|=c_{\mathsf{J}}\) which is attack-free. This allows us to ensure that the agreement set \(\mathcal{I}_{k}\) is non-empty and, moreover, contains the true state. However, in the presence of stealthy attacks, we cannot exclude the measurement-consistent sets formed by the stealthily attacked sensors, as these sets may yield non-empty intersections, of which only some may contain the true state. Therefore, in the next step, we leverage model-based information to estimate the state space region that contains the true state.
### Measurement update step
We exploit model-based information via computing the time update \(\hat{\mathcal{X}}_{k|k-1}\) for trajectories starting from the estimated set \(\hat{\mathcal{X}}_{k-1}\) from the previous time step, using (3). Measurement update \(\hat{\mathcal{X}}_{k}\) corrects the conservative estimate of the model-based time update by incorporating new information from the sensor measurements (4).
In the presence of sensor attacks, we saw in the previous section that the formation of the agreement set \(\mathcal{I}_{k}\) is paramount. In particular, to show the state inclusion \(x(k)\in\mathcal{I}_{k}\), it is required that a certain number \(\eta\) of measurement-consistent sets are formed out of attack-free measurements. However, as stated in Assumption 1(i), we only know the maximal number of attacked sensors, but which sensors have been attacked is unknown. Hence, we need to search over all possible intersections of measurement-consistent sets.
Let
\[n_{c}=\binom{n_{\mathrm{J}}}{\eta},\quad n_{\mathrm{J}}=\binom{p}{c_{\mathrm{J }}} \tag{7}\]
where we recall that \(c_{\mathrm{J}}\) is the cardinality of the index sets \(\mathrm{J}_{\mathrm{i}}\). Obtain all the subsets of sensors \(\mathrm{J}_{1},\ldots,\mathrm{J}_{n_{\mathrm{J}}}\subset\mathbb{Z}_{[1,p]}\) with cardinality \(c_{\mathrm{J}}\). Also, obtain all possible \(n_{c}\) combinations of indices \(1,\ldots,n_{\mathrm{J}}\) with size \(\eta\), where \(\eta\) is given by (6). Let \(\mathsf{P}_{h}\) represent each of these combinations for \(h=1,\ldots,n_{c}\), respectively, where \(|\mathsf{P}_{h}|=\eta\). Then, for \(h\in\mathbb{Z}_{[1,n_{c}]}\) and \(k\in\mathbb{Z}_{\geqslant 0}\), the agreement sets are obtained as
\[\mathcal{I}_{k}^{h}=\bigcap_{\alpha\in\mathsf{P}_{h}}\mathcal{Y}_{k}^{\mathsf{J}_{\alpha}} \tag{8}\]
where \(\mathrm{J}_{\alpha}\subset\mathbb{Z}_{[1,p]}\) with cardinality \(c_{\mathrm{J}}\leqslant p-q\).
Here, notice that \(n_{\mathrm{J}}\) is the total number of sensor subsets \(\mathrm{J}\subset\mathbb{Z}_{[1,p]}\) with \(|\mathrm{J}|=c_{\mathrm{J}}\) that can be obtained. Further, by Theorem 3, there exists at least one \(h\in\mathbb{Z}_{[1,n_{c}]}\) such that the true state \(x(k)\) is included in the agreement set \(\mathcal{I}_{k}^{h}\); however, we do not know which \(h\) due to the unknown set of compromised sensors. Hence, as proposed in (8), it is necessary to check intersections between all the possible combinations of \(\eta\) measurement-consistent sets, which totals to \(n_{c}\) given in (7), i.e., the number of (unordered) ways to choose \(\eta\) sets out of \(n_{\mathrm{J}}\) sets.
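As a quick numerical check of this counting, the following Python snippet evaluates (6) and (7) with the built-in `math.comb`; the parameter values \(p=4\), \(q=2\), \(c_{\mathsf{J}}=p-q\) correspond to the illustrative example of Section 7.1, and the snippet is only an illustration, not part of the estimation algorithm.

```python
from math import comb

p, q = 4, 2              # total sensors and maximal number of attacked sensors
c_J = p - q              # cardinality of each sensor subset J

eta = comb(p - q, c_J)   # number of sets intersected per agreement set, eq. (6)
n_J = comb(p, c_J)       # number of sensor subsets of size c_J, eq. (7)
n_c = comb(n_J, eta)     # number of agreement sets to form, eq. (7)

print(eta, n_J, n_c)     # 1 6 6, matching the illustrative example of Section 7.1
```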
Then, the measurement update \(\hat{\mathcal{X}}_{k}\) is obtained as
\[\hat{\mathcal{X}}_{k}=\hat{\mathcal{X}}_{k|k-1}\cap\{\mathcal{I}_{k}^{h}\}_{h \in\mathbb{Z}_{[1,n_{c}]}} \tag{9}\]
where \(\mathcal{I}_{k}^{h}\) is defined in (8) and the time update set \(\hat{\mathcal{X}}_{k|k-1}\) is given in (3) with the following initialization
\[\hat{\mathcal{X}}_{1|0}=A\mathcal{X}\oplus Bu(0)\oplus\mathcal{W},\quad \mathcal{X}=\langle 0,D_{x}\rangle.\]
Note that the measurement update set \(\hat{\mathcal{X}}_{k}\) is a collection of multiple constrained zonotopes.
**Theorem 5**.: _Let Assumption 1 hold. Then, for every \(k\in\mathbb{Z}_{\geqslant 1}\) and \(x(0)\in\mathcal{X}\), the inclusion \(x(k)\in\hat{\mathcal{X}}_{k}\) is guaranteed, where the measurement update \(\hat{\mathcal{X}}_{k}\) is computed in (9)._
[Proof.] We prove this result by induction. By Assumption 1(iii), we have \(x(0)\in\mathcal{X}=\langle 0,D_{x}\rangle\). Therefore,
\[x(1)\in\hat{\mathcal{X}}_{1|0}=A\mathcal{X}\oplus Bu(0)\oplus\mathcal{W}\]
because \(w(0)\in\mathcal{W}\). Moreover, by Theorem 3, there exists \(h\in\mathbb{Z}_{[1,n_{c}]}\) such that \(x(1)\in\mathcal{I}_{1}^{h}\). Therefore, from (9), we have
\[x(1)\in\hat{\mathcal{X}}_{1}=\hat{\mathcal{X}}_{1|0}\cap\{\mathcal{I}_{1}^{h} \}_{h\in\mathbb{Z}_{[1,n_{c}]}}.\]
This, in turn, implies that
\[x(2)\in\hat{\mathcal{X}}_{2|1}=A\hat{\mathcal{X}}_{1}\oplus Bu(1)\oplus \mathcal{W}\]
which is the one-step reachability or time update set (3). Now, assume \(x(k^{\prime})\in\hat{\mathcal{X}}_{k^{\prime}|k^{\prime}-1}\) for some \(k^{\prime}\in\mathbb{Z}_{\geqslant 2}\). Then, by Theorem 3, there exists \(h\in\mathbb{Z}_{[1,n_{c}]}\) such that \(x(k^{\prime})\in\mathcal{I}_{k^{\prime}}^{h}\), implying
\[x(k^{\prime})\in\hat{\mathcal{X}}_{k^{\prime}}=\hat{\mathcal{X}}_{k^{\prime}|k^ {\prime}-1}\cap\{\mathcal{I}_{k^{\prime}}^{h}\}_{h\in\mathbb{Z}_{[1,n_{c}]}}.\]
Therefore, we have the inclusion
\[x(k^{\prime}+1)\in\hat{\mathcal{X}}_{k^{\prime}+1|k^{\prime}}=A\hat{\mathcal{ X}}_{k^{\prime}}\oplus Bu(k^{\prime})\oplus\mathcal{W}.\]
Thus, the proof is completed because we showed that, for every \(k\in\mathbb{Z}_{\geqslant 1}\), \(x(k-1)\in\hat{\mathcal{X}}_{k-1|k-2}\) implies \(x(k-1)\in\hat{\mathcal{X}}_{k-1}\), which, in turn, implies \(x(k)\in\hat{\mathcal{X}}_{k|k-1}\). Hence, \(x(k)\in\hat{\mathcal{X}}_{k}\).
Although the above theorem guarantees the inclusion of the true state, it is important to remark that the number of sets in the measurement update (9) may increase with respect to time under stealthy attacks. We address this
issue in Section 4.5 by proposing several techniques that facilitate the computational efficiency of the algorithm.
It is worth mentioning that the proposed algorithm is resilient because the attacker cannot deteriorate the estimation accuracy over time. If an index set \(\mathsf{J}\) is such that the reading of sensor \(i\in\mathsf{J}\) is injected with a large-valued attack signal, it will be automatically discarded because the corresponding measurement-consistent set \(\mathcal{Y}_{k}^{\mathsf{J}}\) will not intersect with either its counterparts or the time update set. Therefore, in order to yield a non-empty agreement set \(\mathcal{I}_{k}^{h}\) involving the attacked sensors, the attacker can only inject small attack signals whose magnitude is within the measurement noise bounds \(\mathcal{V}_{i}\), which, therefore, maintains the estimation accuracy.
The proposed algorithm is summarized in Algorithm 1.
```
0: System matrices \(A\), \(B\), and \(C_{i}\), and noise zonotopes \(\mathcal{W}\) and \(\mathcal{V}_{i}\), for every \(i\in\mathbb{Z}_{[1,p]}\); time sequence of sensor measurements \(\{y^{1}(k),y^{2}(k),\dots,y^{p}(k)\}_{k\in\mathbb{Z}_{\geqslant 0}}\).
1: Initialize: \(\hat{\mathcal{X}}_{0}\subseteq\mathcal{X}\)
2:for\(k=1,2,3,\dots\)do
3: Obtain the time update \(\hat{\mathcal{X}}_{k|k-1}\) using (3).
4: Obtain the measurement-consistent sets \(\mathcal{Y}_{k}^{\mathsf{J}_{\alpha}}\), for \(\alpha=1,\dots,n_{\mathsf{J}}\), where \(\mathsf{J}_{\alpha}\subset\mathbb{Z}_{[1,p]}\) with \(|\mathsf{J}_{\alpha}|=c_{\mathsf{J}}\) and \(\mathsf{J}_{\alpha}\neq\mathsf{J}_{\alpha^{\prime}}\) for \(\alpha\neq\alpha^{\prime}\), using (4).
5: Obtain the agreement sets \(\mathcal{I}_{k}^{h}\), for \(h=1,\dots,n_{c}\), using (8).
6: Obtain the measurement update \(\hat{\mathcal{X}}_{k}\) using (9).
7:endfor
```
**Algorithm 1** Secure set-based state estimation
We illustrate Algorithm 1 on a simple example of \(p=3\) sensors, where \(q=1\) sensor has been corrupted. For the purposes of this illustration in Figure 1, we choose the first sensor \(y_{1}\) to be corrupted. Suppose that the system is observable via every sensor. Therefore, the measurement-consistent sets \(\mathcal{Y}_{k}^{\mathsf{J}_{1}}\), \(\mathcal{Y}_{k}^{\mathsf{J}_{2}}\), and \(\mathcal{Y}_{k}^{\mathsf{J}_{3}}\) can be obtained from sensors \(y_{\mathsf{J}_{1}}\), \(y_{\mathsf{J}_{2}}\), \(y_{\mathsf{J}_{3}}\), respectively, with \(\mathsf{J}_{1}=\{1\}\), \(\mathsf{J}_{2}=\{2\}\) and \(\mathsf{J}_{3}=\{3\}\). Based on the illustration in Figure 1, the agreement sets are \(\mathcal{I}_{k}^{1}=\mathcal{Y}_{k}^{\mathsf{J}_{1}}\cap\mathcal{Y}_{k}^{\mathsf{J}_{2}}=\emptyset\), while \(\mathcal{I}_{k}^{2}=\mathcal{Y}_{k}^{\mathsf{J}_{1}}\cap\mathcal{Y}_{k}^{\mathsf{J}_{3}}\) and \(\mathcal{I}_{k}^{3}=\mathcal{Y}_{k}^{\mathsf{J}_{2}}\cap\mathcal{Y}_{k}^{\mathsf{J}_{3}}\) are depicted as the yellow untextured region in Figure 1. Finally, the estimated set \(\hat{\mathcal{X}}_{k}\) is the union of the textured green regions in Figure 1.
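A minimal Python sketch of a few iterations of Algorithm 1 is given below. To keep the set operations elementary, (constrained) zonotopes are replaced by axis-aligned interval boxes and each sensor is assumed to read a single state coordinate; the dynamics matrix and the process-noise bound follow the illustrative example of Section 7.1, while all other values (sensor assignment, noise bounds, initial state, attack) are hypothetical. A faithful implementation would instead use the constrained-zonotope operations of a set-based toolbox such as CORA, which is used for the simulations in Section 7.

```python
# Toy run of Algorithm 1: interval boxes (lo, hi) stand in for zonotopes,
# every sensor reads one state coordinate, and sensor 1 is attacked.
import numpy as np
from itertools import combinations
from math import comb

def box_intersect(b1, b2):
    lo, hi = np.maximum(b1[0], b2[0]), np.minimum(b1[1], b2[1])
    return (lo, hi) if np.all(lo <= hi) else None      # None encodes the empty set

def time_update(box, A, w_rad):
    # interval over-approximation of A*box (+) W, cf. (3) with u = 0
    c, r = (box[0] + box[1]) / 2, (box[1] - box[0]) / 2
    c, r = A @ c, np.abs(A) @ r + w_rad
    return (c - r, c + r)

def consistent_box(J, y, v_rad, x_rad, coord):
    # measurement-consistent set (4) for subset J; sensor i reads coordinate coord[i]
    lo, hi = -x_rad.copy(), x_rad.copy()
    for i in J:
        lo[coord[i]] = max(lo[coord[i]], y[i] - v_rad[i])
        hi[coord[i]] = min(hi[coord[i]], y[i] + v_rad[i])
    return (lo, hi)

A = np.array([[1.0, 0.0], [1.0, 1.0]])                  # as in Section 7.1
x_rad, w_rad = np.array([5.0, 5.0]), np.array([0.02, 0.02])
v_rad, coord = np.array([0.1, 0.1, 0.1]), [0, 1, 0]     # sensors read x1, x2, x1
p, q = 3, 1
c_J = 1                                                 # subset cardinality, as in Figure 1
eta = comb(p - q, c_J)                                  # = 2, eq. (6)
subsets = list(combinations(range(p), c_J))

x = np.array([1.0, -0.5])                               # unknown true state
X_hat = [(-x_rad, x_rad)]                               # step 1: initial estimate

for k in range(3):
    x = A @ x + np.random.uniform(-w_rad, w_rad)        # true dynamics (1a), u = 0
    y = x[np.array(coord)] + np.random.uniform(-v_rad, v_rad)
    y[0] += 3.0                                         # sensor 1 is attacked
    time_upd = [time_update(b, A, w_rad) for b in X_hat]                 # step 3
    Y = {J: consistent_box(J, y, v_rad, x_rad, coord) for J in subsets}  # step 4
    agreement = []                                                        # step 5
    for P_h in combinations(subsets, eta):
        I_h = (-x_rad, x_rad)
        for J in P_h:
            I_h = box_intersect(I_h, Y[J]) if I_h is not None else None
        if I_h is not None:
            agreement.append(I_h)
    X_hat = [b for T in time_upd for I_h in agreement                    # step 6, eq. (9)
             if (b := box_intersect(T, I_h)) is not None]
    # Theorem 5: the true state stays inside at least one member of the collection
    assert any(np.all(b[0] <= x) and np.all(x <= b[1]) for b in X_hat)

print(len(X_hat), "boxes in the final collection")
```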
### Bound on the estimation error
Since Theorem 5 guarantees that the true state \(x(k)\) of system (1) lies in at least one of the zonotopes in the measurement update \(\hat{\mathcal{X}}_{k}\) at each \(k\in\mathbb{Z}_{\geqslant 0}\), it must also lie in a zonotope that overbounds \(\hat{\mathcal{X}}_{k}\). That is, let the collection of the constrained zonotopes in \(\hat{\mathcal{X}}_{k}\) be overbounded by a zonotope
\[\hat{\mathcal{Z}}_{k}=\langle\hat{c}_{z}(k),\hat{G}_{z}(k)\rangle \tag{10}\]
which is obtained by solving
\[\min\text{rad}(\hat{\mathcal{Z}}_{k})\text{ subject to }\hat{\mathcal{X}}_{k} \subseteq\hat{\mathcal{Z}}_{k}. \tag{11}\]
Then, the estimation error can be bounded by
\[\|\hat{c}_{z}(k)-x(k)\|\leq\text{rad}(\hat{\mathcal{Z}}_{k}).\]
The stability analysis of this error bound is provided in Section 5 by using a point-based secure estimator. Nonetheless, we remark that this error bound is significantly smaller in practice than the error bounds obtained by point-based secure estimators [16, 28].
Figure 1: Illustration of Algorithm 1 with \(p=3\) and \(q=1\), where sensor \(1\) has been attacked.
### Methods to reduce the complexity
The major computational challenge of Algorithm 1 that the attacker can exploit lies in the measurement update step (9) for computing \(\hat{\mathcal{X}}_{k}\), which is a collection of zonotopes whose cardinality (i.e., the number of zonotopes) could grow over time. To reduce computational complexity resulting from the increasing cardinality of the measurement update, we propose several pruning methods below.
The first obvious step is to remove any empty sets or subsets of other sets in the measurement update intersection (9). It is also possible to obtain a single overbounding zonotope of \(\hat{\mathcal{X}}_{k}\) as in (11), and use it in the next time update step [6]. However, a better trade-off between accuracy and complexity is to not overbound the whole collection but only the intersecting zonotopes in the collection \(\hat{\mathcal{X}}_{k}\). This may not make the cardinality of \(\hat{\mathcal{X}}_{k}\) equal to one, but it reduces it significantly by allowing minimal loss of accuracy.
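A schematic sketch of the first two pruning rules is given below, again with interval boxes standing in for constrained zonotopes; the greedy grouping strategy and the sample boxes are illustrative choices only, not the paper's implementation.

```python
# Sketch of the pruning rules: drop boxes contained in other boxes, then
# over-bound mutually intersecting boxes by a single covering box.
import numpy as np

def intersects(b1, b2):
    return bool(np.all(np.maximum(b1[0], b2[0]) <= np.minimum(b1[1], b2[1])))

def is_subset(b1, b2):                       # b1 contained in b2
    return bool(np.all(b2[0] <= b1[0]) and np.all(b1[1] <= b2[1]))

def hull(b1, b2):                            # smallest box covering b1 and b2
    return (np.minimum(b1[0], b2[0]), np.maximum(b1[1], b2[1]))

def prune(collection):
    # rule 1: drop members contained in another member (earlier duplicates win)
    kept = [b for i, b in enumerate(collection)
            if not any(is_subset(b, c) and (not is_subset(c, b) or j < i)
                       for j, c in enumerate(collection) if j != i)]
    # rule 2: over-bound mutually intersecting members by their hull (greedy)
    out = []
    for b in kept:
        for i, c in enumerate(out):
            if intersects(b, c):
                out[i] = hull(b, c)
                break
        else:
            out.append(b)
    return out

boxes = [(np.array([0.0, 0.0]), np.array([1.0, 1.0])),
         (np.array([0.5, 0.5]), np.array([1.5, 1.2])),   # overlaps the first
         (np.array([0.2, 0.2]), np.array([0.8, 0.8])),   # inside the first
         (np.array([3.0, 3.0]), np.array([4.0, 4.0]))]   # isolated
print(len(prune(boxes)))                                  # 2
```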
Another method is to employ zonotope reduction methods [38] to reduce the number of generators in the zonotopes, which are often increased when performing the Minkowski sum operations. However, this technique may result in a larger radius of \(\hat{\mathcal{Z}}_{k}\) in (11) and can also yield conservative estimates.
Finally, one could employ a point-based resilient estimator, if it exists, in parallel with the set-based secure estimator. In this case, we may consider only those candidates in the measurement update collection that lie within the intersection of \(\hat{\mathcal{X}}_{k}\) and an error margin generated by a point-based resilient state estimator. However, the existing point-based resilient state estimators [12, 16, 18, 28, 34] require that the total number of sensors be strictly greater than twice the number of compromised sensors \(q<p/2\) and the members of the attacked sensors also remain unchanged over time, which are tighter requirements than our standing Assumption 1(i). Moreover, the error margins obtained by point-based estimators are usually very conservative. Nonetheless, such a technique is useful in the stability analysis of the proposed secure set-based state estimator, as discussed in the next section.
## 5 Stability Analysis
To assist in the stability analysis of the secure zonotopic state estimation algorithm, in the sense that the estimated set \(\hat{\mathcal{X}}_{k}\) containing the true state \(x(k)\) is bounded with respect to time \(k\in\mathbb{Z}_{\geqslant 0}\) given that the initial set \(\mathcal{X}\) is bounded, we employ a secure _point-based_ state estimator in parallel. Such an estimator provides a point-based estimate of the state and ensures that the bound on the estimation error is unaffected by the attack signal. In the following, we define a point-based secure estimator and use it to modify our proposed algorithm, which ensures that the set-based estimate is stable in the sense of (2).
### Point-based secure state estimator
In the measurement update (9), we observe that the number of zonotopes in the collection given by the estimated set \(\hat{\mathcal{X}}_{k}\) could be very large. To limit the number of candidates in our collection of zonotopes in the measurement update set \(\hat{\mathcal{X}}_{k}\) and consequently, guarantee stability of our algorithm, we make use of a point-based estimator which is _secure_ with respect to the attack signals \(a_{i}\) and robust with respect to disturbances \(w\) and noise \(v\) as follows.
**Assumption 6**.: _There exists a secure point-based state estimator \(E:\mathbb{R}^{m}\times\mathcal{U}\rightarrow\mathbb{R}^{n_{x}}\) which provides a point-based state estimate \(\hat{x}(k)=E(y^{1}(k),\ldots,y^{p}(k),u(k))\) to system (1) satisfying the following for \(k\in\mathbb{Z}_{\geqslant 0}\),_
\[\|\hat{x}(k)-x(k)\|\leqslant\beta(\|\hat{x}(0)-x(0)\|,k)+\alpha\max(\bar{w}, \bar{v}) \tag{12}\]
_for some class \(\mathcal{KL}\) function \(\beta\) and \(\alpha\in\mathbb{R}_{\geqslant 0}\), where \(\|w(k)\|\leqslant\bar{w}\) and \(\|v(k)\|\leqslant\bar{v}\) for all \(k\in\mathbb{Z}_{\geqslant 0}\)._
Note that in Assumption 6, the upper bound on the state estimation error is not affected by the attack signals \(a_{i}\). We call estimators that possess this property _secure_, and results exist in the literature for linear systems (e.g., [13, 16, 20, 26]). So far, these results require at most \(q\leqslant[p/2]-1\) attacked sensors, i.e., strictly less than half the number of sensors can be attacked. This imposes a stricter requirement than Assumption 1(i), which we state below.
**Assumption 7**.: _The number of attacked sensors \(q\in\mathbb{Z}_{\geqslant 0}\) satisfies \(2q<p\), where \(p\in\mathbb{Z}_{\geqslant 0}\) is the total number of sensors. The integers \(p\) and \(q\) are known, but the exact set of attacked sensors is unknown._
In the following section, we describe how we use a point-based _secure_ state estimator to guarantee the stability of the algorithm by pruning the collection of estimated states \(\hat{\mathcal{X}}_{k}\) at each iteration \(k\in\mathbb{Z}_{\geqslant 0}\).
### Modified measurement update
Modification of Algorithm 1, Step 6: We modify the measurement update by the following
\[\hat{\mathcal{X}}_{k}^{\mathrm{mod}}=\hat{\mathcal{X}}_{k|k-1}\cap\{\mathcal{ I}_{k}^{h}\}_{h\in\mathbb{Z}_{[1,n_{c}]}}\cap\mathcal{H}(\hat{x}(k),\Delta_{k}) \tag{13}\]
where \(\hat{\mathcal{X}}_{k|k-1},\mathcal{I}_{k}^{h}\) come from (3) and (8), respectively; the point-based state estimate \(\hat{x}(k)\) is provided by a _secure_
state estimator_ satisfying Assumption 6; and the radius is
\[\Delta_{k}\doteq\beta(\|x(0)-\hat{x}(0)\|,k)+\alpha\max(\bar{w},\bar{v}), \tag{14}\]
where \(\beta\in\mathcal{KL}\), \(\alpha\in\mathbb{R}_{\geqslant 0}\), \(\bar{w}\in\mathbb{R}_{\geqslant 0}\) and \(\bar{v}\in\mathbb{R}_{\geqslant 0}\) come from Assumption 6. Figure 2 illustrates the modified measurement update (13).
It is important to note that the practicality of this pruning algorithm relies on reasonable estimates of \(\beta\in\mathcal{KL}\) and \(\alpha>0\), which are usually difficult to obtain. For linear systems, such estimates can be obtained easily; however, they are usually quite conservative because of their dependence on the condition number of a certain matrix. Nonetheless, this technique is useful in bounding the estimation error.
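For instance, assuming a geometric bound \(\beta(s,k)=Ms\rho^{k}\) (all constants below are purely illustrative), the radius \(\Delta_{k}\) in (14) decays to the attack-independent floor \(\alpha\max(\bar{w},\bar{v})\):

```python
# Illustrative decay of the pruning radius Delta_k in (14), assuming a
# geometric KL bound beta(s, k) = M * s * rho**k (M, rho, alpha, e0 are made up).
M, rho, alpha = 5.0, 0.8, 2.0
w_bar, v_bar, e0 = 0.02, 0.1, 1.0          # noise bounds and initial error norm

def Delta(k):
    return M * e0 * rho**k + alpha * max(w_bar, v_bar)

print([round(Delta(k), 3) for k in (0, 5, 20, 50)])   # [5.2, 1.838, 0.258, 0.2]
```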
To this end, we define the estimation error in terms of the center of \(\hat{\mathcal{Z}}_{k}^{\mathrm{mod}}:=\langle\hat{c}_{z}^{\mathrm{mod}}(k), \hat{G}_{z}^{\mathrm{mod}}(k)\rangle\), the minimal radius zonotope that overbounds \(\hat{\mathcal{X}}_{k}^{\mathrm{mod}}\) in (13) with maximal radius \(\Delta_{k}\), i.e.,
\[\min\;\mathrm{rad}(\hat{\mathcal{Z}}_{k}^{\mathrm{mod}})\leqslant\Delta_{k}\quad\mathrm{subject\;to}\quad\hat{\mathcal{X}}_{k}^{\mathrm{mod}}\subseteq\hat{\mathcal{Z}}_{k}^{\mathrm{mod}}. \tag{15}\]
Since \(\lim_{k\to\infty}\Delta_{k}=\alpha\max(\bar{w},\bar{v})\),
\[\lim_{k\to\infty}\mathrm{rad}(\hat{\mathcal{Z}}_{k}^{\mathrm{mod}})\leqslant \alpha\max(\bar{w},\bar{v}),\]
which only depends on the bounds on the process and measurement noise, and not on the attack signals.
### Bound on the estimation error
We can bound the estimation error \(\hat{c}_{z}^{\mathrm{mod}}(k)-x(k)\) as follows.
**Proposition 8**.: _Let Assumptions 1(ii)-(iii), 6, and 7 hold. Consider Algorithm 1 with a modified measurement update (13). Then, for every \(k\in\mathbb{Z}_{\geqslant 0}\), the overall estimation error bound is given by the following_
\[\|\hat{c}_{z}^{\mathrm{mod}}(k)-x(k)\|\leqslant\Delta_{k} \tag{16}\]
_where \(\Delta_{k}\) is defined in (14) and \(\hat{c}_{z}^{\mathrm{mod}}(k)\) is the center of \(\hat{\mathcal{Z}}_{k}^{\mathrm{mod}}\) given by (15)._
[Proof.] Note that \(x(k)\in\hat{\mathcal{X}}_{k|k-1}\cap\{\mathcal{I}_{k}^{h}\}_{h\in\mathbb{Z}_{[1,n_{c}]}}\) according to Theorem 5 and \(x(k)\in\mathcal{H}(\hat{x}(k),\Delta_{k})\) according to Assumption 6. Therefore, we have that \(x(k)\in\hat{\mathcal{X}}_{k}^{\mathrm{mod}}\). Since \(\hat{\mathcal{X}}_{k}^{\mathrm{mod}}\subseteq\hat{\mathcal{Z}}_{k}^{\mathrm{mod}}\) because of (15), we obtain (16).
Therefore, Proposition 8 enables us to conclude that by implementing a secure point-based estimator with properties given by Assumption 6, we can guarantee that the estimation error satisfies (16) with a bound \(\Delta_{k}\) that asymptotically converges to a bound that does not depend on the attack signals \(a_{i}\), which is unknown, but only depends on the bounds on the process and measurement noise, which are assumed to be known. This is a guarantee that is absent in the purely secure set-based estimation algorithm (Algorithm 1) developed in the previous section.
However, we now have the more restrictive Assumption 7 (in comparison to Assumption 1(i)) that guarantees the existence of a resilient point-based estimator possessing the property in Assumption 6. We summarise the assumptions and guarantees provided by Algorithm 1 and its modification in the following section.
### Comparison and guarantees
We have chosen to develop the secure set-based state estimation scheme in two stages, first in Section 4, then providing a modified algorithm by incorporating a point-based state estimation algorithm in Section 5, to show that while the estimated set \(\hat{\mathcal{X}}_{k}\), \(k\in\mathbb{Z}_{\geqslant 0}\) always contains the true state \(x(k)\), we can ensure that its radius asymptotically decreases up to a margin of error that depends only on the process and measurement noise, which is assumed to be known. This is achieved with the more restrictive assumption on the allowed number of attacked sensors, which is needed to implement the modified algorithm that incorporates a _secure_ point-based state estimator. We summarise this in Table 1.
Figure 2: Modified measurement update
## 6 Discussion on Attack Detection and Computational Complexity
### Algorithm to detect attacked sensors
A notable contribution of the set-based state estimation scheme in this paper over other secure schemes is Assumption 1(i), which allows the attacker to compromise not only up to \(p-1\) sensors at each time instant but also different subsets of sensors at different times. To this end, we remark that we can detect only those compromised sensors that are injected with non-stealthy attack signals. The detection algorithm is fairly simple and can be summarized in Algorithm 2.
```
0: Time update \(\hat{\mathcal{X}}_{k|k-1}\), measurement-consistent sets \(\mathcal{Y}_{k}^{\mathrm{J}_{1}},\ldots,\mathcal{Y}_{k}^{\mathrm{J}_{n_{j}}}\), and a collection \(\{\mathsf{P}_{h}\}_{h\in\mathbb{Z}_{[1,n_{c}]}}\) containing \(n_{c}=\binom{n_{j}}{\eta}\) combinations of indices.
1: Initialize \(\hat{\mathsf{S}}_{k}=\emptyset\).
2:for\(h=1,\ldots,n_{c}\)do
3: Obtain \(\mathcal{I}_{k}^{h}\) using (8).
4:if\(\hat{\mathcal{X}}_{k|k-1}\cap\mathcal{I}_{k}^{h}\neq\emptyset\)then
5: Estimated safe subset \(\hat{\mathsf{S}}_{k}\leftarrow\hat{\mathsf{S}}_{k}\cup\left(\bigcup_{\alpha \in\mathsf{P}_{h}}\mathsf{J}_{\alpha}\right)\).
6:endif
7:endfor
8: Detected attacked sensors \(\hat{\mathsf{A}}_{k}=\mathbb{Z}_{[1,p]}\backslash\hat{\mathsf{S}}_{k}\).
```
**Algorithm 2** Detection of a subset of attacked sensors at time \(k\in\mathbb{Z}_{\geq 1}\)
At time \(k\) and for \(h\in\mathbb{Z}_{[1,n_{c}]}\), Algorithm 2 checks if both the agreement set \(\mathcal{I}_{k}^{h}\) and its intersection with the time update \(\hat{\mathcal{X}}_{k|k-1}\) are non-empty. If that is the case, then the sensors with indices in \(\bigcup_{\alpha\in\mathsf{P}_{h}}\mathsf{J}_{\alpha}\) are either safe or compromised with a stealthy attack signal. Otherwise, the set \(\bigcup_{\alpha\in\mathsf{P}_{h}}\mathsf{J}_{\alpha}\) contains at least one attacked sensor. By checking all the combinations \(\mathsf{P}_{h}\) and storing a 'potentially' safe subset of sensors in \(\hat{\mathsf{S}}_{k}\) at every iteration, a subset of attacked sensors \(\hat{\mathsf{A}}_{k}\) are estimated as those sensors that are not in \(\hat{\mathsf{S}}_{k}\).
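The index bookkeeping of Algorithm 2 can be illustrated by the short sketch below; the emptiness checks of step 4 are supplied here as precomputed Boolean flags for a made-up four-sensor scenario, since in practice they come from the set intersections of Algorithm 1.

```python
# Recover the estimated safe/attacked sensor sets from non-emptiness flags.
from itertools import combinations

p, c_J, eta = 4, 2, 1
subsets = list(combinations(range(1, p + 1), c_J))          # J_1, ..., J_{n_J}
P = list(combinations(range(len(subsets)), eta))            # index combinations P_h

# nonempty[h] is True iff the time update intersects the agreement set I_k^h;
# toy values: every agreement set built from a subset containing sensor 4 is empty
nonempty = [all(4 not in subsets[a] for a in P_h) for P_h in P]

S_hat = set()
for h, P_h in enumerate(P):
    if nonempty[h]:
        for a in P_h:
            S_hat |= set(subsets[a])                         # step 5 of Algorithm 2
A_hat = set(range(1, p + 1)) - S_hat                         # step 8 of Algorithm 2
print(sorted(S_hat), sorted(A_hat))                          # [1, 2, 3] [4]
```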
**Remark 9**: _The estimated safe subset \(\hat{\mathsf{S}}_{k}\) contains the true safe subset \(\mathsf{S}_{k}\) at time \(k\), i.e., \(\mathsf{S}_{k}\subseteq\hat{\mathsf{S}}_{k}\). If there are sensors that are injected with small-valued stealthy attack signals, our detection algorithm fails to recognize those attacks and considers those sensors to be uncompromised. Therefore, the detected set of attacked sensors \(\hat{\mathsf{A}}_{k}\) is only a subset of the true set of attacked sensors._
**Remark 10**: _In Algorithm 2, we detect a subset of attacked sensors at every time \(k\). However, in the time-invariant attack setting where the attacker does not change the subset of compromised sensors, Algorithm 2 can be adapted to cumulatively detect and remove the attacked sensors over time. This can have application in sensor fault detection and isolation as faults can be modeled as naive attacks._
### Attack detection under naive attacks/sensor faults
Algorithm 1 automatically discards attacked sensors when a naive attacker injects large or random attack signals. Consequently, Algorithm 2 can detect such attacks. These attack signals have no effect on the estimated set \(\hat{\mathcal{X}}_{k}\) since the measurement-consistent sets corresponding to the attacked sensors are automatically excluded in (5) and (9), resulting in empty intersections. Even random attack signals that fall within the noise bounds can be detected eventually if the attacker is not intelligent enough to account for the changing orientation and position of the time update set and the measurement-consistent sets with respect to time.
Our proposed framework is well-suited for sensor faults like a complete failure, sensor deterioration, intermittent transmissions, and random bias, all of which can be treated as naive attacks and detected by Algorithm 2. Random attack signals, on the other hand, can be viewed as a consequence of the attacker's limited knowledge of the system and/or noise bounds. Such an attack could also arise because the attacker has limited resources and cannot generate an optimal attack signal at every time instance to guarantee the worst-case complexity, as discussed in the subsequent subsection.
Under the assumption of naive attacks and/or sensor faults, the attacked sensors can be easily detected and discarded, which results in a significant reduction of complexity of the measurement update step (9). The proposed framework is, therefore, well-suited for robust system monitoring, providing a reliable defense against attacks and faults that could compromise system performance and integrity.
\begin{table}
\begin{tabular}{l||l|l} \hline & _Secure set-based state estimator_ & _Modified measurement update_ \\ & (Algorithm 1) & (Section 5.2) \\ \hline \hline _Constraint on the number of attacks_ & \(q<p\) (Assumption 1(i)) & \(2q<p\) (Assumption 7) \\ \hline _Guarantees on the estimation error_ & \(\hat{\mathcal{X}}_{k}\) is bounded at every time \(k\). & \(\hat{\mathcal{X}}_{k}^{\mathrm{mod}}\) is bounded by a shrinking ball \(\Delta_{k}\) in (14), which converges to a smaller ball of radius \(\alpha\max(\bar{w},\bar{v})\) irrespective of the attack signals. \\ \hline \end{tabular}
\end{table}
Table 1: Comparison between the measurement update in Algorithm 1 and the modified measurement update in Section 5.2.
### Worst-case complexity under stealthy attacks
In the worst-case scenario, stealthy attacks on sensors may yield non-empty intersections in (8), resulting in an empty detected set \(\hat{\mathsf{A}}_{k}\). Moreover, the collection of sets obtained from the intersections (5) and (9) can also increase the complexity of Algorithm 1 exponentially, which can overwhelm the available computation resources.
Precisely, if at every time instant \(k\in\mathbb{Z}_{\geq 0}\), the agreement sets \(\mathcal{I}_{k}^{h},\mathcal{I}_{k}^{h^{\prime}}\) are non-empty and distinct for \(h\neq h^{\prime}\), and they all intersect with the time update \(\hat{\mathcal{X}}_{k|k-1}\), then the number of sets in the measurement update \(\hat{\mathcal{X}}_{k}\) turns out to be \(|\hat{\mathcal{X}}_{k|k-1}|\times n_{c}\), where \(|\hat{\mathcal{X}}_{k|k-1}|\) denotes the number of constrained zonotopes in the time update and \(n_{c}\) is given in (7). Note that, since initially \(|\hat{\mathcal{X}}_{1|0}|=1\), we have \(|\hat{\mathcal{X}}_{1}|=n_{c}\) and, by using (3), \(|\hat{\mathcal{X}}_{2|1}|=n_{c}\), implying \(|\hat{\mathcal{X}}_{2}|=n_{c}^{2}\). Thus, it is now straightforward to see that the number of constrained zonotopes in the measurement update \(\hat{\mathcal{X}}_{k}\) is \(n_{c}^{k}\) in the worst-case scenario.
It is important to mention that generating such an intelligent stealthy attack is quite difficult for an attacker. First, it requires that the attacker has complete knowledge of not only the system but also the estimation algorithm. Second, it assumes that the attacker has ample computational resources to compute a feasible attack signal within one time sample, guaranteeing that (i) all agreement sets are non-empty, (ii) they are distinct, and (iii) they all intersect with the time update. From an attacker's perspective, this turns out to be a very difficult task, as it not only imposes multiple hard constraints but also requires the computation within a very short time. Further investigation into the computational complexity of generating such an attack is out of the scope of the current paper and will be addressed in the future.
Apart from the worst-case, there is no doubt that the number of constrained zonotopes in the measurement update could increase with time. Therefore, at every time instant, it is crucial to employ methods discussed in Section 4.5 to reduce the complexity and ensure the computational feasibility of the proposed algorithm. Since each complexity reduction method offers a trade-off between estimation accuracy and complexity, the best method is the one that offers maximum accuracy under the available computational resources.
## 7 Numerical Simulation
We evaluated our proposed algorithms on two examples. The first is an illustrative example that lacks observability from each individual sensor, whereas such observability is provided in the second example, a three-story building structure. The CORA toolbox [6] is used to generate our simulations. We start by analyzing the illustrative example.
### Illustrative example
In this example, we consider a simple two-dimensional linear system without a control input \(u(t)\equiv 0\), where the system matrices are given by
\[A=\left[\begin{array}{cc}1&0\\ 1&1\end{array}\right], C_{1}=[\begin{array}{cc}1&0\end{array}], C_{2}=[\begin{array}{cc}1&1\end{array}],\] \[C_{3}=[\begin{array}{cc}0&1\end{array}], C_{4}=[\begin{array}{cc}1&2\end{array}].\]
The number of sensors is \(p=4\), and we suppose that the attacker can target \(q=2\) sensors at any time instant. In this case, since \(q=p/2\), the point-based state estimators that require \(q<p/2\) cannot be employed. The process noise bound is \(\mathcal{W}=\langle 0,\text{diag}([0.02\,0.02])\rangle\). We have the measurement noise bounds of the four sensors as follows.
\[\mathcal{V}_{1}=\mathcal{V}_{2}=\mathcal{V}_{3}=\mathcal{V}_{4}=\langle 0,1\rangle.\]
Notice that Assumption 1(ii) holds for \(c_{\mathsf{J}}=p-q=2\) because the pair \(\left(A,\left[\begin{array}{cc}C_{i}^{\top}&C_{j}^{\top}\end{array}\right]^{\top}\right)\) is observable for any \(i,j=1,2,3,4\) and \(i\neq j\). Hence, we consider the following subsets \(\mathsf{J}_{\alpha}\) of sensors with cardinality \(|\mathsf{J}_{\alpha}|=c_{\mathsf{J}}\), where \(\alpha=1,\ldots,n_{\mathsf{J}}\) with \(n_{\mathsf{J}}=6\),
\[\begin{array}{llll}\mathsf{J}_{1}=\{1,2\},&\mathsf{J}_{2}=\{1,3\},&\mathsf{J}_{3}=\{1,4\}\\ \mathsf{J}_{4}=\{2,3\},&\mathsf{J}_{5}=\{2,4\},&\mathsf{J}_{6}=\{3,4\}.\end{array}\]
For these sets, we compute the measurement-consistent sets \(\mathcal{Y}_{k}^{\mathsf{J}_{\alpha}}\) using (4). Notice that since \(\eta=1\), we do not need to perform intersections in (8) to compute the agreement sets \(\mathcal{I}_{k}^{h}\). In this example, it turns out that \(\mathcal{I}_{k}^{h}=\mathcal{Y}_{k}^{\mathsf{J}_{\alpha}}\), where \(h=\alpha\) and \(h,\alpha\in\{1,\ldots,6\}\) since \(n_{c}=n_{\mathsf{J}}=6\). Then, the time and measurement updates can be computed using (3) and (9). Fig. 3 illustrates the measurement-consistent sets \(\mathcal{Y}_{k}^{\mathsf{J}_{\alpha}}\) (safe in blue and attacked in red), time update \(\hat{\mathcal{X}}_{k|k-1}\) (green), measurement update \(\hat{\mathcal{X}}_{k}\) (blue), and the true state \(x(k)\) for different sets of sensors attacked at different times, in which the attacker chooses two sensors at each time step in a rotating manner. At time step \(k=2\), we have in Figure 3a sensors 3 and 4 under attack, i.e., we have \(\mathcal{Y}_{k}^{\mathsf{J}_{\alpha}}\), for all \(\alpha\in\{2,\ldots,6\}\), under attack. Then, sensors 1 and 4 are attacked in Fig. 3b, in which the true state is still included in the measurement update set \(\hat{\mathcal{X}}_{k}\). We should note that some attacked measurement-consistent sets are detected and isolated in Fig. 3c as they do not intersect with any of the time update sets. Interestingly, the measurement update sets have small volumes in Fig. 3d, which still contain the true state \(x(k)\). Similarly, we present the computed sets in Fig. 3e and 3f.
### Three-story building structure
We now consider a three-story building structure of [36] described by a mechanical system
\[M\tilde{q}(t)+D\dot{q}(t)+Sq(t)=Ge(t), \tag{17}\]
where \(q(t)\in\mathbb{R}^{3}\) is the vector of relative horizontal displacements of the floors and \(e(t)\in\mathbb{R}\) is the ground acceleration due to earthquake. Also, \(M\in\mathbb{R}^{3\times 3}\) is the mass matrix, \(D\in\mathbb{R}^{3\times 3}\) is the damping matrix, \(S\in\mathbb{R}^{3\times 3}\) is the stiffness matrix, and \(G\in\mathbb{R}^{3}\) is the loading vector. The parameter values of the system (17) are provided by [36, Appendix A] as:
\[M =\text{diag}\big{(}[\,478350\ 478350\ 517790\,]\big{)}\quad\text{(kg)}\] \[D =10^{5}\times\left[\begin{array}{rrr}7.7626&-3.7304&0.6514\\ -3.7304&5.8284&-2.0266\\ 0.6514&-2.0266&2.4458\end{array}\right]\quad\text{(Ns/m)}\] \[S =10^{8}\times\left[\begin{array}{rrr}4.3651&-2.3730&0.4144\\ -2.3730&3.1347&-1.2892\\ 0.4144&-1.2892&0.9358\end{array}\right]\quad\text{(N/m)}\] \[G =\left[\,478350\ 478350\ 517790\,\right]^{T}\quad\text{(kg)}.\]
By considering the state \(x(t)=[\,q(t)^{T}\ \dot{q}(t)^{T}\,]^{T}\), we can obtain the state-space representation in continuous time
\[\dot{x}=A_{c}x+E_{c}e\]
Figure 3: Snapshots of estimated sets using Algorithm 1 under time-varying attack, where different sensors are attacked at different time steps.
where
\[A_{c}=\left[\begin{array}{cc}0_{3\times 3}&I_{3}\\ -M^{-1}S&-M^{-1}D\end{array}\right],\quad E_{c}=\left[\begin{array}{c}0_{3 \times 1}\\ -M^{-1}G\end{array}\right].\]
After discretization with sample time \(\delta\), we obtain the system in the form (1a), where
\[A = \exp(A_{c}\delta)\] \[w(k) = A_{c}^{-1}(A-I_{6})E_{c}e(k).\]
Notice that we do not consider a control input in this example, i.e., \(Bu(k)\equiv 0\). Here, our goal is to monitor the building dynamics under an earthquake, which is assumed to be the process noise or disturbance, using secure set-based state estimation.
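The discretization step above can be reproduced numerically as follows; the sampling time \(\delta=0.01\) s is an assumed value chosen only for illustration, and the ground acceleration enters through the disturbance mapping.

```python
# Discretization A = exp(A_c * delta) of the three-story building model; the
# parameter values follow the listing above, the sampling time is assumed.
import numpy as np
from scipy.linalg import expm

M = np.diag([478350.0, 478350.0, 517790.0])
D = 1e5 * np.array([[ 7.7626, -3.7304,  0.6514],
                    [-3.7304,  5.8284, -2.0266],
                    [ 0.6514, -2.0266,  2.4458]])
S = 1e8 * np.array([[ 4.3651, -2.3730,  0.4144],
                    [-2.3730,  3.1347, -1.2892],
                    [ 0.4144, -1.2892,  0.9358]])
G = np.array([478350.0, 478350.0, 517790.0])

Minv = np.linalg.inv(M)
A_c = np.block([[np.zeros((3, 3)), np.eye(3)],
                [-Minv @ S,        -Minv @ D]])
E_c = np.concatenate([np.zeros(3), -Minv @ G])

delta = 0.01                                   # assumed sampling time [s]
A = expm(A_c * delta)
# disturbance mapping w(k) = A_c^{-1} (A - I_6) E_c e(k)
w_gain = np.linalg.solve(A_c, (A - np.eye(6)) @ E_c)
print(A.shape, np.linalg.norm(w_gain))
```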
We assume that each floor of the building is equipped with a sensor, i.e., \(p=3\), that measures the relative displacement and the velocity of that floor, which can be collected in the output vector \(y_{i}(k)\in\mathbb{R}^{3}\) as given by (1b), for \(i\in\mathbb{Z}_{[1,3]}\), where
\[C_{1} = \left[\begin{array}{cccccc}1&-1&0&0&0&0\\ 1&0&-1&0&0&0\\ 0&0&0&1&0&0\end{array}\right]\] \[C_{2} = \left[\begin{array}{cccccc}-1&1&0&0&0&0\\ 0&1&-1&0&0&0\\ 0&0&0&0&1&0\end{array}\right]\] \[C_{3} = \left[\begin{array}{cccccc}-1&0&1&0&0&0\\ 0&-1&1&0&0&0\\ 0&0&0&0&0&1\end{array}\right].\]
We suppose that the attacker can compromise only one sensor at each time step, i.e., \(q=1\). Assumption 1(ii) holds for \(c_{\mathsf{J}}=1\) because all the pairs \((A,C_{1}),(A,C_{2}),(A,C_{3})\) are observable. Then, our goal is to monitor the building displacements and velocities irrespective of the compromised sensor. The noise bound is \(e(k)\in\langle 0,500\rangle\). We have the measurement noise bounds of the three sensors as follows.
Fig. 4: Snapshots of estimated sets using Algorithm 1 under time-varying attack, where different sensors are attacked at different time steps.
\[\mathcal{V}_{1}=\mathcal{V}_{2}=\mathcal{V}_{3}=\langle 0,\operatorname{diag}([1 1\,1])\rangle.\]
To illustrate the efficacy of our algorithm, we apply a powerful, time-varying attack to our system in which the attacker randomly chooses a sensor \(i\) at every time step \(k\) and injects false data into its measurement \(y_{i}(k)\). In Fig. 4a, Sensor 3 is under attack with a large attack value, and we have one estimated measurement update set (black). Then, Sensor 1 is attacked in Fig. 4b in which the number of estimated measurement update sets is increasing due to having a small attack value. Finally, other sensors are attacked in Fig. 4c-4f. Although the complexity increases in such attacks, it is worth noting that the true state \(x(k)\) remains enclosed by the estimated measurement update sets at all time steps. Also, the estimation error remains bounded, and the attacker cannot destroy the accuracy of the set-based state estimate.
## 8 Conclusion and future work
We presented a novel set-based state estimation algorithm that can estimate the system state even when all but one sensor could be compromised by an adversary. Our proposed algorithm overcomes the limitation of point-based secure estimators that restrict the number of attacked sensors to strictly less than half the total number of sensors. We achieved this by constructing agreement sets from the intersection of various combinations of measurement-consistent sets. We showed that our algorithm guarantees the inclusion of the true state in the estimated set, provided that the system remains observable from every combination of \(p-q\) sensors, i.e., from every subset whose size equals the number of uncompromised sensors. Moreover, we proposed a simple algorithm to identify the set of compromised sensors, which can aid in addressing intelligent and stealthy attacks.
While our algorithm's worst-case complexity may increase exponentially under intelligent and stealthy attacks, we argued that it is challenging for attackers to execute such attacks due to the requirement of a complete understanding of the system and algorithm and substantial computational resources. We suggested various strategies to reduce the complexity of our algorithm to facilitate its implementation.
We also incorporated a point-based resilient state observer into our algorithm to prune the candidate sets when less than half of the sensors have been attacked. This strategy's effectiveness depends on an accurate approximation of the guaranteed estimation error provided by the resilient point-based state observer. Nonetheless, the modified algorithm provides asymptotic convergence guarantees with an explicit bound that depends on the known process and measurement noise, independent of the attack signals. Our future work will focus on the set-based secure state estimation of nonlinear systems and developing a data-driven approach for secure estimation when the system model is unknown.
|
2309.12509 | Integrable systems on symmetric spaces from a quadratic pencil of Lax
operators | The article surveys the recent results on integrable systems arising from
quadratic pencil of Lax operator L, with values in a Hermitian symmetric space.
The counterpart operator M in the Lax pair defines positive, negative and
rational flows. The results are illustrated with examples from the A.III
symmetric space. The modeling aspect of the arising higher order nonlinear
Schr\"odinger equations is briefly discussed. | Rossen I. Ivanov | 2023-09-21T22:11:28Z | http://arxiv.org/abs/2309.12509v1 | # Integrable systems on symmetric spaces from a quadratic pencil of Lax operators
###### Abstract
The article surveys the recent results on integrable systems arising from quadratic pencil of Lax operator \(L\), with values in a Hermitian symmetric space. The counterpart operator \(M\) in the Lax pair defines positive, negative and rational flows. The results are illustrated with examples from the A.III symmetric space. The modeling aspect of the arising higher order nonlinear Schrodinger equations is briefly discussed.
Keywords: 2010 Mathematics Subject Classification: 2010
### Outline of the NLS-type models
The NLS-type equations on symmetric spaces introduced by Fordy and Kulish originate from a spectral problem
\[i\psi_{x}=(\lambda J+Q(x,t))\psi\equiv L(\lambda)\psi, \tag{4}\] \[i\psi_{t}=P(x,t,\lambda)\psi\equiv M(\lambda)\psi,\qquad Q(x,t) \in\mathfrak{m},\quad P(x,t,\lambda)\in\mathfrak{g}.\]
Clearly, we have the natural decomposition \(P=P_{\mathfrak{t}}(x,t,\lambda)+P_{\mathfrak{m}}(x,t,\lambda)\). The compatibility of the operators \(L\) and \(M\) (4) gives
\[iQ_{t}=iP_{x}-[Q,P]-\lambda[J,P_{\mathfrak{m}}]\]
where we have used \([J,P_{\mathfrak{t}}]=0\). Applying \(\mathfrak{t}-\mathfrak{m}\) decomposition, this splits further to
\[iQ_{t}=iP_{\mathfrak{m},x}-[Q,P_{\mathfrak{t}}]-\lambda[J,P_{ \mathfrak{m}}],\] \[iP_{\mathfrak{t},x}=[Q,P_{\mathfrak{m}}].\]
From the last relation we determine
\[P_{\mathfrak{t}}=-i\partial^{-1}[Q,P_{\mathfrak{m}}].\]
Then \(Q_{t}=(\partial+\mathrm{ad}_{Q}\partial^{-1}\mathrm{ad}_{Q}+i\lambda\mathrm{ ad}_{J})P_{\mathfrak{m}}\). Assuming \(P=\sum_{j=0}^{N}P^{(j)}\lambda^{j}\), we have for \(\lambda^{0}\)
\[iQ_{t}=iP_{\mathfrak{m},x}^{(0)}-[Q,P_{\mathfrak{t}}^{(0)}]=iP_{ \mathfrak{m},x}^{(0)}+i[Q,\partial^{-1}[Q,P_{\mathfrak{m}}^{(0)}]] \tag{5}\]
and for the higher powers of \(\lambda\)
\[(\partial+\mathrm{ad}_{Q}\partial^{-1}\mathrm{ad}_{Q})P_{\mathfrak{m}}^{(j)}=- i\mathrm{ad}_{J}P_{\mathfrak{m}}^{(j-1)},\quad j=1,...,N+1,\]
When \(N=2\) for example, \([J,P_{\mathfrak{m}}^{(2)}]=0\) and we take \(P_{\mathfrak{m}}^{(2)}=0\). One can take \(P_{\mathfrak{t}}^{(2)}=J\). Then \([J,P_{\mathfrak{m}}^{(1)}]=[P_{\mathfrak{t}}^{(2)},Q]=[J,Q]\), so that \(P_{\mathfrak{m}}^{(1)}=Q\) and \(P_{\mathfrak{t}}^{(1)}=0\). Furthermore, \(iP_{\mathfrak{m},x}^{(1)}=[J,P_{\mathfrak{m}}^{(0)}]\) will provide \(P_{\mathfrak{m}}^{(0)}\) in terms of \(Q\),
\[P_{\mathfrak{m}}^{(0)}=i\mathrm{ad}_{J}^{-1}Q_{x}. \tag{6}\]
The nonlinear evolution equation (5) can be written down also as
\[iQ_{t}+\mathrm{ad}_{J}^{-1}Q_{xx}+[Q,\partial^{-1}[Q,\mathrm{ad}_{J}^{-1}Q_{x} ]]=0, \tag{7}\]
which represents the generalisation of the NLS. The invertibility of \(\mathrm{ad}_{J}\) on the symmetric space \(\mathfrak{m}\) could be illustrated explicitly as follows. The variable \(Q\in\mathfrak{m}\) admits a decomposition over the Cartan-Weyl basis \(\{E_{\pm\alpha},\alpha\in\theta^{+}\}\) (\(\theta^{+}\) is a subspace of the root space of \(\mathfrak{g}\)) spanning \(\mathfrak{m}\) :
\[Q=\sum_{\alpha\in\theta^{+}}(q^{\alpha}E_{\alpha}+p^{\alpha}E_{-\alpha}).\]
Since \(iQ_{x}=iP_{\mathfrak{m},x}^{(1)}=[J,P_{\mathfrak{m}}^{(0)}]\) and \(\mathrm{ad}_{J}(E_{\pm\alpha})=\pm aE_{\pm\alpha}\) when \(\alpha\in\theta^{+}\), then
\[P_{\mathfrak{m}}^{(0)}=\frac{i}{a}\sum_{\alpha\in\theta^{+}}(q_{x}^{\alpha}E_{\alpha}-p_{x}^{\alpha}E_{-\alpha})\]
could be obtained explicitly.
Let us present an example with the symmetric space **A.III** (see [2; 7]), which bears also the notation
\(SU(m+n)/S(U(m)\times U(n))\). The complexification of the algebra \(su(n,\mathbb{R})\) is isomorphic to \(sl(n,\mathbb{C})\sim A_{n-1}\). Let us specify \(m=n=2\), \(J=\frac{1}{2}\mathrm{diag}(1,1,-1,-1)\) and
\[Q=\left(\begin{array}{cccc}0&0&q_{1}&q_{2}\\ 0&0&q_{4}&q_{3}\\ -\bar{q}_{1}&-\bar{q}_{4}&0&0\\ -\bar{q}_{2}&-\bar{q}_{3}&0&0\end{array}\right).\]
This parameterization assumes \(p^{\alpha}=-\bar{q}^{\alpha}\). The equations written for each component are
\[iq_{1,t}=q_{1,xx}+2q_{1}(|q_{1}|^{2}+|q_{2}|^{2}+|q_{4}|^{2})+2q_{2}q_{4}\bar{q}_{3}\] \[iq_{2,t}=q_{2,xx}+2q_{2}(|q_{1}|^{2}+|q_{2}|^{2}+|q_{3}|^{2})+2q_{1}q_{3}\bar{q}_{4}\] \[1\leftrightarrow 3,\quad 2\leftrightarrow 4.\]
In general, for the symmetric space **A.III**, the matrix notations are very useful: \(J=\frac{1}{2}\text{diag}(\mathbf{1}_{n},-\mathbf{1}_{m})\) and
\[Q=\left(\begin{array}{cc}0&\boldsymbol{q}\\ \boldsymbol{p}&0\end{array}\right)\]
where \(\mathbf{1}_{n}\) is the \(n\times n\) unit matrix, \(\boldsymbol{q}\) and \(\boldsymbol{p}^{T}\) are matrices \(n\times m\) (\(\boldsymbol{q}\) is a vector if \(n=1\).)
The equations, written in matrix form are
\[i\frac{\partial\boldsymbol{q}}{\partial t} =\frac{\partial^{2}\boldsymbol{q}}{\partial x^{2}}+2\boldsymbol{ q}\boldsymbol{p}\boldsymbol{q}, \tag{8}\] \[-i\frac{\partial\boldsymbol{p}}{\partial t} =\frac{\partial^{2}\boldsymbol{p}}{\partial x^{2}}+2\boldsymbol{ p}\boldsymbol{q}\boldsymbol{p}.\]
Further reduction \(\boldsymbol{p}=\pm\boldsymbol{q}^{\dagger}\) leads to
\[i\frac{\partial\boldsymbol{q}}{\partial t}=\frac{\partial^{2}\boldsymbol{q}}{ \partial x^{2}}\pm 2\boldsymbol{q}\boldsymbol{q}^{\dagger}\boldsymbol{q}.\]
If \(\boldsymbol{q}\) is a vector (\(n=1\)) this is the vector NLS (Manakov's equation), where \(\boldsymbol{q}\boldsymbol{q}^{\dagger}=||\boldsymbol{q}||^{2}\) is the squared norm of the vector.
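As a consistency check connecting the matrix form (8) with the \(2\times 2\) example above, expanding the cubic term \(\boldsymbol{q}\boldsymbol{q}^{\dagger}\boldsymbol{q}\) reproduces the nonlinear terms of the component equations (up to the overall sign fixed by the choice \(\boldsymbol{p}=\pm\boldsymbol{q}^{\dagger}\)). A short sympy sketch for the \((1,1)\) entry:

```python
# Expand the cubic term q q^dagger q of the matrix NLS for the 2x2 A.III example
# and compare its (1,1) entry with the nonlinear part of the first component equation.
import sympy as sp

q1, q2, q3, q4 = sp.symbols('q1 q2 q3 q4')
Q = sp.Matrix([[q1, q2], [q4, q3]])

cubic = Q * Q.H * Q                        # Q.H is the conjugate transpose
expected_11 = (q1 * (q1*sp.conjugate(q1) + q2*sp.conjugate(q2) + q4*sp.conjugate(q4))
               + q2 * q4 * sp.conjugate(q3))
print(sp.expand(cubic[0, 0] - expected_11))   # 0
```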
Another possible reduction which recently gained popularity is \(\boldsymbol{p}(x,t)=\pm\boldsymbol{q}^{T}(-x,-t),\) leading to the non-local NLEE
\[i\frac{\partial\boldsymbol{q}}{\partial t}=\frac{\partial^{2}\boldsymbol{q}}{ \partial x^{2}}\pm 2\boldsymbol{q}\boldsymbol{q}^{T}(-x,-t)\boldsymbol{q}.\]
A nicely written review article [7] introduces the basic features of the integrable equations on symmetric spaces. It describes the fundamental properties and underlying structures of the nonlinear evolution equations (NLEE), based on the expansions of \(Q,Q_{t}\) over the squared eigenfunctions of \(L\) (4) (which are eigenfunctions of the recursion operator), the hierarchy of NLEE, the hierarchy of integrals of motion and the hierarchy of Hamiltonian structures. The completeness of the squared eigenfunctions is of fundamental significance, since it makes the Inverse Scattering Transform a generalised Fourier Transform. For an \(sl(2)\)-valued Lax operator the completeness has been formulated by D.J. Kaup [18], then proven rigorously by V Gerdjikov and E. Hristov [19; 20]. There the authors introduced also the symplectic basis, which maps the potential \(Q(x,t)\) onto the action-angle variables of NLS. A general result, for \(Q,J\in\mathfrak{g},J\in\mathfrak{h}\) for a semisimple \(\mathfrak{g}\) has been formulated and proven by V. Gerdjikov, in [9].
For systems on symmetric spaces, as well as for systems with \(\mathbb{Z}_{h}\) - Mikhailov reductions [21] the expansions over the squared eigenfunctions have been derived in [7; 22; 23; 24; 25]. As a result, the spectral theory of the Lax operators has been described and the arising hierarchies of Hamiltonian structures have been studied.
## 2 Quadratic pencil of Lax operators
Lax pairs for which \(L\) is quadratic in the spectral parameter \(\lambda\) also lead to multicomponent integrable systems. These systems generalise the family of the DNLS equations: DNLS I, or Kaup-Newell equation [26; 27; 28], DNLS II [29], DNLS III, or Gerdjikov-Ivanov equation [30; 31], the Fokas-Lenells equation [32], and others. Now we illustrate these possibilities for an \(M\)-operator representing "positive", "negative" and "rational" flows [5; 12; 13; 22; 33; 34].
"Positive" flows: DNLS equations on symmetric spaces
Let us illustrate this possibility again with the **A.III** Hermitian symmetric space, \(SU(m+n)/S(U(m)\otimes U(n))\). The Lax operators are given by [5]:
\[\begin{split} L\psi&\equiv i\frac{\partial\psi}{ \partial x}+(U_{2}(x,t)+\lambda Q(x,t)-\lambda^{2}J)\psi(x,t,\lambda)=0,\qquad Q (x,t)=\left(\begin{array}{cc}0&\mathbf{q}\\ \mathbf{p}&0\end{array}\right),\\ M\psi&\equiv i\frac{\partial\psi}{\partial t}+(V_{4}(x,t)+\lambda V_{3}(x,t)+ \lambda^{2}V_{2}(x,t)+\lambda^{3}Q(x,t)-\lambda^{4}J)\psi=0.\end{split} \tag{9}\]
where \(Q(x,t)\), \(V_{3}(x,t)\in\mathfrak{m}\) and \(U_{2}(x,t)\), \(V_{2}(x,t)\) and \(V_{4}(x,t)\in\mathfrak{k}\), \(\mathbf{q}\) is a \(n\times m\) matrix, \(\mathbf{p}\) is a \(m\times n\) matrix.
Such Lax pairs give rise to multicomponent derivative NLS type equations. Indeed, the Lax pair (9) leads to the system of NLEE generalising the DNLS III equation (V. Gerdjikov - M. Ivanov, [30; 31]):
\[\begin{split} i\frac{\partial\mathbf{q}}{\partial t}+\frac{1}{2} \frac{\partial^{2}\mathbf{q}}{\partial x^{2}}-\frac{i}{2}\mathbf{q}\frac{\partial\bm {p}}{\partial x}\mathbf{q}+\frac{1}{4}\mathbf{q}\mathbf{p}\mathbf{q}\mathbf{p}\mathbf{q}=0,\\ -i\frac{\partial\mathbf{p}}{\partial t}+\frac{1}{2}\frac{\partial^{2} \mathbf{p}}{\partial x^{2}}+\frac{i}{2}\mathbf{p}\frac{\partial\mathbf{q}}{\partial x}\bm {p}+\frac{1}{4}\mathbf{p}\mathbf{q}\mathbf{p}\mathbf{q}\mathbf{p}=0.\end{split} \tag{10}\]
Reductions, relating \(\mathbf{p}\) and \(\mathbf{q}\) are possible, such as \(\mathbf{p}=\pm\mathbf{q}^{\dagger}\) and \(\mathbf{p}(x,t)=\pm\mathbf{q}^{T}(-x,-t)\).
Along with (9) one can consider the Lax pair [5]:
\[\begin{split}\tilde{L}\tilde{\psi}&\equiv i\frac{ \partial\tilde{\psi}}{\partial x}+(\lambda\tilde{Q}(x,t)-\lambda^{2}J)\tilde{ \psi}(x,t,\lambda)=0,\quad\tilde{Q}(x,t)=\left(\begin{array}{cc}0&\tilde{ \mathbf{q}}\\ \tilde{\mathbf{p}}&0\end{array}\right),\\ \tilde{M}\tilde{\psi}&\equiv i\frac{\partial\tilde{\psi}}{ \partial t}+(\lambda\tilde{V}_{3}(x,t)+\lambda^{2}\tilde{V}_{2}(x,t)+\lambda^ {3}\tilde{Q}(x,t)-\lambda^{4}J)\tilde{\psi}(x,t,\lambda)=0.\end{split} \tag{11}\]
This Lax pair is gauge equivalent to the previous one (9). The corresponding system generalises the Kaup-Newell equation [26]:
\[\begin{split} i\frac{\partial\tilde{\mathbf{q}}}{\partial t}+\frac{ \partial^{2}\tilde{\mathbf{q}}}{\partial x^{2}}+i\frac{\partial\tilde{\mathbf{q}} \tilde{\mathbf{p}}\tilde{\mathbf{q}}}{\partial x}=0,\\ -i\frac{\partial\tilde{\mathbf{p}}}{\partial t}+\frac{\partial^{2} \tilde{\mathbf{p}}}{\partial x^{2}}-i\frac{\partial\tilde{\mathbf{p}}\tilde{\mathbf{q}} \tilde{\mathbf{p}}}{\partial x}=0.\end{split} \tag{12}\]
This system also complies with the grading introduced by the symmetric space: \(\tilde{Q}(x,t)\), \(\tilde{V}_{3}(x,t)\in\mathfrak{m}\) and \(\tilde{V}_{2}(x,t)\in\mathfrak{k}\).
Reductions relating \(\tilde{\boldsymbol{p}}\) and \(\tilde{\boldsymbol{q}}\) are possible, such as \(\tilde{\boldsymbol{p}}=\pm\tilde{\boldsymbol{q}}^{\dagger}\) and \(\tilde{\boldsymbol{p}}(x,t)=\pm\tilde{\boldsymbol{q}}^{T}(-x,-t)\). Other examples of the generalised KN system, related to the **A.III** and **BD.I** symmetric spaces, are presented in [12; 33].
### Negative flows: Fokas-Lenells (FL) equation on symmetric spaces
The Fokas-Lenells equations (see [13] for details) are associated to the so-called _negative flows_ and the following Lax pair
\[\begin{split} i\Psi_{x}+(\lambda Q_{x}-\lambda^{2}J)\Psi=0,\\ i\Psi_{t}+\left(\lambda Q_{x}+V_{0}+\lambda^{-1}V_{-1}-(\lambda^{2}-\frac{2}{a}+\frac{1}{a^{2}\lambda^{2}})J\right)\Psi=0,\end{split} \tag{13}\]
\[\begin{split}\text{where}\quad Q(x,t)=\sum_{\mathbf{\alpha}\in\theta^{ +}}(q^{\alpha}E_{\alpha}+p^{\alpha}E_{-\alpha})\in\mathfrak{m}.\end{split} \tag{14}\]
From the compatibility condition of the Lax operators one can determine
\[V_{-1}=\frac{i}{a}\sum_{\alpha\in\theta^{+}}(q^{\alpha}E_{\alpha}-p^{\alpha}E_{-\alpha})\in\mathfrak{m},\qquad V_{0}=\frac{1}{a}\sum_{\alpha,\beta\in\theta^{+}}q^{\alpha}p^{\beta}[E_{\alpha},E_{-\beta}]\in\mathfrak{k}. \tag{15}\]
As an example we take the **A.III** Hermitian symmetric space, \(SU(m+n)/S(U(m)\otimes U(n))\), with \(J\) and \(Q\) taken as matrices in the form
\[J=\frac{1}{m+n}\left(\begin{array}{cc}n\mathbf{1}_{m}&0\\ 0&-m\mathbf{1}_{n}\end{array}\right)\in\mathfrak{k},\quad Q(x,t)=\left(\begin{array} []{cc}0&\boldsymbol{q}\\ \boldsymbol{p}&0\end{array}\right)\in\mathfrak{m} \tag{16}\]
and
\[V_{-1}(x,t)=i\left(\begin{array}{cc}0&\boldsymbol{q}\\ -\boldsymbol{p}&0\end{array}\right),\quad V_{0}(x,t)=\left(\begin{array}{ cc}\boldsymbol{qp}&0\\ 0&-\boldsymbol{pq}\end{array}\right). \tag{17}\]
The equations in block-matrix form are
\[\begin{array}{l}i\left(\boldsymbol{q}_{xt}-\boldsymbol{q}_{xx}+\boldsymbol{q}\right)+2\boldsymbol{q}_{x}+\left(\boldsymbol{q}_{x}\boldsymbol{p}\boldsymbol{q}+\boldsymbol{q}\boldsymbol{p}\boldsymbol{q}_{x}\right)=0,\\ i\left(\boldsymbol{p}_{xt}-\boldsymbol{p}_{xx}+\boldsymbol{p}\right)-2\boldsymbol{p}_{x}-\left(\boldsymbol{p}_{x}\boldsymbol{q}\boldsymbol{p}+\boldsymbol{p}\boldsymbol{q}\boldsymbol{p}_{x}\right)=0.\end{array} \tag{18}\]
Introducing new matrices \(\boldsymbol{u},\boldsymbol{v}\) such that
\[\boldsymbol{q}=e^{-ix}\boldsymbol{u},\qquad\boldsymbol{p}=e^{ix}\boldsymbol{v}\]
we represent the equations (18) in the form
\[\begin{array}{l}i\boldsymbol{u}_{t}-\boldsymbol{u}_{xt}+\boldsymbol{u}_{xx}+\left(\boldsymbol{u}+i\boldsymbol{u}_{x}\right)\boldsymbol{v}\boldsymbol{u}+\boldsymbol{u}\boldsymbol{v}(\boldsymbol{u}+i\boldsymbol{u}_{x})=0,\\ -i\boldsymbol{v}_{t}-\boldsymbol{v}_{xt}+\boldsymbol{v}_{xx}+\left(\boldsymbol{v}-i\boldsymbol{v}_{x}\right)\boldsymbol{u}\boldsymbol{v}+\boldsymbol{v}\boldsymbol{u}(\boldsymbol{v}-i\boldsymbol{v}_{x})=0.\end{array} \tag{19}\]
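For the scalar case (\(m=n=1\)) the change of variables can be verified symbolically: substituting \(q=e^{-ix}u\), \(p=e^{ix}v\) into the first equation of (18) reproduces the first equation of (19) up to an overall factor \(ie^{ix}\). A brief sympy sketch:

```python
# Scalar (m = n = 1) check of the change of variables q = exp(-i x) u, p = exp(i x) v
# mapping the first equation of (18) to the first equation of (19).
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)
v = sp.Function('v')(x, t)
q = sp.exp(-sp.I*x)*u
p = sp.exp(sp.I*x)*v

eq18 = sp.I*(sp.diff(q, x, t) - sp.diff(q, x, 2) + q) + 2*sp.diff(q, x) \
       + (sp.diff(q, x)*p*q + q*p*sp.diff(q, x))
eq19 = sp.I*sp.diff(u, t) - sp.diff(u, x, t) + sp.diff(u, x, 2) \
       + (u + sp.I*sp.diff(u, x))*v*u + u*v*(u + sp.I*sp.diff(u, x))

print(sp.simplify(sp.I*sp.exp(sp.I*x)*eq18 - eq19))   # 0
```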
The first reduction involves Hermitian conjugation, \(\boldsymbol{v}=\pm\boldsymbol{u}^{\dagger}\); the equations (19) reduce to
\[i\boldsymbol{u}_{t}-\boldsymbol{u}_{xt}+\boldsymbol{u}_{xx}\pm(2\boldsymbol{ u}\boldsymbol{u}^{\dagger}\boldsymbol{u}+i\boldsymbol{u}_{x}\boldsymbol{u}^{ \dagger}\boldsymbol{u}+i\boldsymbol{u}\boldsymbol{u}^{\dagger}\boldsymbol{u}_ {x})=0. \tag{20}\]
The second reduction \(\boldsymbol{v}(x,t)=\pm\boldsymbol{u}^{T}(-x,-t)\), leads to the following nonlocal equation
\[i\boldsymbol{u}_{t}-\boldsymbol{u}_{xt}+\boldsymbol{u}_{xx}\pm(2\boldsymbol{ u}\tilde{\boldsymbol{u}}\boldsymbol{u}+i\boldsymbol{u}_{x}\tilde{\boldsymbol{u}} \boldsymbol{u}+i\boldsymbol{u}\tilde{\boldsymbol{u}}\boldsymbol{u}_{x})=0, \tag{21}\]
where \(\tilde{\boldsymbol{u}}=\boldsymbol{u}^{T}(-x,-t)\).
### Quadratic pencil - Rational flows
The spectral problem is quadratic with respect to the spectral parameter \(\lambda\),
\[\begin{array}{l}i\frac{\partial\psi}{\partial x}=L\psi=(\lambda^{2}J+ \lambda Q+P)\psi(x,t,\lambda),\\ i\frac{\partial\psi}{\partial t}=M\psi=\frac{1}{\lambda^{2}-\zeta^{2}}\left( \lambda^{2}J+\lambda U+W\right)\psi(x,t,\lambda),\end{array} \tag{22}\]
where \(L,M\in\mathfrak{g}\), \(P,W\in\mathfrak{k}\) and \(Q,U\in\mathfrak{m}\). The details could be found in [34].
The matrix realisation of the symmetric spaces is with block matrices, such that the splitting (1) is related to the matrix block structure for the corresponding symmetric space. For the **A.III** symmetric space the block structure is
\[\begin{array}{l}i\frac{\partial\psi}{\partial x}=L\psi=\left(\begin{array} []{cc}\frac{1}{2}\lambda^{2}\mathbf{1}+\boldsymbol{p}_{1}(x,t)&\lambda \boldsymbol{q}\\ \lambda\boldsymbol{r}&-\frac{1}{2}\lambda^{2}\mathbf{1}+\boldsymbol{p}_{2}(x, t)\end{array}\right)\psi(x,t,\lambda),\\ i\frac{\partial\psi}{\partial t}=M\psi=\frac{1}{\lambda^{2}-\zeta^{2}} \left(\begin{array}{cc}\frac{1}{2}\lambda^{2}\mathbf{1}+\boldsymbol{w}_{1} (x,t)&\lambda\boldsymbol{u}\\ \lambda\boldsymbol{v}&-\frac{1}{2}\lambda^{2}\mathbf{1}+\boldsymbol{w}_{2}(x,t) \end{array}\right)\psi,\end{array} \tag{23}\]
where \(\boldsymbol{q},\boldsymbol{r},\boldsymbol{u},\boldsymbol{v},\boldsymbol{p}_{1},\boldsymbol{p}_{2},\boldsymbol{w}_{1},\boldsymbol{w}_{2}\) are matrices of corresponding dimensions, \(\zeta\) is a constant.
The compatibility condition, after a change of the variables \(x\) and \(t\), gives
\[\begin{split} i\mathbf{w}_{1,t}-\zeta^{2}i(\mathbf{qr})_{x}+[\mathbf{qr},\mathbf{w}_ {1}]&=0,\\ -i\mathbf{w}_{2,t}-\zeta^{2}i(\mathbf{rq})_{x}+[\mathbf{rq},\mathbf{w}_{2}]& =0,\\ i\mathbf{q}_{t}+\mathbf{q}_{xt}+\zeta^{2}i\mathbf{q}_{x}+\mathbf{qr}(\mathbf{q}-i \mathbf{q}_{x})+(\mathbf{q}-i\mathbf{q}_{x})\mathbf{r}\mathbf{q}+\mathbf{w}_{1}\mathbf{q}-\mathbf{q}\mathbf{w}_{2}& =0,\\ -ir_{t}+\mathbf{r}_{xt}-\zeta^{2}ir_{x}+(\mathbf{r}+i\mathbf{r}_{x})\mathbf{qr}+ \mathbf{r}\mathbf{q}(\mathbf{r}+i\mathbf{r}_{x})+\mathbf{r}\mathbf{w}_{1}-\mathbf{w}_{2}\mathbf{r}& =0.\end{split} \tag{24}\]
The reduction \(\mathbf{r}=\mathbf{q}^{\dagger}\) leads to \(\mathbf{w}_{1}=\mathbf{w}_{1}^{\dagger}\), \(\mathbf{w}_{2}=\mathbf{w}_{2}^{\dagger}\), and the coupled system of equations
\[i\mathbf{w}_{1,t}-\zeta^{2}i(\mathbf{qq}^{\dagger})_{x}+[\mathbf{qq}^{\dagger },\mathbf{w}_{1}] =0,\] \[-i\mathbf{w}_{2,t}-\zeta^{2}i(\mathbf{q}^{\dagger}\mathbf{q})_{x}+[\mathbf{q}^{ \dagger}\mathbf{q},\mathbf{w}_{2}] =0,\] \[i\mathbf{q}_{t}+\mathbf{q}_{xt}+\zeta^{2}i\mathbf{q}_{x}+\mathbf{qq}^{\dagger}( \mathbf{q}-i\mathbf{q}_{x})+(\mathbf{q}-i\mathbf{q}_{x})\mathbf{q}^{\dagger}\mathbf{q}+\mathbf{w}_{1}\mathbf{q }-\mathbf{q}\mathbf{w}_{2}\!=\!0.\]
The reduction \(\mathbf{r}=-\mathbf{q}^{\dagger}\) is also possible. If \(q=\mathbf{q}\) is just a scalar we have the one-component integrable equation
\[iq_{t}+\zeta^{2}iq_{x}+q_{xt}\pm 2|q|^{2}(q-iq_{x})\mp 2\zeta^{2}q\partial_{t}^{ -1}(|q|^{2})_{x}=0. \tag{25}\]
The integration operator \(\partial_{t}^{-1}\) could be understood as \(\int_{-\infty}^{t}dt^{\prime}\), which leads to a hysteresis term in the equation. We assume for simplicity that all functions are from the Schwartz class \(\mathcal{S}(\mathbb{R})\) in \(x\) for all values of \(t\). A non-local reduction exists, \(r(x,t)=\pm q(-x,-t)\), (scalar functions). The equation is (only the \(-x\) and \(-t\) arguments are explicit)
\[iq_{t}+\zeta^{2}iq_{x}+q_{xt}\pm 2qq(-x,-t)(q-iq_{x})\mp 2\zeta^{2}q\int_{- \infty}^{t}\big{(}qq(-x,-t)\big{)}_{x}dt^{\prime}=0. \tag{26}\]
One possible interpretation of (26) is a nonlocal (due to the "\(-x\)" dependence) non-evolutionary equation with hysteresis.
The inverse scattering for the spectral problem of the quadratic bundle of \(sl(2,\mathbb{C})\)-valued \(L\)-operators has been described in [27]. The normalization of the associated Riemann-Hilbert problem (RHP) is canonical, which facilitates the computations leading to the soliton solutions.
## 3 Modelling with higher order nonlinear Schrödinger equations
From a modelling point of view, the generalised NLS or higher-order NLS (HNLS) equations, with applications in nonlinear optics [35; 36] as well as in water waves [37] and plasma [38], are usually written in the form
\[iq_{T}+icq_{X}+\frac{1}{2}q_{XX}+|q|^{2}q+i\beta_{1}q_{XXX}+i\beta_{2}|q|^{2} q_{X}+i\beta_{3}q(|q|^{2})_{X}=0. \tag{27}\]
where \(c\) and \(\beta_{i}\), \(i=1,2,3\), are, in general, arbitrary real constants depending on the physical parameters.
The integrable cases correspond to the following ratios \((\beta_{1}:\beta_{2}:\beta_{3})\): the DNLS I and II with \((0:1:1)\) and \((0:1:0)\); the Hirota equation [39] with \((1:6:0)\); and the Sasa-Satsuma equation [40] with \((1:6:3)\). Nijhof and Roelofs [41], using the prolongation method, have proven that no other integrable cases of the form (27) exist.
For practical applications, however, there is usually a small parameter \(\epsilon\) such that the quantities scale like \(q\sim\epsilon\), i.e. \(q\to\epsilon q\), with a slow time \(T=\epsilon t\) (so \(t=T/\epsilon\)) and a slow space variable \(X=\epsilon x\), where \((x,t)\) are the original unscaled variables. Therefore, the physical models leading to HNLS involve perturbative expansions like
\[iq_{t}+icq_{x}+\frac{\epsilon}{2}q_{xx}+\epsilon|q|^{2}q+i\epsilon^{2}\big{(} \beta_{1}q_{xxx}+\beta_{2}|q|^{2}q_{x}+\beta_{3}q(|q|^{2})_{x}\big{)}=\mathcal{ O}(\epsilon^{3}). \tag{28}\]
Thus we can extend the set of integrable models by considering those which admit the scaling in (28), and we can use the triples \((\beta_{1}:\beta_{2}:\beta_{3})\) for a systematic classification of the models.
For example, the Fokas - Lenells equation [32],
\[iq_{t}+icq_{x}-\epsilon\nu q_{xt}+\epsilon\gamma q_{xx}+\epsilon|q|^{2}q+i \epsilon^{2}\nu|q|^{2}q_{x}=0, \tag{29}\]
where \(c,\nu,\gamma\) are constants, can be transformed as follows. In the leading order, \(q_{t}=-cq_{x}\), and thus we have
\[\begin{split}& iq_{t}+icq_{x}+\varepsilon(\gamma+\nu c)q_{xx}+ \varepsilon|q|^{2}q=\mathcal{O}(\varepsilon^{2}),\\ & q_{t}=-cq_{x}+i\varepsilon(\gamma+\nu c)q_{xx}+i\varepsilon|q| ^{2}q+\mathcal{O}(\varepsilon^{2})\end{split} \tag{30}\]
Next we substitute \(q_{t}\) from (30) in the \(q_{xt}\) term of (29) to obtain
\[iq_{t}+icq_{x}+\varepsilon(c\nu+\gamma)q_{xx}+\varepsilon|q|^{2}q-i \varepsilon^{2}\nu(c\nu+\gamma)q_{xxx}-i\varepsilon^{2}\nu q(|q|^{2})_{x}= \mathcal{O}(\varepsilon^{3}). \tag{31}\]
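In more detail, differentiating the second relation in (30) with respect to \(x\) and multiplying by \(-\varepsilon\nu\) gives
\[-\varepsilon\nu q_{xt}=\varepsilon\nu cq_{xx}-i\varepsilon^{2}\nu(\gamma+\nu c)q_{xxx}-i\varepsilon^{2}\nu\big(|q|^{2}q\big)_{x}+\mathcal{O}(\varepsilon^{3}),\]
and, since \((|q|^{2}q)_{x}=|q|^{2}q_{x}+q(|q|^{2})_{x}\), the term \(-i\varepsilon^{2}\nu(|q|^{2}q)_{x}\) combines with \(+i\varepsilon^{2}\nu|q|^{2}q_{x}\) from (29) to leave \(-i\varepsilon^{2}\nu q(|q|^{2})_{x}\), which is precisely the last term in (31).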
In order to match (27) we need to choose \(c\nu+\gamma=1/2\), then
\[iq_{t}+icq_{x}+\frac{\varepsilon}{2}q_{xx}+\varepsilon|q|^{2}q-i\varepsilon^ {2}\frac{\nu}{2}\big{(}q_{xxx}+2q(|q|^{2})_{x}\big{)}=\mathcal{O}(\varepsilon ^{3}),\]
hence the model is characterised by the ratio \((1:0:2)\) (explicitly, \(\beta_{1}=-\nu/2\), \(\beta_{2}=0\), \(\beta_{3}=-\nu\)). The non-evolutionary equation (25)
\[iq_{t}+\zeta^{2}iq_{x}+q_{xt}\pm 2|q|^{2}(q-iq_{x})\mp 2\zeta^{2}q\partial_{t}^{ -1}(|q|^{2})_{x}=0\]
with the upper sign is characterised by \((1:-1:3)\) and \(\zeta^{2}=-2\); with the lower sign, by \((1:-2:6)\) and \(\zeta^{2}=2\).
For matching physical models (28) with arbitrary coefficients \(\beta_{1},\beta_{2},\beta_{3}\) to an integrable equation, one needs to apply the method of Kodama transformations. For HNLS these are provided in [42].
## 4 Conclusions
The symmetric spaces approach provides a convenient setting for constructing and classifying multi-component integrable systems. The negative and rational flows lead to a number of new integrable nonlinear systems in non-evolutionary form.
The spectral theory, inverse scattering, recursion operators, completeness of squared eigenfunctions, hierarchies, etc. remain to be studied in detail for the quadratic pencil of Lax operators. In particular, the completeness of the squared eigenfunctions of a given Lax operator is a powerful tool for the description of the whole hierarchy of NLEE and their properties, so the negative flows outlined above, as well as other members of the hierarchy, should come out of such a description.
From the modelling point of view, the asymptotic expansions are important. Nonevolutionary integrable equations could be asymptotically equivalent to evolutionary non-integrable (but physically important) equations.
The author is thankful to Prof. V. Gerdjikov and Dr G. Grahovski for their help and advice. Partial funding from grant 21/FFP-A/9150 (Science Foundation Ireland) and grant KII-06H42/2 (Bulgarian National Research Foundation) is gratefully acknowledged.
|
2309.15266 | A New Spectral Conjugate Subgradient Method with Application in Computed
Tomography Image Reconstruction | A new spectral conjugate subgradient method is presented to solve nonsmooth
unconstrained optimization problems. The method combines the spectral conjugate
gradient method for smooth problems with the spectral subgradient method for
nonsmooth problems. We study the effect of two different choices of line
search, as well as three formulas for determining the conjugate directions. In
addition to numerical experiments with standard nonsmooth test problems, we
also apply the method to several image reconstruction problems in computed
tomography, using total variation regularization. Performance profiles are used
to compare the performance of the algorithm using different line search
strategies and conjugate directions to that of the original spectral
subgradient method. Our results show that the spectral conjugate subgradient
algorithm outperforms the original spectral subgradient method, and that the
use of the Polak-Ribiere formula for conjugate directions provides the best and
most robust performance. | Milagros Loreto, Thomas Humphries, Chella Raghavan, Kenneth Wu, Sam Kwak | 2023-09-26T20:52:04Z | http://arxiv.org/abs/2309.15266v2 | A New Spectral Conjugate Subgradient Method with Application in Computed Tomography Image Reconstruction
###### Abstract
A new spectral conjugate subgradient method is presented to solve nonsmooth unconstrained optimization problems. The method combines the spectral conjugate gradient method for smooth problems with the spectral subgradient method for nonsmooth problems. We study the effect of two different choices of line search, as well as three formulas for determining the conjugate directions. In addition to numerical experiments with standard nonsmooth test problems, we also apply the method to several image reconstruction problems in computed tomography, using total variation regularization. Performance profiles are used to compare the performance of the algorithm using different line search strategies and conjugate directions to that of the original spectral subgradient method. Our results show that the spectral conjugate subgradient algorithm outperforms the original spectral subgradient method, and that the use of the Polak-Ribiere formula for conjugate directions provides the best and most robust performance.
Nonsmooth optimization; conjugate gradients; spectral subgradient; computed tomography.
90C30, 90C56, 94A08
## 1 Introduction
The problem under consideration is the minimization of nonsmooth functions without any constraints
\[\min_{x\in\mathbb{R}^{n}}\ \ f(x), \tag{1}\]
where \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is locally Lipschitz continuous in its domain and differentiable on an open dense subset of \(\mathbb{R}^{n}\). We are interested in the specific case where \(f\) is continuously differentiable almost everywhere and not differentiable at minimizers.
To solve (1), Loreto _et al._ in [24] proposed the spectral subgradient method. It combines a nonmonotone line search [16], a globalization scheme [10], and the classical subgradient approach using the Barzilai-Borwein (spectral) step length [4]. The
significant advantage of the spectral step is that its computation is independent of the optimal function value. In [24], extensive numerical experimentation was presented, comparing six classic subgradient methods with the spectral subgradient with the method of performance profiles [12]. The numerical experiments demonstrated the spectral subgradient method's superiority compared to these methods.
Moreover, Loreto and collaborators have explored combining the spectral step with subgradients and other subdifferential approaches. In [9], they developed the spectral projected subgradient method for the minimization of non-differentiable functions on convex sets. Successful numerical results were presented for two distinct problem types: set covering and generalized assignment problems, and convergence results were shown later in [23] and [25]. In [22], they developed and analyzed the nonsmooth spectral gradient method for unconstrained problems in which the spectral step is combined with subdifferential approaches, such as the simplex gradient and the sampling gradient.
The spectral subgradient method represents an extension to the nonsmooth case of the spectral gradient method developed by Raydan [29, 30]. In [30], the author found that the use of gradient descent with spectral step length for unconstrained (smooth) nonlinear optimization provided several advantages over approaches based on the conjugate gradient method for this problem. Inspired by this, Birgin & Martinez proposed a spectral conjugate gradient (SCG) method [5], which combines the spectral step with conjugate gradient directions. They studied several formulas for determining the conjugate directions (which we describe in Section 2.1), and determined that the use of the spectral step provided improved results over the standard conjugate gradient method.
Several modifications to SCG have been proposed subsequently. Andrei proposed the SCALCG algorithm [1, 2], including a quasi-Newton step to ensure the production of descent directions. In [13, 35], the authors propose instead modifying the choice of spectral step to guarantee descent, and prove convergence using both Armijo and Wolfe line searches. Subsequent authors have built off these early works by proposing modified formulas for generating search directions with desirable properties, e.g.[3, 8, 11, 20, 21, 34].
All of these extensions to SCG assume \(f\) is continuously differentiable. It is interesting to note that an extension of the original conjugate gradient method to minimize nondifferentiable functions using subgradients was first presented many years ago by Wolfe [32, 33]. In the interim, however, there appears to have been little work done in the area of using conjugate subgradients for nonsmooth problems, despite some recent papers [19, 27].
In this paper, we introduce the spectral conjugate subgradient (SCS) method as a novel approach for solving nonsmooth problems. The inspiration for this method stems from the SCG approach. The SCS algorithm represents the first extension of spectral CG methods to nonsmooth problems, to the best of our knowledge. Our primary objective is to expand the application of spectral methods to nonsmooth problems, given the success of smooth spectral methods in solving a wide range of practical problems (Birgin et al. [6, 7]).
The organization of this paper is as follows: Section 2 provides a description of the SCS method, while Section 3 introduces an application of interest which involves the use of the nonsmooth total variation (TV) function as a regularizer for several image reconstruction problems in computed tomography (CT). Section 4 presents numerical results based on a set of nonsmooth problems, including the CT problem, using performance profiles. Finally, Section 5 offers some concluding remarks.
## 2 Methodology
### Spectral Conjugate Gradient Method
The original spectral CG method for unconstrained optimization was proposed by Birgin & Martinez [5], under the assumption that the objective function f is continuously differentiable at all points. Letting \(g_{k}\) denote the gradient at the \(k\)th iterate \(x_{k}\), the vectors \(s_{k}=x_{k+1}-x_{k}\) and \(y_{k}=g_{k+1}-g_{k}\) are defined for \(k=0,1,2,\dots\). Beginning from an initial guess \(x_{0}\), the subsequent iterates are given by
\[x_{k+1}=x_{k}+\alpha_{k}d_{k},\]
where \(\alpha_{k}\) is selected using a Wolfe line search (described in the next section), and the conjugate directions \(d_{k}\in\mathbb{R}^{n}\) are generated by
\[d_{k+1}=-\theta_{k}g_{k+1}+\beta_{k}s_{k}. \tag{2}\]
The spectral step parameter, denoted by \(\theta_{k}\), was initially proposed by Barzilai and Borwein [4] and subsequently refined by Raydan [29, 30]:
\[\theta_{k}=\frac{s_{k}^{T}s_{k}}{s_{k}^{T}y_{k}}, \tag{3}\]
with \(\theta_{0}\) typically chosen to be 1. As the algorithm requires \(\theta_{k}\) to be non-negative and bounded, a safeguarding procedure is often used (e.g. [24]):
\[\theta_{k+1}=\left\{\begin{array}{ll}\min\{\theta_{\max},\frac{1}{\parallel s _{k}\parallel}\}&\text{if }s_{k}^{T}y_{k}\leq 0\\ \min\{\theta_{\max},\max\{\theta_{\min},\frac{s_{k}^{T}s_{k}}{s_{k}^{T}y_{k}} \}\}&\text{if }s_{k}^{T}y_{k}>0\end{array}\right., \tag{4}\]
where \(0<\theta_{\min}<\theta_{\max}<\infty\).
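As an illustration of (3)-(4), a minimal NumPy sketch of the safeguarded spectral step is given below; the function name and the default bounds for \(\theta_{\min},\theta_{\max}\) are our own choices, not taken from [5] or [24].

```python
import numpy as np

def safeguarded_spectral_step(s, y, theta_min=1e-10, theta_max=1e10):
    """Safeguarded Barzilai-Borwein (spectral) step of Eq. (4)."""
    sy = s @ y
    if sy <= 0:
        return min(theta_max, 1.0 / np.linalg.norm(s))
    return min(theta_max, max(theta_min, (s @ s) / sy))
```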
The parameter \(\beta_{k}\) can be chosen in several ways; in [5], the authors proposed the following three choices:
\[\beta_{k} =\frac{(\theta_{k}y_{k}-s_{k})^{T}g_{k+1}}{s_{k}^{T}y_{k}} \tag{5}\] \[\beta_{k} =\frac{\theta_{k}y_{k}^{T}g_{k+1}}{\alpha_{k}\theta_{k-1}g_{k}^{T }g_{k}}\] (6) \[\beta_{k} =\frac{\theta_{k}g_{k+1}^{T}g_{k+1}}{\alpha_{k}\theta_{k-1}g_{k}^ {T}g_{k}} \tag{7}\]
noting that under certain assumptions, these are equivalent to the formula introduced by Perry [28] and its further modifications by Polak-Ribiere and Fletcher-Reeves, respectively. With any of these choices, it is possible that the search direction \(d_{k+1}\) computed by (2) may fail to be a descent direction; in this case, the authors proposed setting \(d_{k+1}=-\theta_{k}g_{k+1}\) to "restart" the algorithm.
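For concreteness, the three choices (5)-(7) can be written as small helper functions; this is only a sketch, and the argument lists simply mirror the quantities appearing in each formula.

```python
def beta_perry(theta, s, y, g_new):
    """Perry-type formula (5); the caller should guard against s @ y = 0."""
    return ((theta * y - s) @ g_new) / (s @ y)

def beta_polak_ribiere(theta, theta_prev, alpha, y, g_old, g_new):
    """Polak-Ribiere-type formula (6)."""
    return (theta * (y @ g_new)) / (alpha * theta_prev * (g_old @ g_old))

def beta_fletcher_reeves(theta, theta_prev, alpha, g_old, g_new):
    """Fletcher-Reeves-type formula (7)."""
    return (theta * (g_new @ g_new)) / (alpha * theta_prev * (g_old @ g_old))
```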
In their paper, Birgin & Martinez studied the performance of several variants of this approach, including the three choices for \(\beta_{k}\), using \(\theta_{k}=1\) instead of the spectral step (corresponding to the classical conjugate gradient approach), and a heuristic for the initial choice of stepsize \(\alpha\) when performing the Wolfe line search. Ultimately it was determined that the Perry formula (5) using the spectral step and the heuristic for Wolfe line search gave the best performance.
This work presents an extension of the spectral conjugate gradient method that allows for \(g_{k}\) to be a subgradient at points where \(f\) is non-differentiable. A subgradient of \(f\) at a point \(x\) is any vector \(g\) that satisfies the inequality \(f(y)\geq f(x)+g^{T}(y-x)\) for all \(y\). This algorithm can also be interpreted as a generalization of the spectral subgradient method [24], which corresponds to the special case where \(\beta_{k}=0\) in equation (2). Additionally, we investigate two different line search strategies: the nonmonotone line search used by the spectral subgradient method, and the Wolfe line search used in [5]. We describe these two choices of line search in the next section.
### Line Search Strategies
As mentioned previously, the proposed spectral conjugate subgradient can use one of two line search strategies: nonmonotone, or Wolfe line searches.
#### 2.2.1 Nonmonotone Line Search
Historically, the spectral step has been paired with the nonmonotone line search described below, based on Raydan's work [30]. The reason for this pairing is the nonmonotone behavior of the step. So, it is natural to adjust the line search scheme with the nonmonotone globalization strategy of Grippo _et al._[16] combined with the proposed globalization scheme of La Cruz _et al._[10].
\[f(x_{k}+\alpha d_{k}) \leq\max_{0\leq j\leq\min\{k,M\}}f(x_{k-j})+\gamma\alpha g_{k}^{ T}d_{k}+\eta_{k},\text{with} \tag{8}\] \[0<\sum_{k}\eta_{k} =\eta<\infty. \tag{9}\]
The value of \(\alpha>0\) is determined by a backtracking process beginning with \(\alpha=1\). For the sequence \(\eta_{k}\), we use \(\eta_{k}=\dfrac{\eta_{0}}{k^{1.1}}\), which guarantees that (9) holds. This sequence helps the line search to be satisfied at early iterations. \(M\) is a fixed integer with \(M\geq 0\), and \(0<\gamma<1\) is a small sufficient decrease parameter.
The nonmonotone behavior of the line search is induced by the terms \(\max\limits_{0\leq j\leq\min\{k,M\}}f(x_{k-j})\) and \(\eta_{k}\). The parameter \(M\) allows the function value to increase or remain stationary for up to \(M\) iterations, which is suitable for the behavior of the spectral step, since the spectral step is associated with the eigenvalues of the Hessian at the minimum point, rather than the function values, as noted by [14, 15].
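A possible backtracking implementation of condition (8) is sketched below; the backtracking factor, the iteration cap, and the convention that the caller passes the last \(\min\{k,M\}+1\) function values are our own assumptions.

```python
def nonmonotone_backtracking(f, x, d, g, f_hist, eta_k, gamma=1e-4, tau=0.5, max_backtracks=50):
    """Find alpha satisfying the nonmonotone condition (8), starting from alpha = 1."""
    f_ref = max(f_hist)      # max of f(x_{k-j}) over the last min(k, M)+1 iterates
    gtd = g @ d              # subgradient-direction product g_k^T d_k
    alpha = 1.0
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= f_ref + gamma * alpha * gtd + eta_k:
            break
        alpha *= tau
    return alpha
```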
#### 2.2.2 Wolfe Line Search
In [5], the authors use the Wolfe conditions to guarantee convergence. Although the Wolfe conditions assume that \(f\) is differentiable, we included the Wolfe line search option in our algorithm to be compared with the nonmonotone line search, since we assume \(f\) is differentiable almost everywhere. Given \(0<\gamma<\sigma<1\), we require that
the step size \(\alpha\) satisfy
\[f(x_{k}+\alpha d_{k})\leq f(x_{k})+\gamma\alpha g_{k}^{T}d_{k} \tag{10}\]
and
\[\nabla f(x_{k}+\alpha d_{k})^{T}d_{k}\geq\sigma g_{k}^{T}d_{k} \tag{11}\]
in every iteration, where \(g_{k}\) is a subgradient of \(f\) at \(x_{k}\).
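The two conditions can be checked as in the sketch below; this is only for exposition (the experiments in this paper use the implementation of [31]), and `grad` is assumed to return a (sub)gradient at the given point.

```python
def wolfe_conditions_hold(f, grad, x, d, alpha, gamma=1e-4, sigma=0.9):
    """Check the sufficient decrease condition (10) and the curvature condition (11)."""
    gtd = grad(x) @ d
    decrease = f(x + alpha * d) <= f(x) + gamma * alpha * gtd
    curvature = grad(x + alpha * d) @ d >= sigma * gtd
    return decrease and curvature
```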
### Spectral Conjugate Subgradient Algorithm
Using the definitions of \(s_{k}\) and \(y_{k}\) from Section 2.1, but where \(g_{k}\) is now a subgradient, we define the spectral conjugate subgradient (SCS) algorithm as follows:
**Algorithm SCS:**
Let \(x_{0}\in\mathbb{R}^{n}\) be given, and let \(g_{0}\) represent a subgradient at \(f(x_{0})\). We define \(d_{0}=-g_{0}\), and set \(k=0\) and \(MAXITER\) as the maximum allowable number of iterations. Let \(M\) be an integer with \(M\geq 0\), let \(0<\theta_{\min}<\theta_{\max}<\infty\), let \(\eta_{0}=\max(f(x_{0}),\parallel g_{0}\parallel)\), and let \(0<\gamma<\sigma<1\). Then:
Repeat until \(k=MAXITER\)
1. Compute \(\alpha\) based on the line search, either nonmonotone (8), or Wolfe ((10) and (11))
2. Define \(\alpha_{k}=\alpha\) and \(x_{k+1}=x_{k}+\alpha_{k}d_{k}\)
3. Compute \(\theta_{k}\) by (4) and \(\beta_{k}\) by (5),(6), or (7)
4. Define \(d=-\theta_{k}g_{k+1}+\beta_{k}s_{k}\)
5. If \(d^{T}g_{k+1}\leq-10^{-3}\parallel d\parallel_{2}\parallel g_{k+1}\parallel_{2}\) then \(d_{k+1}=d\) else \(d_{k+1}=-\theta_{k}g_{k+1}\)
6. \(k=k+1\)
**Remarks:**
* Step 5 is the heuristic suggested by Birgin & Martinez to guarantee \(d\) is a descent direction. If the angle between \(d\) and \(-g_{k+1}\) lies outside of \([-\frac{\pi}{2},\frac{\pi}{2}]\) (to some small tolerance), the algorithm is restarted to use the direction of the negative subgradient.
* For values of \(x_{k+1}\) where \(f\) is differentiable, the subgradient \(g_{k+1}\) is equal to the gradient \(\nabla f(x_{k+1})\).
* When \(\beta=0\) and the nonmonotone line search (8) is used, the SCS algorithm becomes the spectral subgradient method described in [24].
* When using the Perry formula (5) for \(\beta_{k}\), it is possible in nonsmooth problems for the denominator \(s_{k}^{T}y_{k}\) to be zero. In this case we simply set \(\beta_{k}=0\), equivalent to the spectral subgradient method.
* For the Wolfe line search ((10) and (11)), we adopted the implementation by Taviani [31].
* The algorithm terminates upon reaching the maximum iteration count (\(MAXITER\)), and the best value \(f_{\min}\) is reported, along with the corresponding
point \(x\).
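A compact Python sketch of one possible implementation of Algorithm SCS with the nonmonotone line search is given below. It reuses the helper functions sketched above, inlines the Perry formula (5) with the \(\beta_{k}=0\) fallback mentioned in the remarks, and its parameter defaults are illustrative rather than prescriptive.

```python
import numpy as np

def scs(f, subgrad, x0, max_iter=1000, M=7, gamma=1e-4,
        theta_min=1e-10, theta_max=1e10):
    """Sketch of Algorithm SCS (nonmonotone line search, Perry's beta)."""
    x = np.asarray(x0, dtype=float)
    g = subgrad(x)
    d = -g
    theta = 1.0
    fx = f(x)
    f_hist = [fx]
    f_min, x_best = fx, x.copy()
    eta0 = max(fx, np.linalg.norm(g))
    for k in range(1, max_iter + 1):
        eta_k = eta0 / k**1.1                                    # summable sequence (9)
        alpha = nonmonotone_backtracking(f, x, d, g, f_hist[-(M + 1):], eta_k, gamma)
        x_new = x + alpha * d                                    # Steps 1-2
        g_new = subgrad(x_new)
        s, y = x_new - x, g_new - g
        theta = safeguarded_spectral_step(s, y, theta_min, theta_max)   # Step 3
        sy = s @ y
        beta = ((theta * y - s) @ g_new) / sy if abs(sy) > 1e-16 else 0.0
        d_trial = -theta * g_new + beta * s                      # Step 4
        if d_trial @ g_new <= -1e-3 * np.linalg.norm(d_trial) * np.linalg.norm(g_new):
            d = d_trial                                          # Step 5: keep conjugate direction
        else:
            d = -theta * g_new                                   # restart with scaled subgradient
        x, g = x_new, g_new
        fx = f(x)
        f_hist.append(fx)
        if fx < f_min:
            f_min, x_best = fx, x.copy()
    return x_best, f_min
```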
## 3 CT problem with Total Variation
In computed tomography (CT) imaging, a two- or three-dimensional image of a compactly supported function is recovered from a discrete sampling of the so-called X-ray transform of the function. This transform consists of integrals along every possible line through the function; in the two-dimensional case, which we consider in this article, it is equivalent to the better-known Radon transform. The simplest type of two-dimensional discretization, known as parallel-beam geometry, is obtained by equal spacing of both the affine and angular parameters in the X-ray transform, producing a set of measurements known as a sinogram. Figure 1 shows an example of a simple two-dimensional object (the Shepp-Logan phantom) and its sinogram.
Discretizing both the domain of the function and the X-ray transform results in a linear system of equations:
\[Ax+\eta=b, \tag{12}\]
where \(x\in\mathbb{R}^{n}\) is the non-negative image being reconstructed, \(b\in\mathbb{R}^{m}\) is the sinogram, and \(A\) is the \(m\times n\) system matrix representing the geometry of the CT system. The vector \(\eta\) represents measurement noise that arises naturally during the real-life CT imaging process. When \(\|\eta\|\) is small, and the X-ray transform of the object is well-sampled, the image can be reconstructed by solving the non-negative least-squares problem:
\[\min_{x\in\mathbb{R}^{n}_{+}}\ \ \frac{1}{2}\|Ax-b\|^{2}, \tag{13}\]
typically using an iterative method, due to the large size of the matrices involved. In more challenging circumstances, one must instead solve a regularized problem:
\[\min_{x\in\mathbb{R}^{n}_{+}}\ \ \frac{1}{2}\|Ax-b\|^{2}+\mu\phi(x), \tag{14}\]
where \(\phi:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is an appropriate regularization function, and \(\mu>0\) is a weighting parameter. Two scenarios of this type are if \(\|\eta\|\) is large, or if the angular sampling of the X-ray transform is insufficient to recover the high-frequency components of \(x\). Both of these situations arise as a result of trying to reduce dose to a patient in medical CT imaging; the former is often called low-dose imaging, and the latter is sparse-view imaging. In both cases, solving the unregularized problem (13) results in poor quality images, as shown in Figure 1.
A common choice of regularizer \(\phi(x)\) is the isotropic total variation (TV) function:
\[\phi_{TV}(x)=\sum_{1\leq i,j\leq N-1}\sqrt{\left(x_{i,j+1}-x_{i,j}\right)^{2}+ \left(x_{i+1,j}-x_{i,j}\right)^{2}}, \tag{15}\]
in which \(x\) is reshaped as an \(N\times N\) image (with \(N^{2}=n\)), indexed by row and column.
A routine computation gives the partial derivatives of \(\phi_{TV}\) as:
\[\frac{\partial}{\partial x_{i,j}}\phi_{TV}(x) =\frac{2x_{i,j}-x_{i+1,j}-x_{i,j+1}}{\sqrt{\left(x_{i+1,j}-x_{i,j} \right)^{2}+\left(x_{i,j+1}-x_{i,j}\right)^{2}}} \tag{16}\] \[\quad+\frac{x_{i,j}-x_{i,j-1}}{\sqrt{\left(x_{i,j}-x_{i,j-1} \right)^{2}+\left(x_{i+1,j-1}-x_{i,j-1}\right)^{2}}}\] \[\quad+\frac{x_{i,j}-x_{i-1,j}}{\sqrt{\left(x_{i,j}-x_{i-1,j} \right)^{2}+\left(x_{i-1,j+1}-x_{i-1,j}\right)^{2}}}.\]
Clearly the derivative is undefined whenever the denominator of any of the three terms in (16) is zero. This situation arises frequently as it occurs when pixels have the same values as their neighbors; furthermore, such images are likely to be minimizers of (14) as they correspond to images with small TV. Note, however, that the numerator of the respective term must also be zero whenever the denominator is zero. Following [26], we therefore define a subgradient by simply setting any term in (16) with a denominator of zero equal to zero.
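A vectorized NumPy sketch of this subgradient convention is shown below; the value of (15) is `np.sum(denom)` in the same notation. The function name and the use of zero-based row/column indexing are our own.

```python
import numpy as np

def tv_subgradient(x_img):
    """Subgradient of the isotropic TV (15): each term of (16) with a zero denominator is set to zero."""
    dx = x_img[1:, :-1] - x_img[:-1, :-1]   # x_{i+1,j} - x_{i,j} on the (N-1) x (N-1) grid of (15)
    dy = x_img[:-1, 1:] - x_img[:-1, :-1]   # x_{i,j+1} - x_{i,j}
    denom = np.sqrt(dx**2 + dy**2)
    safe = np.where(denom > 0, denom, 1.0)  # avoid 0/0; those entries are zeroed below
    px = np.where(denom > 0, dx / safe, 0.0)
    py = np.where(denom > 0, dy / safe, 0.0)
    g = np.zeros_like(x_img)
    g[:-1, :-1] -= px + py                  # contribution of the summand centred at (i, j)
    g[1:, :-1] += px                        # contribution of the summand at (i-1, j)
    g[:-1, 1:] += py                        # contribution of the summand at (i, j-1)
    return g
```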
## 4 Numerical Results
We provide numerical results that demonstrate the performance of the proposed method using a collection of widely recognized nonsmooth benchmark problems frequently employed for evaluation purposes. Additionally, we showcase an application in CT reconstruction with the utilization of total variation, as outlined in Section 3.
### Nonsmooth test problems
The collection of problems consists of 10 nonsmooth minimization problems, taken from [18] and detailed in Table 1. This table provides information such as the optimal value \(f_{*}=f(x_{*})\) and the dimension \(n\). For further details on this problem set, including suggested initial points, please refer to [18].
Figure 1: CT imaging example. Left: true \(400\times 400\) pixel digital Shepp-Logan phantom. Center-left: parallel-beam sinogram corresponding to 360 views taken over \(180^{\circ}\); the affine parameter is the \(y\)-axis and the angular parameter the \(x\)-axis. Center-right: low-dose image reconstructed using unregularized least-squares with 20% Gaussian noise added to \(b\). Right: sparse-view image reconstructed using unregularized least squares with only 60 views taken over \(180^{\circ}\).

We are comparing a total of 8 versions of the spectral conjugate subgradient method, corresponding to two choices of line search (nonmonotone or Wolfe) and four different approaches to calculate the parameter \(\beta\) in (2). We let \(\beta_{0}=0\) correspond to the spectral subgradient method, and \(\beta_{1}\), \(\beta_{2}\) and \(\beta_{3}\) correspond to the formulas (5), (6), and (7), respectively. Each algorithm was run for \(MAXITER=1000\) iterations, and the best function value, called \(f_{\min}\), was reported.
We evaluate the performance of each algorithm by analyzing the results using performance profiles [12]. A detailed explanation of how we adopted the profiles is given in [24]. Considering a set of solvers \(S\), which in this scenario includes the 8 variations of the aforementioned algorithm, and the set of problems \(P\) given in Table 1, the performance ratio for each solver-problem pair is defined as follows:
\[r_{p,s}=\frac{t_{p,s}}{\min\{t_{p,s}:s\in S\}},\]
where the performance measure \(t_{p,s}\) represents a specific metric, e.g. computation time required by solver \(s\) to solve problem \(p\). The performance of solver \(s\) is then assessed by computing a function
\[\rho_{s}(\tau)=\frac{1}{n_{p}}size\{p\in P:r_{p,s}\leq\tau\}.\]
Here \(n_{p}\) denotes the total number of problems in the set \(P\), and \(\tau\) is a parameter ranging between 1 and \(r_{M}\). The maximum value \(r_{M}\) is determined to satisfy \(r_{M}\geq r_{p,s}\) for all \(p\) in the set \(P\) and \(s\) in the set \(S\). This function is then plotted for all solvers \(s\) to produce the performance profiles; the best-performing algorithms have profiles in the upper left of the plot, and worse-performing algorithms in the lower right. Typically a log scale is used for \(\tau\) for ease of visualization; the value of \(\rho_{s}(\tau)\) at \(\log(\tau)=0\) (i.e., \(\tau=1\)) then gives the percentage of problems on which solver \(s\) achieved the best performance.
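A small sketch of how these profiles can be computed from a table of performance measures is given below; the handling of failed runs via `np.inf` follows the convention described in the next paragraph, and the names and array shapes are our own choices.

```python
import numpy as np

def performance_profile(T):
    """T[p, s]: performance measure of solver s on problem p (np.inf for a failed run).
    Returns the ratio matrix r and a callable rho(s, tau)."""
    best = T.min(axis=1, keepdims=True)
    r = T / best
    finite = np.isfinite(r)
    r_M = r[finite].max() if finite.any() else 1.0
    r = np.where(finite, r, r_M)          # failed runs are assigned the maximal ratio r_M
    n_p = T.shape[0]
    def rho(s, tau):
        return np.count_nonzero(r[:, s] <= tau) / n_p
    return r, rho
```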
In our experiments, we focus on three different performance metrics: the relative error between \(f_{\min}\) and \(f_{*}\) (if \(f_{*}\) is non-zero), or the absolute error if \(f_{*}\) is zero; the total number of function evaluations needed to obtain \(f_{\min}\); and total CPU time consumed by the algorithm. An algorithm is considered to solve a problem \(p\) if the error between \(f_{\min}\) and \(f_{*}\) is less than \(10^{-1}\). If solver \(s\) fails to solve problem \(p\), \(r_{p,s}\) is set equal to \(r_{M}\).
To represent the 8 methods in the performance profiles, we use the notation \(WB_{i}\) for the Wolfe line search combined with each of the four choices of \(\beta_{i}\) (\(i=0,\dots,3\)). Likewise, we use the notation \(NMB_{i}\) for the nonmonotone line search.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Prob & \(f_{*}\) & n \\ \hline \hline P1: MAXQ & 0.0 & 20 \\ P2: MXHILB & 0.0 & 50 \\ P3: Chained LQ & \(-(n-1)2^{1/2}\) & 2 \\ P4: Chained CB3 I & \(2(n-1)\) & 20 \\ P5: Chained CB3 II & \(2(n-1)\) & 20 \\ P6: Active Faces & 0.0 & 2 \\ P7: Brown 2 & 0.0 & 2 \\ P8: Mifflin 2 & -34.795 & 50 \\ P9: Crescent I & 0.0 & 2 \\ P10: Crescent II & 0.0 & 2 \\ \hline \end{tabular}
\end{table}
Table 1: Problem Set

The performance profile using error as the performance measure is shown in Figure 2. The plot shows the algorithm with Wolfe line search and parameter \(\beta=0\), \(WB_{0}\), as the most precise solver overall since it accurately solves \(60\%\) of the problems, and \(70\%\) of them if a small error is tolerated. However, \(WB_{0}\) did not solve the other \(30\%\) of the problems based on our criteria. This makes \(WB_{0}\) the most accurate algorithm but the least robust.
On the other hand, \(NMB_{2}\) solves \(100\%\) of the problems when some error is tolerated, though only \(20\%\) with maximum accuracy. This makes \(NMB_{2}\) the most robust algorithm. It is worth mentioning \(WB_{2}\) as a good alternative combining accuracy and robustness, solving accurately \(50\%\) of the problems, and \(90\%\) of them when some error is tolerated. We note that \(NMB_{0}\), originally presented in [24], is outperformed by both \(WB_{0}\) and \(NMB_{2}\) in terms of accuracy and robustness, respectively.
Figure 3 shows the performance profile based on total number of function evaluations needed to find \(f_{min}\). It shows the spectral subgradient method \(NMB_{0}\)[24] as the most efficient algorithm. \(NMB_{0}\) solves \(80\%\) of the problems with the fewest function evaluations. Similarly, \(NMB_{1}\) presents as an efficient option, slightly worse than \(NMB_{0}\). However, \(NMB_{0}\) and \(NMB_{1}\) were unable to solve one problem to within the specified threshold.
On the other hand, \(NMB_{2}\) solves \(100\%\) of the problems to within the specified threshold, while requiring only somewhat more function evaluations than \(NMB_{0}\) and \(NMB_{1}\). This makes the trade-off between robustness and efficiency worthwhile, and \(NMB_{2}\) the best alternative in our opinion.
Figure 2: Performance profile based on error between \(f_{\min}\) and \(f_{*}\) using the nonmonotone line search with M=7, and the Wolfe line search per each parameter \(\beta\).

Figure 3 also shows that the nonmonotone line search considerably outperforms the Wolfe line search with respect to the number of function evaluations required. Indeed, solvers using Wolfe line search are clearly on the bottom right side of the chart, meaning they solve the fewest problems at the highest cost. It is worth mentioning that Figure 2 showed \(WB_{0}\) as the most precise solver and \(WB_{2}\) as a suitable alternative in terms of precision. But Figure 3 shows them both as the most expensive in terms of function evaluations, which makes these options unattractive.
Figure 3: Performance profile based on number of function evaluations using the nonmonotone line search with M=7, and the Wolfe line search per each parameter \(\beta\).

Figure 4: Performance profile based on CPU time using the nonmonotone line search with M=7, and the Wolfe line search per each parameter \(\beta\).

With respect to CPU time, Figure 4 shows quite similar behavior when compared to the profile based on function evaluations (Figure 3). This outcome is anticipated since evaluating the function is the primary factor that drives the computational effort of these algorithms. \(NMB_{2}\) seems to be the best alternative, being more robust than \(NMB_{0}\) and \(NMB_{1}\), given it solves \(100\%\) of the problems, albeit with slightly worse performance.
After analyzing all performance profiles, we can conclude that \(NMB_{2}\) is the best alternative when factoring in all the criteria. This is based on its high robustness, as it was the only method that solved all ten problems to within the specified threshold. \(NMB_{2}\) also provides solutions with accuracy comparable with the rest of the solvers, although clearly outperformed by \(WB_{0}\) on this measure. Finally, \(NMB_{2}\) is in the group of top-performing solvers based on computation time, with only slightly poorer performance than \(NMB_{0}\) and \(NMB_{1}\).
In conclusion, we can state that the Wolfe line search tends to improve the accuracy of the algorithms, but clearly at the expense of many more function evaluations, and hence computational effort.
### CT problem with total variation
In this section, we study the effect of the choice of parameter \(\beta\) when applying Algorithm SCS to a set of TV-regularized CT reconstruction problems. All experiments were run using the AIRTools II Matlab toolbox [17] to generate parallel-beam CT system matrices \(A\) and the corresponding sinograms \(b\). An image size of \(400\times 400\) pixels was used, resulting in a problem size of \(n=160,000\). We then applied Algorithm SCS to the regularized least squares problem (14), using TV (15) as the regularizer, whose subgradient was computed using (16), with the convention for non-differentiable points described in Section 3.
A total of 24 reconstruction problems were considered, corresponding to:
* Three different images (phantoms) from the AIRTools II toolbox: shepplogan, threphases, and grains. All three phantoms are piecewise constant, making them well-suited to total variation regularization. They are also all elements of the \(n\)-dimensional box \([0,1]^{n}\subset\mathbb{R}^{n}\).
* Four different imaging scenarios: two low-dose scenarios with Gaussian noise of \(5\%\) and \(10\%\) added to \(b\), and two sparse-view scenarios using \(60\) and \(30\) angular views. In the two low-dose scenarios, a total of \(360\) views and \(566\) lines through the object were simulated, giving a sinogram size of \(m=203,760\); this is reduced by a factor of \(6\) and \(12\), respectively, for the two sparse-view scenarios. The sparse-view scenarios did not include noise.
* Two different values of \(\mu\), controlling the weighting of the regularization term in (14). For the two low-dose scenarios, these were \(\mu=25\) and \(\mu=250\), while for the sparse-view scenarios, we used \(\mu=0.5\) and \(\mu=5\). These values were chosen empirically to give generally good results, while providing different weighting on the nonsmooth (TV) component of the function. Note that in the sparse-view scenarios, the size of the first term in (14) is smaller than in the low-dose scenarios (since the sinogram size is smaller), which is why a smaller value of \(\mu\) can be used.
Additionally, because the desired solution to each problem is a vector \(x\in[0,1]^{n}\), we modified Algorithm SCS to include projection onto this set (denoted \(\Omega\)). This is accomplished as follows: before every iteration of Step 1, redefine \(d_{k}\) as
\[d_{k}=P_{\Omega}(x_{k}+d_{k})-x_{k}, \tag{17}\]
where \(P_{\Omega}(x)=\min\{\max\{x,0\},1\}\) is the projection onto \(\Omega\). This guarantees that \(x_{k}\in\Omega\) for all \(k\), provided that the step size \(\alpha_{k}\) does not exceed 1. As this is not guaranteed for the Wolfe line search, in this section we only use the nonmonotone line search with \(M=7\). We therefore consider only four algorithms, namely \(NMB_{0}\) to \(NMB_{3}\) from the previous section. We note that the use of Wolfe line search is prohibitively expensive for this problem in any event, due to the high computational cost of function evaluations. Algorithm SCS was run for 200 total iterations for every case, beginning from \(x_{0}=0\in\mathbb{R}^{n}\).
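In code, the modification amounts to clipping a trial point to the box and redefining the direction, as in the following sketch (the function names and the use of `np.clip` are ours):

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    """Projection P_Omega onto Omega = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def projected_direction(x, d, lo=0.0, hi=1.0):
    """Redefine d as in (17), so that x + alpha*d stays in Omega for alpha in (0, 1]."""
    return project_box(x + d, lo, hi) - x
```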
Figure 5 shows the performance profile based on the lowest objective function value, \(f_{\min}\) found by each solver for each problem. Note that this differs from the measure used for the nonsmooth test problems (relative or absolute difference between \(f_{\min}\) and the true optimum \(f_{*}\)), since \(f_{*}\) is unknown for these problems. A solver was considered to have solved the problem only if \(f_{\min}\) was within 10% of the lowest value found. We observe that the choice of \(\beta_{2}\) provides the best overall results, solving 80% of the problems to a high degree of accuracy; furthermore, it was the only solver to solve all problems within the 10% threshold. The choice of \(\beta_{3}\) was the next best-performing, followed by \(\beta_{1}\) and finally \(\beta_{0}\) (the spectral subgradient method), which was competitive on only about 45% of the problems, and failed to solve nearly 50% to the required tolerance.
Figure 5: Performance profile for CT reconstruction problems, based on lowest objective function value \(f_{\min}\) found by Algorithm SCS using nonmonotone line search with each choice of \(\beta\).

Figure 6 shows the performance profile generated for the four solvers, using the total number of function evaluations as the performance measure. The profile based on CPU time is omitted, as the results were essentially the same. Unlike for the nonsmooth test problems (Figure 3), we observe that the SCS algorithm using \(\beta_{2}\) required significantly fewer function evaluations than when using \(\beta_{1}\) or \(\beta_{3}\). As a total of 200 iterations was run for each solver, this indicates that the direction chosen as a result of using \(\beta_{2}\) generally resulted in less backtracking than the other methods. We note that while the choice of \(\beta_{0}\) typically resulted in the fewest number of function evaluations needed, it was also unsuccessful in solving nearly half the problems, as discussed in the previous paragraph.

Figure 6: Performance profile for CT reconstruction problems, based on number of function evaluations used by Algorithm SCS using nonmonotone line search with each choice of \(\beta\).
Figure 7 shows the best images reconstructed by Algorithm SCS using \(\beta_{2}\). We can observe that the reconstructed images are generally of high quality, although there are still noticeable artifacts in some of the images corresponding to more challenging scenarios (third and fifth columns). It is possible that increasing the value of the weighting parameter, \(\mu\), would improve image quality in these cases.
## 5 Final Remarks
In this article, we develop a new Spectral Conjugate Subgradient (SCS) method for nonsmooth unconstrained optimization, by combining conjugate directions proposed by Birgin & Martinez [5] and the spectral subgradient method [24]. We investigate the use of two different line search strategies and several choices of the parameter \(\beta\) used to determine conjugate directions. We present the results of numerical experiments with standard nonsmooth test problems, as well as problems from CT imaging using total variation regularizations. The results are examined utilizing performance profiles, which enable the comparison of various approaches in terms of precision and computational cost.
The numerical results show that the combination of the nonmonotone line search with the parameter \(\beta_{2}\) (analogous to the Polak-Ribiere formula), which we denote by \(NMB_{2}\), was the most successful approach. This is based on its high robustness, given that it solved all the problems to within a specified threshold, for both the nonsmooth test problems and the CT imaging problems. Additionally, the computational effort required by \(NMB_{2}\) was on par with the least expensive approaches in both sets of experiments.
It is interesting to note that in Birgin & Martinez' original article they found that the choice of \(\beta_{1}\) (Perry's formula) was the best option, while the best option in our experiments was \(\beta_{2}\). It is difficult to compare the results directly, given that their experiments were only on smooth problems, using Wolfe line search. However, a possible explanation for why the choice of \(\beta_{2}\) works better in our experiments is that the formula for \(\beta_{1}\) has a denominator of \(s_{k}^{T}y_{k}\), which is more often equal to zero when solving nonsmooth problems than for differentiable problems. Indeed, in our numerical experimentation we noticed this term is often zero for some problems. In this case, we set \(\beta_{1}=0\), defaulting to the spectral subgradient approach. On the other hand, when the problem is differentiable, the Wolfe line search ensures that \(s_{k}^{T}y_{k}>0\), avoiding this issue.
Finally, further investigation into the convergence properties of the SCS method would be worthwhile. Loreto _et al._[24] presented convergence results for the spectral subgradient method, and so did Birgin & Martinez for the spectral conjugate gradient method [5]. A combination of these existing convergence results, together with some additional hypotheses, could be a potential path to proving the convergence of SCS.
Figure 7: Best images reconstructed by Algorithm SCS using \(\beta_{2}\). The first column shows the true phantom images used to generate the data, while the remaining four columns show images reconstructed for the four scenarios (low-dose imaging with 5% and 10% Gaussian noise, and sparse-view imaging with 60 and 30 views, respectively). The larger of the two \(\mu\) values is used for each scenario
## Disclosure statement
The authors do not have any significant financial or non-financial interests to declare.
## Funding
The National Science Foundation provided support for this work (grant DMS-2150511).
## Notes on contributor(s)
M. Loreto and T. Humphries co-authored the manuscript. C. Raghavan, K. Wu and S. Kwak participated in a Research Experience for Undergraduates (REU) funded by the above-mentioned NSF Grant in Summer 2022, providing initial implementation of the spectral conjugate subgradient method and preliminary experimentation.
## Data Availability Statement
The authors can provide the MATLAB code and data used in this work upon request.
|
2309.14178 | Chebyshev HOPGD with sparse grid sampling for parameterized linear
systems | We consider approximating solutions to parameterized linear systems of the
form $A(\mu_1,\mu_2) x(\mu_1,\mu_2) = b$, where $(\mu_1, \mu_2) \in
\mathbb{R}^2$. Here the matrix $A(\mu_1,\mu_2) \in \mathbb{R}^{n \times n}$ is
nonsingular, large, and sparse and depends nonlinearly on the parameters
$\mu_1$ and $\mu_2$. Specifically, the system arises from a discretization of a
partial differential equation and $x(\mu_1,\mu_2) \in \mathbb{R}^n$, $b \in
\mathbb{R}^n$. This work combines companion linearization with the Krylov
subspace method preconditioned bi-conjugate gradient (BiCG) and a decomposition
of a tensor matrix of precomputed solutions, called snapshots. As a result, a
reduced order model of $x(\mu_1,\mu_2)$ is constructed, and this model can be
evaluated in a cheap way for many values of the parameters. Tensor
decompositions performed on a set of snapshots can fail to reach a certain
level of accuracy, and it is not known a priori if a decomposition will be
successful. Moreover, the selection of snapshots can affect both the quality of
the produced model and the computation time required for its construction. This
new method offers a way to generate a new set of solutions on the same
parameter space at little additional cost. An interpolation of the model is
used to produce approximations on the entire parameter space, and this method
can be used to solve a parameter estimation problem. Numerical examples of a
parameterized Helmholtz equation show the competitiveness of our approach. The
simulations are reproducible, and the software is available online. | Siobhán Correnty, Melina A. Freitag, Kirk M. Soodhalter | 2023-09-25T14:41:07Z | http://arxiv.org/abs/2309.14178v2 | # Sparse Grid Based Chebyshev HOPGD for Parameterized Linear Systems
###### Abstract
We consider approximating solutions to parameterized linear systems of the form \(A(\mu_{1},\mu_{2})x(\mu_{1},\mu_{2})=b\), where \((\mu_{1},\mu_{2})\in\mathbb{R}^{2}\). Here the matrix \(A(\mu_{1},\mu_{2})\in\mathbb{R}^{n\times n}\) is nonsingular, large, and sparse and depends nonlinearly on the parameters \(\mu_{1}\) and \(\mu_{2}\). Specifically, the system arises from a discretization of a partial differential equation and \(x(\mu_{1},\mu_{2})\in\mathbb{R}^{n}\), \(b\in\mathbb{R}^{n}\). This work combines companion linearization with the Krylov subspace method preconditioned bi-conjugate gradient (BiCG) and a decomposition of a tensor matrix of precomputed solutions, called snapshots. As a result, a reduced order model of \(x(\mu_{1},\mu_{2})\) is constructed, and this model can be evaluated in a cheap way for many values of the parameters. The decomposition is performed efficiently using the sparse grid based higher-order proper generalized decomposition (HOPGD) presented in [Lu, Blal, and Gravouil, _Internat. J. Numer. Methods Engrg._, 114:1438-1461, 2018], and the snapshots are generated as one variable functions of \(\mu_{1}\) or of \(\mu_{2}\), as proposed in [Correnty, Jarlebring, and Szyld, _Preprint on arXiv_, 2022 [https://arxiv.org/abs/2212.04295](https://arxiv.org/abs/2212.04295)]. Tensor decompositions performed on a set of snapshots can fail to reach a certain level of accuracy, and it is not possible to know a priori if the decomposition will be successful. This method offers a way to generate a new set of solutions on the same parameter space at little additional cost. An interpolation of the model is used to produce approximations on the entire parameter space, and this method can be used to solve a parameter estimation problem. Numerical examples of a parameterized Helmholtz equation show the competitiveness of our approach. The simulations are reproducible, and the software is available online.
65F10, 65N22, 65F55
## 1 Introduction
We are interested in approximating solutions to linear systems of the form
\[A(\mu_{1},\mu_{2})x(\mu_{1},\mu_{2})=b, \tag{1}\]
for many different values of the parameters \(\mu_{1},\mu_{2}\). Here \(A(\mu_{1},\mu_{2})\in\mathbb{R}^{n\times n}\) is a large and sparse nonsingular matrix with a nonlinear dependence on \(\mu_{1}\in[a_{1},b_{1}]\subset\mathbb{R}\) and \(\mu_{2}\in[a_{2},b_{2}]\subset\mathbb{R}\) and \(x(\mu_{1},\mu_{2})\in\mathbb{R}^{n}\), \(b\in\mathbb{R}^{n}\). This work combines companion linearization, a technique from the study of nonlinear eigenvalue problems, with the Krylov subspace method bi-conjugate gradient (BiCG) [21, 33] and a tensor decomposition to construct a reduced order model. This smaller model can be evaluated in an inexpensive way to approximate the solution to (1) for many values of the parameters. Additionally, the model can be used to solve a parameter estimation problem, i.e., to simultaneously estimate \(\mu_{1}\) and \(\mu_{2}\) for a given solution vector where these parameters are not known.
Specifically, our proposed method is based on a decomposition of a tensor matrix of precomputed solutions to (1), called snapshots, where the systems arise from discretizations of parameterized partial differential equations (PDEs). In this way, building the reduced order model can be divided into two main parts:
1. Generate the snapshots.
2. Perform the tensor decomposition.
We assume further that the system matrix can be expressed as the sum of products of matrices and functions, i.e.,
\[A(\mu_{1},\mu_{2})=C_{1}f_{1}(\mu_{1},\mu_{2})+\cdots+C_{n_{f}}f_{n_{f}}(\mu_{1},\mu_{2}), \tag{2}\]
where \(n_{f}\ll n\) and \(f_{1},\ldots,f_{n_{f}}\) are nonlinear scalar functions in the parameters \(\mu_{1}\) and \(\mu_{2}\).
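As an illustration of this affine-in-functions structure, a short sketch assembling \(A(\mu_{1},\mu_{2})\) from stored matrices and scalar callables is given below; it assumes SciPy-sparse (or NumPy) matrices \(C_{k}\) and is our own sketch, not part of the published software.

```python
def assemble_A(mu1, mu2, C, f):
    """A(mu1, mu2) = sum_k C[k] * f[k](mu1, mu2), as in (2)."""
    A = f[0](mu1, mu2) * C[0]
    for Ck, fk in zip(C[1:], f[1:]):
        A = A + fk(mu1, mu2) * Ck
    return A
```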
Previously proposed methods of this variety, e.g., [9; 10; 35; 36; 37; 38], generate the snapshots in an offline stage, and, thus, a linear system of dimension \(n\times n\) must be solved for each pair \((\mu_{1},\mu_{2})\) in the tensor matrix. Here we instead compute the snapshots with the method proposed in [17], Preconditioned Chebyshev BiCG for parameterized linear systems. This choice allows for greater flexibility in the selection of the set of snapshots included in the tensor, as the approximations are generated as one variable functions of \(\mu_{1}\) or of \(\mu_{2}\), i.e., with one parameter frozen. These functions are cheap to evaluate for different values of the parameter, and this method is described in Section 2.
The tensor matrix of precomputed snapshots has a particular structure, as we consider sparse grid sampling in the parameter space. This is a way to overcome the so-called _curse of dimensionality_ since the number of snapshots to generate and store grows exponentially with the dimension when the sampling is performed on a conventional full grid. The decomposition is performed with a variant of the higher-order proper generalized decomposition (HOPGD), as proposed in [35]. The method HOPGD [38] decomposes a tensor of snapshots sampled on a full grid. The approach here has been adapted for this particular setting with fewer snapshots and, as in the standard implementation, results in a separated expression in \(\mu_{1}\) and \(\mu_{2}\). Once constructed, the decomposition is interpolated to approximate solutions to (1) corresponding to all parameters \((\mu_{1},\mu_{2})\in[a_{1},b_{1}]\times[a_{2},b_{2}]\).
More precisely, an alternating directions, greedy algorithm is used to perform the decomposition, where the cost of each step of the method grows linearly with the number of unknowns \(n\). The basis vectors for the decomposition are not known beforehand, similar to the established proper generalized decomposition (PGD), often used to create reduced order models of PDEs which are separated in the time and space variables. The method PGD has been used widely in the study of highly nonlinear PDEs [13; 14; 15] and has been generalized to solve multiparametric problems; see, for instance, [5; 6].
As performing the decomposition can be done efficiently, generating the snapshots is the dominating cost of building the reduced order model. In general, we cannot guarantee that the error in a tensor decomposition for a certain set of snapshots will reach a specified accuracy level. Additionally, it is not possible to know a priori which sets of snapshots will lead to a successful decomposition, even with modest standards for the convergence [30]. In the case of the decomposition failing to converge for a given tensor, our proposed method offers an efficient way to generate a new set of snapshots on the same parameter space \([a_{1},b_{1}]\times[a_{2},b_{2}]\) with little extra computation. These snapshots can be used to construct a new reduced order model for the same parameter space in an identical, efficient way. This is the main contribution developed in this work.
Evaluating the resulting reduced order model is in general much more efficient than solving the systems individually for each choice of the parameters [35]. Reliable and accurate approximations are available for a variety of parameter choices,
and parameter estimation can be performed in a similarly competitive way. Details regarding the construction of the reduced order model are found in Section 3, and numerical simulations of a parameterized Helmholtz equation and a parameterized advection-diffusion equation are presented in Sections 4 and 5, respectively. All experiments were carried out on a 2.3 GHz Dual-Core Intel Core i5 processor with 16 GB RAM, and the corresponding software can be found online.1
Footnote 1: [https://github.com/siobhanie/ChebyshevHOPGD](https://github.com/siobhanie/ChebyshevHOPGD)
## 2 Generating snapshots with Preconditioned Chebyshev BiCG
Constructing an accurate reduced order model of \(x(\mu_{1},\mu_{2})\) requires sampling of solutions to (1) for many different values of the parameters \(\mu_{1}\) and \(\mu_{2}\). Parameterized linear systems have been studied in several prior works, for example, in the context of Tikhonov regularization for ill-posed problems [25, 29], as well as in [32], where the solutions were approximated by low-rank tensors, and in [44] with parameterized right-hand sides. Additionally, approaches based on companion linearization were proposed in [26], as well as in [16, 28] with a linearization based on an infinite Taylor series expansion.
Reduced order models based on sampling of snapshot solutions are typically constructed from finite element approximations, generated in an offline stage at a significant computational cost. We base our approach to obtain the snapshot solutions on an adapted version of the method Preconditioned Chebyshev BiCG for parameterized linear systems, originally proposed in [17]. Here the unique solution to a companion linearization, formed from a Chebyshev approximation using Chebfun[18], is generated in a preconditioned BiCG [21, 33] setting. Two executions of this method generate all solutions corresponding to the values of the parameters shown in Figure 2. Specifically, one execution for all solutions on the line \(\mu_{1}=\mu_{1}^{*}\) and one execution for all solutions on the line \(\mu_{2}=\mu_{2}^{*}\) in the plane \([a_{1}\times b_{1}]\times[a_{2}\times b_{2}]\). In this way, we generate solutions fixed in one parameter. Moreover, the sampling represented here is sufficient to build a reliable reduced order model with the strategy presented in [35]. We summarize the method Preconditioned Chebyshev BiCG, for the sake of self-containment, as follows.
A Chebyshev approximation of \(A(\mu_{1},\mu_{2})\) for a fixed \(\mu_{2}^{*}\in\mathbb{R}\) is given by \(\hat{P}(\mu_{1})\), where
\[A(\mu_{1},\mu_{2}^{*})\approx\hat{P}(\mu_{1})\coloneqq\hat{P}_{0}\tilde{\tau}_ {0}(\mu_{1})+\ldots+\hat{P}_{d_{1}}\tilde{\tau}_{d_{1}}(\mu_{1}) \tag{1}\]
and, for a fixed \(\mu_{1}^{*}\in\mathbb{R}\), by \(\tilde{P}(\mu_{2})\), i.e.,
\[A(\mu_{1}^{*},\mu_{2})\approx\tilde{P}(\mu_{2})\coloneqq\tilde{P}_{0}\tilde{ \tau}_{0}(\mu_{2})+\ldots+\tilde{P}_{d_{2}}\tilde{\tau}_{d_{2}}(\mu_{2}). \tag{2}\]
Here \(\hat{\tau}_{\ell}\) is the degree \(\ell\) Chebyshev polynomial on \([-c_{1},c_{1}]\), \(\tilde{\tau}_{\ell}\) is the degree \(\ell\) Chebyshev polynomial on \([-c_{2},c_{2}]\), and \(\hat{P}_{\ell}\), \(\tilde{P}_{\ell}\in\mathbb{R}^{n\times n}\) are the corresponding interpolation coefficients. These coefficients are computed efficiently by a discrete cosine transform of the one variable scalar functions
\[f_{k}(\mu_{1},\mu_{2}^{*})\coloneqq\hat{f}_{k}(\mu_{1}),\qquad f_{k}(\mu_{1}^ {*},\mu_{2})\eqqcolon\tilde{f}_{k}(\mu_{2}),\qquad k=1,\ldots,n_{f}, \tag{3}\]
in (2). Note, \(c_{l}\in\mathbb{R}\) is chosen such that \([a_{l},b_{l}]\subseteq[-c_{l},c_{l}]\), for \(l=1,2\). A companion linearization based on the linear system
\[\hat{P}(\mu_{1})\hat{x}(\mu_{1})=b \tag{4}\]
with \(\hat{x}(\mu_{1})=\hat{x}(\mu_{1},\mu_{2}^{*})\approx x(\mu_{1},\mu_{2}^{*})\), where \(x(\mu_{1},\mu_{2}^{*})\) is as in (1.1), and a fixed \(\mu_{2}^{*}\) is given by
\[\left(\begin{pmatrix}0&I&&&&\\ I&0&I&&&\\ &I&0&I&&\\ &&\ddots&\ddots&\ddots&\\ &&&I&0&I\\ \hat{P}_{0}&\hat{P}_{1}&\cdots&\hat{P}_{d_{1}-3}&\left(-\hat{P}_{d_{1}}+\hat{P}_{d_{1}-2}\right)&\hat{P}_{d_{1}-1}\end{pmatrix}-\frac{\mu_{1}}{c_{1}}\begin{bmatrix}I&&&&&\\ &2I&&&&\\ &&2I&&&\\ &&&\ddots&&\\ &&&&2I&\\ &&&&&-2\hat{P}_{d_{1}}\end{bmatrix}\right)\begin{bmatrix}u_{0}(\mu_{1})\\ u_{1}(\mu_{1})\\ u_{2}(\mu_{1})\\ \vdots\\ u_{d_{1}-2}(\mu_{1})\\ u_{d_{1}-1}(\mu_{1})\end{bmatrix}=\begin{bmatrix}0\\ 0\\ 0\\ \vdots\\ 0\\ b\end{bmatrix}\]
with \(u_{\ell}(\mu_{1})\coloneqq\hat{\tau}_{\ell}(\mu_{1})\hat{x}(\mu_{1},\mu_{2}^{ *})\), for \(\ell=0,\ldots,d_{1}-1\). The linearization has the form
\[\left(\hat{K}-\mu_{1}\hat{M}\right)u(\mu_{1})=c, \tag{2.5}\]
where \(\hat{K},\hat{M}\in\mathbb{R}^{d_{1}n\times d_{1}n}\) are coefficient matrices, independent of the parameter \(\mu_{1}\), \(c\in\mathbb{R}^{d_{1}n}\) is a constant vector, and the solution \(u(\mu_{1})\) is unique. This linearization, inspired by the work [19] and studied fully in [17], relies on the well-known three-term recurrence of the Chebyshev polynomials:
\[\hat{\tau}_{0}(\mu_{1})=1,\quad\hat{\tau}_{1}(\mu_{1})=\frac{1}{c_{1}}\mu_{1},\quad\hat{\tau}_{\ell+1}(\mu_{1})=\frac{2}{c_{1}}\mu_{1}\hat{\tau}_{\ell}( \mu_{1})-\hat{\tau}_{\ell-1}(\mu_{1}). \tag{2.6}\]
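For illustration, the scalar interpolation step behind (2.1)-(2.3) can be sketched as follows: sample a one-variable function at Chebyshev points of \([-c,c]\) and apply the cosine-transform formula (a production code, such as Chebfun, would use an FFT-based DCT and adaptive degree selection). The coefficient matrices are then \(\hat{P}_{\ell}=\sum_{k}a_{\ell}^{(k)}C_{k}\), where \(a^{(k)}\) are the coefficients of \(\hat{f}_{k}\). The function below is our own sketch, not taken from the accompanying software.

```python
import numpy as np

def chebyshev_interp_coeffs(f, d, c):
    """Degree-d Chebyshev interpolation coefficients of a scalar f on [-c, c]."""
    n = d + 1
    j = np.arange(n)
    nodes = np.cos(np.pi * (j + 0.5) / n)                  # Chebyshev points on [-1, 1]
    samples = np.array([f(c * t) for t in nodes])          # map the nodes to [-c, c]
    a = np.array([(2.0 / n) * np.sum(samples * np.cos(np.pi * ell * (j + 0.5) / n))
                  for ell in range(n)])
    a[0] *= 0.5                                            # a[ell] multiplies tau_ell(mu) in (2.1)
    return a
```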
Solutions to the systems in (2.5) for many different \(\mu_{1}\) are approximated with the Krylov subspace method bi-conjugate gradient for shifted systems [2, 23]. Specifically, we approximate an equivalent right preconditioned system, where the system matrix incorporates a shift with the identity matrix. This system is given by
\[(\hat{K}-\mu_{1}\hat{M})(\hat{K}-\hat{\sigma}\hat{M})^{-1}\hat{u}(\mu_{1})=c \tag{2.7a}\] \[\Longleftrightarrow(\hat{K}-\mu_{1}\hat{M}+\hat{\sigma}\hat{M}-\hat{\sigma}\hat{M})(\hat{K}-\hat{\sigma}\hat{M})^{-1}\hat{u}(\mu_{1})=c \tag{2.7b}\] \[\Longleftrightarrow\left(I+(-\mu_{1}+\hat{\sigma})\hat{M}(\hat{K}-\hat{\sigma}\hat{M})^{-1}\right)\hat{u}(\mu_{1})=c \tag{2.7c}\]
with \(\hat{u}(\mu_{1})=(\hat{K}-\hat{\sigma}\hat{M})u(\mu_{1})\). The \(k\)th approximation comes from the Krylov subspace generated from the matrix \(\hat{M}(\hat{K}-\hat{\sigma}\hat{M})^{-1}\) and the vector \(c\), defined by
\[\mathcal{K}_{k}\coloneqq\operatorname{span}\{c,\hat{M}(\hat{K}-\hat{\sigma} \hat{M})^{-1}c,\ldots,(\hat{M}(\hat{K}-\hat{\sigma}\hat{M})^{-1})^{k-1}c\}. \tag{2.8}\]
Here the shift- and scaling- invariance properties of Krylov subspaces have been exploited, i.e., \(\mathcal{K}_{k}=\tilde{\mathcal{K}}_{k}\), where \(\tilde{\mathcal{K}}_{k}\) is the Krylov subspace generated from the matrix \((I+(-\mu_{1}+\hat{\sigma})\hat{M}(\hat{K}-\hat{\sigma}\hat{M})^{-1})\) and the vector \(c\). Note, several Krylov methods have been developed specifically to approximate shifted systems of the form (2.7c). See, for example, [22, 24, 42, 45], as well as in [7, 8], where shift-and-invert preconditioners were considered.
As we consider a BiCG setting, we require also a basis matrix for the subspace defined by
\[\mathcal{L}_{k}\coloneqq\operatorname{span}\{\tilde{c},\left(\hat{M}(\hat{K} -\hat{\sigma}\hat{M})^{-1}\right)^{T}\tilde{c},\ldots,\left((\hat{M}(\hat{K}- \hat{\sigma}\hat{M})^{-1})^{T}\right)^{k-1}\tilde{c}\}, \tag{2.9}\]
where \(\tilde{c}\in\mathbb{R}^{d_{1}n}\) and \(c^{T}\tilde{c}\neq 0\). In this way, a basis matrix for the Krylov subspace (2.8) and a second basis matrix for the subspace (2.9) are built once and reused for the computation of solutions to (2.7) for all \(\mu_{1}\) of interest. More concretely, a Lanczos
biorthogonalization process generates the matrices \(V_{k}\), \(W_{k}\in\mathbb{R}^{d_{1}n\times k}\), \(T_{k}\ \in\mathbb{R}^{k\times k}\), and \(\underline{T}_{k}\), \(\bar{T}_{k}^{T}\in\mathbb{R}^{(k+1)\times k}\) such that the relations
\[\hat{M}(\hat{K}-\hat{\sigma}\hat{M})^{-1}V_{k}=V_{k}T_{k}+\beta_{k}v_{k+1}e_{k}^{T}=V_{k+1}\underline{T}_{k}, \tag{2.10a}\]
\[\big(\hat{M}(\hat{K}-\hat{\sigma}\hat{M})^{-1}\big)^{T}W_{k}=W_{k}T_{k}^{T}+\gamma_{k}w_{k+1}e_{k}^{T}=W_{k+1}\bar{T}_{k}^{T} \tag{2.10b}\]
hold. The columns of \(V_{k}\) span the subspace (2.8), the columns of \(W_{k}\) span the subspace (2.9), and the biorthogonalization procedure gives \(W_{k}^{T}V_{k}=I_{k}\), where \(I_{k}\in\mathbb{R}^{k\times k}\) is the identity matrix of dimension \(k\times k\). Here \(e_{k}\) in (2.10) denotes the \(k\)th column of \(I_{k}\) and the matrices in (2.10) are generated independently of the parameter \(\mu_{1}\). The square matrix \(T_{k}\) is of the form
\[T_{k}\coloneqq\begin{bmatrix}\alpha_{1}&\gamma_{1}&&\\ \beta_{1}&\ddots&&\\ &\ddots&&\gamma_{k-1}\\ &&\beta_{k-1}&\alpha_{k}\end{bmatrix}\in\mathbb{R}^{k\times k}, \tag{2.11}\]
and the tridiagonal rectangular Hessenberg matrices \(\underline{T}_{k}\) and \(\bar{T}_{k}^{T}\) are given by
\[\underline{T}_{k}\coloneqq\begin{bmatrix}\alpha_{1}&\gamma_{1}&&\\ \beta_{1}&\ddots&\ddots&\\ &\ddots&\ddots&\gamma_{k-1}\\ &&\beta_{k-1}&\alpha_{k}\\ &&&\beta_{k}\end{bmatrix},\quad\bar{T}_{k}^{T}\coloneqq\begin{bmatrix}\alpha_{ 1}&\beta_{1}&&\\ \gamma_{1}&\ddots&\ddots&\\ &\ddots&\ddots&\beta_{k-1}\\ &&\gamma_{k-1}&\alpha_{k}\\ &&\gamma_{k}\end{bmatrix}, \tag{2.12}\]
where only the \(k\times k\) principal submatrices of \(\underline{T}_{k}\) and \(\bar{T}_{k}^{T}\) are the transpose of each other. The Lanczos biorthogonalization process has the advantage of the so-called short-term recurrence of the Krylov basis vectors, i.e., that the matrices \(\underline{T}_{k}\) and \(\bar{T}_{k}^{T}\) in (2.10) are tridiagonal. In this way, the basis vectors are computed recursively at each iteration of the algorithm, and no additional orthogonalization procedure is required.
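A minimal sketch of the biorthogonalization step is given below; it assumes the operator and its transpose are available as black-box functions (for instance \(x\mapsto\hat{M}(\hat{K}-\hat{\sigma}\hat{M})^{-1}x\) and its adjoint) and omits any breakdown handling or look-ahead, so it is not the implementation used in the experiments.

```
import numpy as np

def lanczos_biorth(apply_A, apply_AT, c, c_tilde, k):
    """
    Two-sided Lanczos biorthogonalization, cf. (2.10)-(2.12): builds V_k, W_k
    with W_k^T V_k = I_k and tridiagonal T_k such that
    A V_k = V_k T_k + beta_k v_{k+1} e_k^T.  No breakdown handling.
    """
    n = c.shape[0]
    V, W = np.zeros((n, k + 1)), np.zeros((n, k + 1))
    alpha, beta, gamma = np.zeros(k), np.zeros(k), np.zeros(k)
    V[:, 0] = c / np.linalg.norm(c)
    W[:, 0] = c_tilde / np.dot(c_tilde, V[:, 0])        # enforce w_1^T v_1 = 1
    for j in range(k):
        Av, ATw = apply_A(V[:, j]), apply_AT(W[:, j])
        alpha[j] = np.dot(W[:, j], Av)
        v_new = Av - alpha[j] * V[:, j] - (gamma[j - 1] * V[:, j - 1] if j else 0.0)
        w_new = ATw - alpha[j] * W[:, j] - (beta[j - 1] * W[:, j - 1] if j else 0.0)
        delta = np.dot(w_new, v_new)
        beta[j] = np.sqrt(abs(delta))
        gamma[j] = delta / beta[j]
        V[:, j + 1], W[:, j + 1] = v_new / beta[j], w_new / gamma[j]
    T = np.diag(alpha) + np.diag(beta[:-1], -1) + np.diag(gamma[:-1], 1)
    return V[:, :k], W[:, :k], T, beta[-1]
```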
The same shift-and-invert preconditioner and its adjoint must be applied at each iteration of the Lanczos biorthogonalization algorithm. We consider an efficient application, derived in [4] and adapted in [31], via a block LU decomposition of the matrix \(L_{\hat{\sigma}}U_{\hat{\sigma}}=(\hat{K}-\hat{\sigma}\hat{M})\Pi\), where \(\Pi\in\mathbb{R}^{d_{1}n\times d_{1}n}\) is a permutation matrix,
\[L_{\hat{\sigma}}\coloneqq\begin{bmatrix}I&&&&&\\ -\frac{2\hat{\sigma}}{c_{1}}I&I&&&&\\ I&-\frac{2\hat{\sigma}}{c_{1}}I&I&&&\\ &\ddots&\ddots&\ddots&&\\ &&I&-\frac{2\hat{\sigma}}{c_{1}}I&I&\\ \hat{P}_{1}&\cdots&\hat{P}_{d_{1}-3}&(-\hat{P}_{d_{1}}+\hat{P}_{d_{1}-2})&(\hat{P}_{d_{1}-1}+\frac{2\hat{\sigma}}{c_{1}}\hat{P}_{d_{1}})&\hat{P}(\hat{\sigma})\end{bmatrix}\in\mathbb{R}^{d_{1}n\times d_{1}n},\]
and
\[U_{\hat{\sigma}}\coloneqq\begin{bmatrix}I&&&&-\hat{\tau}_{1}(\hat{\sigma})I\\ &I&&&-\hat{\tau}_{2}(\hat{\sigma})I\\ &&\ddots&&\vdots\\ &&&I&-\hat{\tau}_{d_{1}-1}(\hat{\sigma})I\\ &&&&I\end{bmatrix}\in\mathbb{R}^{d_{1}n\times d_{1}n}.\]
Specifically, the action of the preconditioner \((\hat{K}-\hat{\sigma}\hat{M})^{-1}\) to a vector \(y\in\mathbb{R}^{d_{1}n}\) is given by
\[(\hat{K}-\hat{\sigma}\hat{M})^{-1}y=\Pi U_{\hat{\sigma}}^{-1}L_{\hat{\sigma}}^{- 1}y, \tag{2.13}\]
and the adjoint preconditioner is applied analogously, i.e.,
\[(\hat{K}-\hat{\sigma}\hat{M})^{-T}y=L_{\hat{\sigma}}^{-T}U_{\hat{\sigma}}^{-T} \Pi^{T}y. \tag{2.14}\]
The matrix \(U_{\hat{\sigma}}^{-1}\) is identical to \(U_{\hat{\sigma}}\), except for a sign change in the first \(d_{1}-1\) blocks in the last block column. Applying \(L_{\hat{\sigma}}^{-1}\) to a vector amounts to recursively computing the first \(d_{1}-1\) block elements and performing one linear solve with system matrix \(\hat{P}(\hat{\sigma})\in\mathbb{R}^{n\times n}\), analogous to solving a block lower triangular system with Gaussian elimination. This linear solve can be achieved, for example, by computing one LU decomposition of \(\hat{P}(\hat{\sigma})\). Note, an LU decomposition of \(\hat{P}(\hat{\sigma})\) can be reused in the application of \(L_{\hat{\sigma}}^{-T}\).
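The practical point is that one factorization of \(\hat{P}(\hat{\sigma})\) is computed once and then reused for every application of \(L_{\hat{\sigma}}^{-1}\) and \(L_{\hat{\sigma}}^{-T}\). A sketch with a toy stand-in matrix follows; the tridiagonal matrix below is illustrative only and is not the matrix \(\hat{P}(\hat{\sigma})\) of the experiments.

```
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy stand-in for P_hat(sigma): any sparse nonsingular n x n matrix.
n = 1000
P_sigma = sp.diags([np.full(n - 1, -1.0), np.full(n, 2.1), np.full(n - 1, -1.0)],
                   [-1, 0, 1], format="csc")

lu = spla.splu(P_sigma)            # factor once per target parameter sigma

def solve_P(y):                    # used inside the application of L_sigma^{-1}
    return lu.solve(y)

def solve_PT(y):                   # reused for the adjoint preconditioner
    return lu.solve(y, trans="T")

# Each Lanczos iteration triggers one solve with P_hat(sigma) and one with its transpose.
print(np.linalg.norm(P_sigma @ solve_P(np.ones(n)) - 1.0))
```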
After the Krylov subspace basis matrices of a desired dimension \(k\) have been constructed, approximations to (2.5) and, equivalently, approximations to (1.1), can be calculated efficiently for many values of the parameter \(\mu_{1}\). In particular, we reuse the matrices \(V_{k}\) and \(T_{k}\) in (2.10) for the computation of each approximation to \(x(\mu_{1}^{i},\mu_{2}^{*})_{i=1}^{n_{1}}\) of interest. This requires the calculations
\[y_{k}(\mu_{1}^{i})=\left(I_{k}+(-\mu_{1}^{i}+\hat{\sigma})T_{k}\right)^{-1}(\beta e_{1}), \tag{2.15a}\]
\[z_{k}(\mu_{1}^{i})=V_{k}\,y_{k}(\mu_{1}^{i}), \tag{2.15b}\]
\[x_{k}(\mu_{1}^{i},\mu_{2}^{*})=\left((\hat{K}-\hat{\sigma}\hat{M})^{-1}z_{k}(\mu_{1}^{i})\right)_{1:n}, \tag{2.15c}\]
for \(i=1,\ldots,n_{1}\). Here \(\beta\coloneqq\left\|b\right\|\), where \(b\) is as in (1.1), and \(x_{k}(\mu_{1},\mu_{2}^{*})\) denotes the approximation to (1.1) from the Krylov subspace (2.8) of dimension \(k\) corresponding to \((\mu_{1},\mu_{2}^{*})\). Equivalently, \(x_{k}(\mu_{1},\mu_{2}^{*})\approx\hat{x}(\mu_{1},\mu_{2}^{*})\), where \(\hat{x}(\mu_{1},\mu_{2}^{*})\) is the solution to the system (2.4). Note, the subscript on the right-hand side of (2.15c) denotes the first \(n\) elements in the vector, and the preconditioner in (2.15c) is applied as in (2.13). Since \(k\ll n\), solving the tridiagonal system in (2.15a) is not computationally demanding. Thus, once we build a sufficiently large Krylov subspace via one execution of the main algorithm, we have access to approximations \(x_{k}(\mu_{1}^{i},\mu_{2}^{*})\) for all \(\mu_{1}^{i}\) in the interval \([a_{1},b_{1}]\).
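A possible implementation of the cheap evaluations (2.15) is sketched below; `solve_shifted` stands for the preconditioner application (2.13), and all names are illustrative.

```
import numpy as np

def evaluate_for_all_mu1(mu1_values, V_k, T_k, beta, solve_shifted, n, sigma_hat):
    """
    Cheap evaluations (2.15): for each mu1, solve a small k x k system,
    expand with V_k, apply the preconditioner once, keep the first n entries.
    solve_shifted(z) should return (K_hat - sigma_hat*M_hat)^{-1} z.
    """
    k = T_k.shape[0]
    e1 = np.zeros(k)
    e1[0] = beta
    approximations = []
    for mu1 in mu1_values:
        y = np.linalg.solve(np.eye(k) + (sigma_hat - mu1) * T_k, e1)   # (2.15a)
        z = V_k @ y                                                     # (2.15b)
        x = solve_shifted(z)[:n]                                        # (2.15c)
        approximations.append(x)
    return np.array(approximations)
```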
The process of approximating (1.1) for a fixed \(\mu_{1}^{*}\) and \(\mu_{2}^{j}\), for \(j=1,\ldots,n_{2}\), is completely analogous to the above procedure and, thus, we provide just a summary here. A companion linearization is formed for a fixed \(\mu_{1}^{*}\), which is solved to approximate \(x(\mu_{1}^{*},\mu_{2}^{j})\), where \(\mu_{2}^{j}\in[a_{2},b_{2}]\), for \(j=1,\ldots,n_{2}\). This linearization has the form
\[\left(\tilde{K}-\mu_{2}\tilde{M}\right)u(\mu_{2})=c \tag{2.16}\]
and is based on the Chebyshev approximation
\[\tilde{P}(\mu_{2})\tilde{x}(\mu_{2})=b. \tag{2.17}\]
Here \(\tilde{P}(\mu_{2})\) is as in (2.2) and \(\tilde{x}(\mu_{2})=\tilde{x}(\mu_{1}^{*},\mu_{2})\approx x(\mu_{1}^{*},\mu_{2})\), where \(x(\mu_{1}^{*},\mu_{2})\) is as in (1.1). The linearization (2.16) is based on a three-term recurrence as in (2.6), and we consider a shift-and-invert preconditioner of the form \((\tilde{K}-\tilde{\sigma}\tilde{M})^{-1}\). As in (2.13) and (2.14), the application of this particular preconditioner and its adjoint each require the solution to a linear system with matrices \(\tilde{P}(\tilde{\sigma})\) and \(\tilde{P}(\tilde{\sigma})^{T}\), respectively, and
this must be done at each iteration of the Lanczos biorthogonalization. Analogous computations to (2.15) must be performed for each \(\mu_{2}^{j}\) of interest. Thus, using a second application of the method Preconditioned Chebyshev BiCG for parameterized linear systems to solve (2.17), we have access to approximations \(x_{k}(\mu_{1}^{*},\mu_{2}^{j})\) for all \(\mu_{2}^{j}\) on the interval \([a_{2},b_{2}]\), obtained in a similarly efficient way.
**Remark 1** (Choice of target parameter): _The use of shift-and-invert preconditioners with well-chosen \(\hat{\sigma}\) (\(\tilde{\sigma}\)) generally results in fast convergence, i.e., a few iterations of the shifted BiCG algorithm. This is because \((\hat{K}-\mu_{1}\hat{M})(\hat{K}-\hat{\sigma}\hat{M})^{-1}\approx I\), for \(\mu_{1}\approx\hat{\sigma}\) (and similarly \((\tilde{K}-\mu_{2}\tilde{M})(\tilde{K}-\tilde{\sigma}\tilde{M})^{-1}\approx I\), for \(\mu_{2}\approx\tilde{\sigma}\)). The result of this is that, typically, only a few matrix-vector products and linear (triangular) solves of dimension \(n\times n\) are required before the algorithm terminates. Thus, we have accurate approximations to \(x(\mu_{1}^{i},\mu_{2}^{*})\), \(i=1,\ldots,n_{1}\) (and \(x(\mu_{1}^{*},\mu_{2}^{j})\), \(j=1,\ldots,n_{2}\)), obtained in a cheap way. We refer to \(\hat{\sigma}\) and \(\tilde{\sigma}\) as target parameters for this reason._
**Remark 2** **(Inexact application of the preconditioner): _The LU decomposition of the matrices \(\hat{P}(\hat{\sigma})\) and \(\tilde{P}(\tilde{\sigma})\) of dimension \(n\times n\) can be avoided entirely by considering the inexact version of Preconditioned Chebyshev BiCG for parameterized linear systems, derived and fully analyzed in [17]. This method applies the preconditioner and its adjoint approximately via iterative methods and is suitable for systems where the dimension \(n\) is very large. In practice, the corresponding systems can be solved with a large amount of error once the relative residual of the outer method is sufficiently low._
**Remark 3** (Structure of the companion linearization): _Though we are interested in the solution to systems which depend on two parameters, we consider an interpolation in one variable. Interpolations of functions in two variables have been studied. However, the error of the approximation outside of the interpolation nodes tends to be too large for our purposes. Additionally, the recursion in (2.6) is essential for the structure of the companion linearizations (2.5) and, analogously, in (2.16). In particular, solutions to (1.1) must only appear in the solution vector, and the matrices \(\hat{K}\), \(\hat{M}\) (\(\tilde{K}\), \(\tilde{M}\)) and right-hand side vector \(c\) must be constant with respect to the parameters \(\mu_{1}\) and \(\mu_{2}\)._
**Remark 4** **(Error introduced by the Chebyshev approximation): _This work utilizes Chebfun[18], which computes approximations to the true Chebyshev coefficients. In particular, convergence is achieved as the approximate coefficients decay to zero, and only coefficients greater in magnitude than \(10^{-16}\) are used in the approximation. In the examples which follow, we consider only twice continuously differentiable functions for \(\hat{f}_{k}\) and \(\tilde{f}_{k}\) in (3). Thus, the error introduced by a Chebyshev approximation is very small._
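The truncation rule quoted in the remark can be imitated outside of Chebfun, for example with a plain least-squares Chebyshev fit; the coefficient function, interval, and degree in the sketch below are hypothetical.

```
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical smooth coefficient function f(mu) on [a, b]; Chebfun is emulated
# here by a Chebyshev fit on Chebyshev points of the first kind.
a, b, degree = 1.0, 2.0, 60
f = lambda mu: np.cos(mu) + mu**4

nodes = np.cos((2 * np.arange(degree + 1) + 1) * np.pi / (2 * (degree + 1)))  # on [-1, 1]
mu_nodes = 0.5 * (b - a) * nodes + 0.5 * (b + a)
coeffs = C.chebfit(nodes, f(mu_nodes), degree)

kept = np.abs(coeffs) > 1e-16      # truncation rule quoted from Chebfun
print("retained terms:", kept.sum(), "largest discarded:",
      np.abs(coeffs[~kept]).max() if (~kept).any() else 0.0)
```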
## 3 Sparse grid based HOPGD
Let \(X(\mu_{1},\mu_{2})\in\mathbb{R}^{n\times n_{1}\times n_{2}}\) be a sparse three-dimensional matrix of precomputed snapshots. These approximations to (1.1), generated by two executions of the preconditioned Krylov subspace method described in Section 2, correspond to the pairs of parameters \((\mu_{1}^{i},\mu_{2}^{*})\), \(i=1,\ldots,n_{1}\), and \((\mu_{1}^{*},\mu_{2}^{j})\), \(j=1,\ldots,n_{2}\). Here \(\mu_{1}^{i}\) are in the interval \([a_{1},b_{1}]\), \(\mu_{2}^{j}\) are in the interval \([a_{2},b_{2}]\), and \(\mu_{1}^{*}\in[a_{1},b_{1}]\), \(\mu_{2}^{*}\in[a_{2},b_{2}]\) are fixed values. More precisely,
\[X(\mu_{1}^{i},\mu_{2}^{*}) =\hat{x}(\mu_{1}^{i},\mu_{2}^{*})\in\mathbb{R}^{n},\quad i=1,\ldots,n_{1}, \tag{3.1a}\] \[X(\mu_{1}^{*},\mu_{2}^{j}) =\tilde{x}(\mu_{1}^{*},\mu_{2}^{j})\in\mathbb{R}^{n},\quad j=1,\ldots,n_{2}, \tag{3.1b}\]
and the remaining entries of \(X(\mu_{1},\mu_{2})\) are zeros; see Figure 1 for a visualization of a tensor of this form. Note, \(\hat{x}(\mu_{1}^{i},\mu_{2}^{*})=\hat{x}(\mu_{1}^{i})\) are approximations to the linear
systems described in (2.4) and \(\tilde{x}(\mu_{1}^{*},\mu_{2}^{j})=\tilde{x}(\mu_{2}^{j})\) are approximations to the systems in (2.17). We refer to the set
\[\mathbf{\mu}\coloneqq(\mu_{1}^{i},\mu_{2}^{*})\cup(\mu_{1}^{*},\mu_{2}^{j}), \tag{3.2}\]
for \(i=1,\ldots,n_{1}\), \(j=1,\ldots,n_{2}\), as the nodes and \(\mathbf{\mu}\subset[a_{1},b_{1}]\times[a_{2},b_{2}]\). The set of nodes corresponding to the tensor matrix \(X(\mu_{1},\mu_{2})\) in Figure 2 is visualized in Figure 3. This way of sampling the parameter space mitigates the so-called _curse of dimensionality_ in terms of the number of snapshots to generate and store.
Sparse grid based higher order proper generalized decomposition (HOPGD) [35] is a method which generates an approximation \(X^{m}(\mu_{1},\mu_{2})\in\mathbb{R}^{n\times n_{1}\times n_{2}}\) to the tensor matrix \(X(\mu_{1},\mu_{2})\in\mathbb{R}^{n\times n_{1}\times n_{2}}\) (3.1). Specifically, this expression is separated in the parameters \(\mu_{1}\) and \(\mu_{2}\) and is of the form
\[X(\mu_{1}^{i},\mu_{2}^{*})\approx X^{m}(\mu_{1}^{i},\mu_{2}^{*}) =\sum_{k=1}^{m}\Phi_{n}^{k}F_{1}^{k}(\mu_{1}^{i})F_{2}^{k}(\mu_{2}^{*})\in\mathbb{R}^{n},\quad i=1,\ldots,n_{1}, \tag{3.3a}\] \[X(\mu_{1}^{*},\mu_{2}^{j})\approx X^{m}(\mu_{1}^{*},\mu_{2}^{j}) =\sum_{k=1}^{m}\Phi_{n}^{k}F_{1}^{k}(\mu_{1}^{*})F_{2}^{k}(\mu_{2}^{j})\in\mathbb{R}^{n},\quad j=1,\ldots,n_{2} \tag{3.3b}\]
Figure 2: Sparse grid sampling in the parameter space. Nodes (3.2) correspond to the 9 snapshot solutions in Figure 2 with \(\mu_{1}\in[a_{1},b_{1}]\), \(\mu_{2}\in[a_{2},b_{2}]\), \(\mu_{1}^{*}=(a_{1}+b_{1})/2\), \(\mu_{2}^{*}=(a_{2}+b_{2})/2\). All snapshots generated via two executions of Preconditioned Chebyshev BiCG.
at each node in the set (3.2), where \(\Phi_{n}^{k}\in\mathbb{R}^{n}\) and
\[F_{1}^{k}:\mathbb{R}\to\mathbb{R},\quad F_{2}^{k}:\mathbb{R}\to\mathbb{R} \tag{3.4}\]
are one variable scalar functions of the parameters \(\mu_{1}\) and \(\mu_{2}\), respectively. Here \(m\) is the rank of the approximation and \(F_{l}^{k}\), \(l=1,2\), are the unknown functions of the \(k\)th mode. In this way, the original function \(X(\mu_{1},\mu_{2})\), evaluated at the nodes \(\boldsymbol{\mu}\), is estimated by a linear combination of products of lower-dimensional functions.
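Evaluating the separated representation (3.3) at a node only involves scalar and vector operations, as the following sketch (with random stand-in data) illustrates.

```
import numpy as np

def evaluate_separated_model(Phi, F1_vals, F2_vals, i, j):
    """
    Evaluate the rank-m representation (3.3) at the node (mu_1^i, mu_2^j).
    Phi:     (n, m) array, columns Phi_n^k
    F1_vals: (n1, m) array, entries F_1^k(mu_1^i)
    F2_vals: (n2, m) array, entries F_2^k(mu_2^j)
    """
    weights = F1_vals[i, :] * F2_vals[j, :]        # one scalar per mode k
    return Phi @ weights                           # sum_k Phi^k * F1^k * F2^k

# Tiny example with random data: n = 5, n1 = 3, n2 = 4, rank m = 2
rng = np.random.default_rng(0)
x = evaluate_separated_model(rng.standard_normal((5, 2)),
                             rng.standard_normal((3, 2)),
                             rng.standard_normal((4, 2)), i=1, j=2)
print(x.shape)   # (5,)
```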
The decomposition in (3.3) generates only the evaluations of \(F_{l}^{k}\), and the reduced basis vectors for the approximation are not known a priori. Instead, these vectors are determined on-the-fly, in contrast to other methods based on the related proper orthogonal decomposition [9; 14]. A visualization of this decomposition on a sparse tensor of snapshots appears in Figure 1. The spacing of the nodes is equidistant in both the \(\mu_{1}\) and \(\mu_{2}\) directions, as is conventional for methods of this type.
We note the similarity between this method and the well-known higher-order singular value decomposition (HOSVD) [34]. Additionally, the reduced order model generated via the method sparse grid based HOPGD and visualized in Figure 1 is of the same form as the established CANDECOMP/PARAFAC (CP) decomposition. Specifically, the approximation consists of a sum of rank-one tensors [30].
The model as expressed in (3.3) can only be evaluated on the set of nodes (3.2) corresponding to the snapshots included in the tensor matrix (3.1). To access other approximations, interpolations of \(F_{l}^{k}\) will be required; see Section 3.2. The particular structure of the sparse grid sampling incorporated here is very similar to that which is used in traditional sparse grid methods for PDEs [12]. Our approach differs from these methods, which are based on an interpolation of high dimensional functions using a hierarchical basis not considered here. See also [11] for a method in this style, where a reduced order model was constructed using hierarchical collocation to solve parametric problems.
**Remark 3.1** (Evaluation of the model at a point in space): The approximation in (3.3) is expressed using vectors of length \(n\), as the examples which follow in Sections 4 and 5 concern a parameterized PDE discretized on a spatial domain denoted by \(\Omega\). In particular, we are interested in the solutions to these PDEs for many values of the parameters \(\mu_{1}\) and \(\mu_{2}\). The approximation (3.3), expressed at a point in space, is denoted by
\[X(x_{0},\mu_{1},\mu_{2})\approx X^{m}(x_{0},\mu_{1},\mu_{2})=\sum_{k=1}^{m}\Phi^{k}(x_{0})F_{1}^{k}(\mu_{1})F_{2}^{k}(\mu_{2})\in\mathbb{R}, \tag{3.5}\]
for a particular \((\mu_{1},\mu_{2})\in\boldsymbol{\mu}\) (3.2), where \(x_{0}\in\Omega\subset\mathbb{R}^{d}\) is a spatial parameter, \(d\) is the dimension of the spatial domain, and \(\Phi^{k}(x_{0})\in\mathbb{R}\) is an entry of the vector \(\Phi_{n}^{k}\in\mathbb{R}^{n}\) in (3.3). Specifically, \(\Phi_{n}^{k}\) denotes \(\Phi^{k}\) evaluated on a set of \(n\) spatial discretization points \(\Omega_{dp}\), where \(\Omega_{dp}\subset\Omega\). As in (3.3), to evaluate the model in (3.5) for values of \(\mu_{1}\) and \(\mu_{2}\) outside of the set of nodes, interpolations must be performed. \({}_{\Box}\)
### Computation of the separated expression
Traditionally, tensor decomposition methods require sampling performed on full grids in the parameter space, requiring the computation and storage of many snapshot solutions. See [10; 36; 38] for HOPGD performed in this setting, as well as an example in Section 4.2. The tensor decomposition proposed in [35] is specifically designed for sparse tensors. In particular, the sampling is of the same form considered in Section 2 and shown in Figures 1 and 2. This approach is more efficient, as it requires fewer snapshots
and, as a result, the decomposition is performed in fewer computations. The method in [35], adapted to our particular setting, is summarized here, and details on the full derivation appear in Appendix A.
We seek an approximation to the nonzero vertical columns of \(X(\mu_{1},\mu_{2})\) described in (3.1), separated in the parameters \(\mu_{1}\in[a_{1},b_{1}]\) and \(\mu_{2}\in[a_{2},b_{2}]\). The full parameter domain is defined by \(\mathcal{D}\coloneqq\Omega_{n}\times[a_{1},b_{1}]\times[a_{2},b_{2}]\), where \(\Omega_{n}\subset\mathbb{R}^{n}\). We express solutions as vectors of length \(n\), i.e., as approximations to (1.1) evaluated at a particular node \((\mu_{1},\mu_{2})\); cf. (3.5), where solutions are expressed at a single point in the spatial domain. The approximations satisfy
\[X(\mu_{1},\mu_{2})\approx X^{m}(\mu_{1},\mu_{2}) \tag{3.6a}\]
\[=\sum_{k=1}^{m}\Phi_{n}^{k}F_{1}^{k}(\mu_{1})F_{2}^{k}(\mu_{2}) \tag{3.6b}\]
\[=X^{m-1}(\mu_{1},\mu_{2})+\Phi_{n}^{m}F_{1}^{m}(\mu_{1})F_{2}^{m}(\mu_{2}), \tag{3.6c}\]
for \((\mu_{1},\mu_{2})\in\boldsymbol{\mu}\), where \(\boldsymbol{\mu}\) is the set of nodes defined in (3.2), and are generated via an alternating directions strategy. Specifically, at each enrichment step \(m\), all components \(\Phi_{n}^{m}\in\mathbb{R}^{n}\), \(F_{1}^{m}(\mu_{1})\in\mathbb{R}\), \(F_{2}^{m}(\mu_{2})\in\mathbb{R}\) in (3.6c) are initialized then updated sequentially to construct an \(L^{2}\) projection of \(X(\mu_{1},\mu_{2})\). This is done via a greedy algorithm, where all components are assumed fixed, except the one we seek to compute. The process is equivalent to the minimization problem described as follows. At step \(m\), find \(X^{m}(\mu_{1},\mu_{2})\in V_{m}\subset L^{2}(\mathcal{D})\) as in (3.6) such that
\[\min_{X^{m}(\mu_{1},\mu_{2})}\left(\frac{1}{2}\left\|wX(\mu_{1},\mu_{2})-wX^{ m}(\mu_{1},\mu_{2})\right\|_{L^{2}(\mathcal{D})}^{2}\right), \tag{3.7}\]
where \(w\) is the sampling index, i.e.,
\[\begin{cases}w=1,&(\mu_{1},\mu_{2})\in\boldsymbol{\mu},\\ w=0,&\text{otherwise},\end{cases}\]
\(\mathcal{D}\) is the full parameter domain, and \(V_{m}\) represents a set of test functions. Equivalently, using the weak formulation, we seek the solution \(X^{m}(\mu_{1},\mu_{2})\) to
\[(wX^{m}(\mu_{1},\mu_{2}),\delta X)_{\mathcal{D}}=(wX(\mu_{1},\mu_{2}),\delta X )_{\mathcal{D}},\quad\forall\delta X\in V_{m}. \tag{3.8}\]
Here \((\cdot,\cdot)_{\mathcal{D}}\) denotes the integral of the scalar product over the domain \(\mathcal{D}\), and (3.8) can be written as
\[\left(w\Phi_{n}^{m}F_{1}^{m}F_{2}^{m},\delta X\right)_{\mathcal{D}}=\left(wX( \mu_{1},\mu_{2})-wX^{m-1}(\mu_{1},\mu_{2}),\delta X\right)_{\mathcal{D}}, \quad\forall\delta X\in V_{m}; \tag{3.9}\]
see details in Appendix A. We have simplified the notation in (3.9) with \(F_{1}^{m}\coloneqq F_{1}^{m}(\mu_{1})\) and \(F_{2}^{m}\coloneqq F_{2}^{m}(\mu_{2})\) for readability. In practice, approximations to (3.9) are computed successively via a least squares procedure until a fixed point is detected. The \(m\)th test function is given by \(\delta X=\delta\Phi^{m}F_{1}^{m}F_{2}^{m}+\Phi_{n}^{m}\delta F_{1}^{m}F_{2}^{ m}+\Phi_{n}^{m}F_{1}^{m}\delta F_{2}^{m}\), and the approximation (3.6c) is updated with the resulting components.
More concretely, let the initializations \(\Phi_{n}^{m}\), \(\boldsymbol{f_{1}^{m}}\coloneqq\left[F_{1}^{m}(\mu_{1}^{1}),\ldots,F_{1}^{m}( \mu_{1}^{n_{1}})\right]\in\mathbb{R}^{n_{1}}\), \(\boldsymbol{f_{2}^{m}}\coloneqq\left[F_{2}^{m}(\mu_{2}^{1}),\ldots,F_{2}^{m}( \mu_{2}^{n_{2}})\right]\in\mathbb{R}^{n_{2}}\) be given. We seek first an update of \(\Phi_{n}^{m_{\text{old}}}\coloneqq\Phi_{n}^{m}\), assuming \(F_{1}^{m_{\text{old}}}\coloneqq F_{1}^{m}\), \(F_{2}^{m_{\text{old}}}\coloneqq F_{2}^{m}\) are known function evaluations for all \((\mu_{1},\mu_{2})\in\boldsymbol{\mu}\). We update \(\Phi_{n}^{m}\) as
\[\Phi_{n}^{m}=\frac{\sum_{\boldsymbol{\mu}}r_{m-1}(\mu_{1},\mu_{2})}{\sum_{ \boldsymbol{\mu}}F_{1}^{m}(\mu_{1})F_{2}^{m}(\mu_{2})}, \tag{3.10}\]
where the \((m-1)\)th residual vector is given by
\[r_{m-1}(\mu_{1},\mu_{2})\coloneqq X(\mu_{1},\mu_{2})-X^{m-1}(\mu_{1},\mu_{2}) \in\mathbb{R}^{n}. \tag{3.11}\]
Denote the vector obtained in (3.10) by \(\Phi_{n}^{m_{\text{new}}}\) and seek an update of \(F_{1}^{m_{\text{old}}}(\mu_{1}^{i})\), assuming \(\Phi_{n}^{m}\), \(F_{2}^{m}(\mu_{2}^{j})\) known, using
\[F_{1}^{m}(\mu_{1}^{i})=\frac{(\Phi_{n}^{m})^{T}r_{m-1}(\mu_{1}^{i},\mu_{2}^{*} )}{(\Phi_{n}^{m})^{T}\Phi_{n}^{m}F_{2}^{m}(\mu_{2}^{*})}, \tag{3.12}\]
for \(i=1,\ldots,n_{1}\). We denote the solutions found in (3.12) by \(F_{1}^{m_{\text{new}}}(\mu_{1}^{i})\). Note, \(\Phi_{n}^{m_{\text{new}}}\) and \(F_{2}^{m_{\text{old}}}(\mu_{2}^{j})\) are used in the computations in (3.12). The updates \(F_{2}^{m}(\mu_{2}^{j})\) are computed as
\[F_{2}^{m}(\mu_{2}^{j})=\frac{(\Phi_{n}^{m})^{T}r_{m-1}(\mu_{1}^{*},\mu_{2}^{j})}{(\Phi_{n}^{m})^{T}\Phi_{n}^{m}F_{1}^{m}(\mu_{1}^{*})}, \tag{3.13}\]
for \(j=1,\ldots,n_{2}\), which we denote by \(F_{2}^{m_{\text{new}}}(\mu_{2}^{j})\), where \(\Phi_{n}^{m_{\text{new}}}\) and \(F_{1}^{m_{\text{new}}}(\mu_{1}^{i})\) were used in the computations in (3.13). For a certain tolerance level \(\varepsilon_{1}\), if the relation
\[\|\delta(\mu_{1},\mu_{2})\|<\varepsilon_{1} \tag{3.14}\]
holds for all \((\mu_{1},\mu_{2})\in\boldsymbol{\mu}\), where
\[\delta(\mu_{1},\mu_{2})\coloneqq\Phi_{n}^{m_{\text{old}}}F_{1}^{m_{\text{old }}}(\mu_{1})F_{2}^{m_{\text{old}}}(\mu_{2})-\Phi_{n}^{m_{\text{new}}}F_{1}^{m_ {\text{new}}}(\mu_{1})F_{2}^{m_{\text{new}}}(\mu_{2}), \tag{3.15}\]
we have approached a fixed point. In this case, the approximation in (3.6c) is updated with the components \(\Phi_{n}^{m}\), \(F_{1}^{m}(\mu_{1}^{i})\), and \(F_{2}^{m}(\mu_{2}^{j})\). If the condition (3.14) is not met, we set \(\Phi_{n}^{m_{\text{old}}}=\Phi_{n}^{m_{\text{new}}}\), \(F_{1}^{m_{\text{old}}}(\mu_{1}^{i})\coloneqq F_{1}^{m_{\text{new}}}(\mu_{1}^{ i})\), \(F_{2}^{m_{\text{old}}}(\mu_{2}^{j})\coloneqq F_{2}^{m_{\text{new}}}(\mu_{2}^{j})\) and repeat the process described above. If
\[\frac{\|X(\mu_{1},\mu_{2})-X^{m}(\mu_{1},\mu_{2})\|}{\|X(\mu_{1},\mu_{2})\|}< \varepsilon_{2}\]
is met for a specified \(\varepsilon_{2}\) and all \((\mu_{1},\mu_{2})\in\boldsymbol{\mu}\), the algorithm terminates. Otherwise, we seek \(\Phi_{n}^{m+1}\), \(F_{1}^{m+1}(\mu_{1}^{i})\), \(F_{2}^{m+1}(\mu_{2}^{j})\) using the same procedure, initialized with the most recent updates. This process is summarized in Algorithm 1.
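One sweep of the updates (3.10)-(3.13) can be sketched as follows; the residuals are stored per node, the centre indices `i_star`, `j_star` locate \(\mu_{1}^{*}\) and \(\mu_{2}^{*}\), and all names are illustrative rather than part of the original presentation.

```
import numpy as np

def hopgd_sweep(Phi, F1, F2, residual, nodes, i_star, j_star):
    """
    One alternating-directions sweep of (3.10)-(3.13) for the current mode m.
    residual: dict mapping a node (i, j) to r_{m-1}(mu_1^i, mu_2^j) in R^n
    nodes:    list of the sparse-grid index pairs, cf. (3.2)
    F1, F2:   current evaluations F_1^m(mu_1^i), F_2^m(mu_2^j) (1-d arrays)
    """
    # (3.10) as printed: Phi from all residuals on the sparse grid
    num = sum(residual[node] for node in nodes)
    den = sum(F1[i] * F2[j] for (i, j) in nodes)
    Phi = num / den
    # (3.12): F_1 along the line mu_2 = mu_2^*, using the old F_2 evaluation
    for i in range(F1.size):
        F1[i] = Phi @ residual[(i, j_star)] / (Phi @ Phi * F2[j_star])
    # (3.13): F_2 along the line mu_1 = mu_1^*, using the updated Phi and F_1
    for j in range(F2.size):
        F2[j] = Phi @ residual[(i_star, j)] / (Phi @ Phi * F1[i_star])
    return Phi, F1, F2
```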
**Remark 3.2** (Separated expression in more than two parameters): _The reduced order model (3.3) is separated in the two parameters \(\mu_{1}\) and \(\mu_{2}\). In [35], models separated in as many as six parameters were constructed, where the sampling was performed on a sparse grid in the parameter space. The authors note this particular approach for the decomposition may not be optimal in the sense of finding the best approximation, though it is possible. Our proposed way of computing the snapshots could be generalized to a setting separated in s parameters. In this way, the procedure would fix \((s-1)\) parameters at a time and execute Chebyshev BiCG a total of s times._
**Remark 3.3** (Comparison of HOPGD to similar methods): _In [38], HOPGD was studied alongside the similar HOSVD method. The decompositions were separated in as many as six parameters, and the sampling was performed on a full grid in the parameter space. In general, HOPGD produced separable representations with fewer terms compared with HOSVD. In this way, the model constructed by HOPGD can be evaluated in a more efficient manner. Furthermore, the method HOPGD does not require the number of terms in the separated solution to be set a priori, as in a CP decomposition._
```
Input: Set of nodes \(\boldsymbol{\mu}\) as in (3.2), tensor matrix of precomputed snapshots \(X(\mu_{1},\mu_{2})\) as in (3.1), tolerances \(\varepsilon_{1},\varepsilon_{2}>0\)
Output: Function \(\boldsymbol{X}^{m}(\mu_{1},\mu_{2}):[a_{1},b_{1}]\times[a_{2},b_{2}]\to\mathbb{R}^{n}\), approximation to (1.1)
Initialize: \(\Phi_{n}^{1}\in\mathbb{R}^{n}\), \(F_{1}^{1}(\mu_{1}^{i})\in\mathbb{R}\), \(F_{2}^{1}(\mu_{2}^{j})\in\mathbb{R}\), \(i=1,\ldots,n_{1}\), \(j=1,\ldots,n_{2}\)
1   \(m=1\)
2   Set \(X^{0}(\mu_{1}^{i},\mu_{2}^{j})\coloneqq 0\), \(X^{m}(\mu_{1}^{i},\mu_{2}^{j})\coloneqq\Phi_{n}^{m}F_{1}^{m}(\mu_{1}^{i})F_{2}^{m}(\mu_{2}^{j})\)
3   while \(\|X(\mu_{1},\mu_{2})-X^{m}(\mu_{1},\mu_{2})\|\,/\,\|X(\mu_{1},\mu_{2})\|>\varepsilon_{2}\), for any \((\mu_{1},\mu_{2})\in\boldsymbol{\mu}\), do
4       Set \(\Phi_{n}^{m_{\text{old}}}\coloneqq\Phi_{n}^{m}\), \(F_{1}^{m_{\text{old}}}(\mu_{1}^{i})\coloneqq F_{1}^{m}(\mu_{1}^{i})\), \(F_{2}^{m_{\text{old}}}(\mu_{2}^{j})\coloneqq F_{2}^{m}(\mu_{2}^{j})\)
5       Compute \(\Phi_{n}^{m}\) with (3.10)
6       Set \(\Phi_{n}^{m_{\text{new}}}\coloneqq\Phi_{n}^{m}\)
7       Compute \(F_{1}^{m}(\mu_{1}^{i})\) with (3.12)
8       Set \(F_{1}^{m_{\text{new}}}(\mu_{1}^{i})\coloneqq F_{1}^{m}(\mu_{1}^{i})\)
9       Compute \(F_{2}^{m}(\mu_{2}^{j})\) with (3.13)
10      Set \(F_{2}^{m_{\text{new}}}(\mu_{2}^{j})\coloneqq F_{2}^{m}(\mu_{2}^{j})\)
11      Compute \(\delta(\mu_{1}^{i},\mu_{2}^{j})\) with (3.15)
12      if \(\|\delta(\mu_{1},\mu_{2})\|<\varepsilon_{1}\), for all \((\mu_{1},\mu_{2})\in\boldsymbol{\mu}\), then
13          Update \(X^{m}(\mu_{1},\mu_{2})\) as in (3.6c)
14          Set \(m\coloneqq m+1\)
15          Set \(\Phi_{n}^{m}\coloneqq\Phi_{n}^{m-1}\), \(F_{1}^{m}(\mu_{1}^{i})\coloneqq F_{1}^{m-1}(\mu_{1}^{i})\), \(F_{2}^{m}(\mu_{2}^{j})\coloneqq F_{2}^{m-1}(\mu_{2}^{j})\)
16      end if
17  end while
18  return the interpolation \(\boldsymbol{X}^{m}(\mu_{1},\mu_{2})\) computed as in (3.16)
```
**Algorithm 1** Sparse grid based Chebyshev HOPGD for parameterized linear systems
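A sketch of the outer loop of Algorithm 1 is given below. It assumes a `sweep` callable implementing (3.10)-(3.13) (for instance the sketch above with the centre indices bound via `functools.partial`) and uses the warm start of line 15; tolerances and iteration caps are illustrative only.

```
import numpy as np

def hopgd_decompose(residual0, nodes, n, sweep, eps1=1e-8, eps2=1e-6,
                    max_modes=50, max_sweeps=200):
    """
    Outer loop of Algorithm 1 (sketch): residual0 maps each node to the
    snapshot X(mu_1, mu_2); modes (Phi, F1, F2) are appended until the relative
    residual at every node drops below eps2.
    """
    modes = []
    residual = {key: val.copy() for key, val in residual0.items()}
    norms = {key: np.linalg.norm(val) for key, val in residual0.items()}
    n1 = 1 + max(i for i, _ in nodes)
    n2 = 1 + max(j for _, j in nodes)
    for _ in range(max_modes):
        Phi, F1, F2 = (np.ones(n), np.ones(n1), np.ones(n2)) if not modes else \
                      (modes[-1][0].copy(), modes[-1][1].copy(), modes[-1][2].copy())
        for _ in range(max_sweeps):                               # fixed-point iteration
            old = (Phi.copy(), F1.copy(), F2.copy())
            Phi, F1, F2 = sweep(Phi, F1, F2, residual, nodes)
            delta = max(np.linalg.norm(old[0] * old[1][i] * old[2][j]
                                       - Phi * F1[i] * F2[j]) for (i, j) in nodes)
            if delta < eps1:                                      # stopping rule (3.14)
                break
        modes.append((Phi, F1, F2))
        for (i, j) in nodes:                                      # update residuals r_m
            residual[(i, j)] = residual[(i, j)] - Phi * F1[i] * F2[j]
        if all(np.linalg.norm(residual[key]) / norms[key] < eps2 for key in nodes):
            break
    return modes
```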
### Interpolated model
The tensor representation in (3.3) consists, in part, of one-dimensional functions \(F_{1}^{k}\) and \(F_{2}^{k}\), for \(k=1,\ldots,m\). Note, we have only evaluations of these functions at \(\mu_{1}^{i}\), \(i=1,\ldots,n_{1}\) and \(\mu_{2}^{j}\), \(j=1,\ldots,n_{2}\), respectively, and no information about these functions outside of these points. In this way, we cannot approximate the solution to (1.1) for all \((\mu_{1},\mu_{2})\in[a_{1},b_{1}]\times[a_{2},b_{2}]\) using (3.3) as
written.
We can use the evaluations \(F_{1}^{k}(\mu_{1}^{i})\) and \(F_{2}^{k}(\mu_{2}^{j})\) in (3.3) to compute an interpolation of these one-dimensional functions in a cheap way. In practice, any interpolation is possible and varying the type of interpolation does not contribute significantly to the overall error in the approximation. Thus, we make the following interpolation of the representation in (3.3):
\[\mathbf{X}^{m}(\mu_{1},\mu_{2})=\sum_{k=1}^{m}\Phi_{n}^{k}\mathbf{F}_{1}^{k}(\mu_{1}) \mathbf{F}_{2}^{k}(\mu_{2})\in\mathbb{R}^{n}, \tag{3.16}\]
where \(\mathbf{F}_{1}^{k}\), \(\mathbf{F}_{2}^{k}\) are spline interpolations of \(F_{1}^{k}\), \(F_{2}^{k}\) (3.4), respectively, and \(\mathbf{X}^{m}(\mu_{1},\mu_{2}):[a_{1},b_{1}]\times[a_{2},b_{2}]\to\mathbb{R}^{n}\). The interpolation in (3.16) can be evaluated to approximate (1.1) for other \((\mu_{1},\mu_{2})\in[a_{1},b_{1}]\times[a_{2},b_{2}]\) in a cheap way. This approximation to \(x(\mu_{1},\mu_{2})\) is denoted as
\[x^{m}(\mu_{1},\mu_{2})\in\mathbb{R}^{n}, \tag{3.17}\]
and we compute the corresponding relative error as follows:
\[\frac{\|x^{m}(\mu_{1},\mu_{2})-x(\mu_{1},\mu_{2})\|_{2}}{\|x(\mu_{1},\mu_{2}) \|_{2}}.\]
Note, simply interpolating several snapshots to estimate solutions to (1.1) for all \((\mu_{1},\mu_{2})\) in the parameter space is not a suitable approach, as the solutions tend to be extremely sensitive to the parameters [38].
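A possible construction of the interpolated model (3.16) with cubic splines, together with the relative error used in the experiments, is sketched below; the choice of spline is one option among the interpolations mentioned above, and all names are illustrative.

```
import numpy as np
from scipy.interpolate import CubicSpline

def build_interpolated_model(mu1_nodes, mu2_nodes, modes):
    """
    Construct the interpolated model (3.16): each F_1^k and F_2^k (known only
    at the nodes) is replaced by a cubic spline, so the model can be evaluated
    at any (mu_1, mu_2) in [a_1, b_1] x [a_2, b_2].
    modes: list of (Phi, F1_vals, F2_vals) triples from the decomposition.
    """
    splines = [(Phi, CubicSpline(mu1_nodes, F1_vals), CubicSpline(mu2_nodes, F2_vals))
               for Phi, F1_vals, F2_vals in modes]

    def X_m(mu1, mu2):
        return sum(Phi * float(s1(mu1)) * float(s2(mu2)) for Phi, s1, s2 in splines)

    return X_m

# Relative error against a reference solution, as reported in Sections 4 and 5:
def relative_error(x_ref, x_approx):
    return np.linalg.norm(x_approx - x_ref) / np.linalg.norm(x_ref)
```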
To solve the parameter estimation problem for a given solution \(x(\mu_{1},\mu_{2})\in\mathbb{R}^{n}\) which depends on unknown parameters \((\mu_{1},\mu_{2})\), we use the Matlab routine fmincon which uses the sequential quadratic programming algorithm [40] to find the pair of values \((\mu_{1},\mu_{2})\) in the domain which minimizes the quantity
\[\left\|x(\mu_{1},\mu_{2})-x^{m}(\mu_{1},\mu_{2})\right\|_{2}. \tag{3.18}\]
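In a Python setting, the same minimization could be carried out, for example, with SciPy's SLSQP routine as a stand-in for fmincon (both are SQP-type methods); the sketch below is illustrative and not the implementation used in the experiments.

```
import numpy as np
from scipy.optimize import minimize

def estimate_parameters(x_observed, X_m, bounds, mu0):
    """
    Parameter estimation (3.18): find (mu_1, mu_2) in the admissible box that
    minimizes || x_observed - X_m(mu_1, mu_2) ||_2.
    bounds: [(a_1, b_1), (a_2, b_2)], mu0: initial guess.
    """
    objective = lambda mu: np.linalg.norm(x_observed - X_m(mu[0], mu[1]))
    result = minimize(objective, x0=np.asarray(mu0), bounds=bounds, method="SLSQP")
    return result.x, result.fun
```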
Simulations from a parameterized Helmholtz equation and a parameterized advection-diffusion equation appear in Sections 4 and 5, respectively. In these experiments, an interpolation of a reduced order model is constructed to approximate the solutions to the parameterized PDEs.
**Remark 3.4** (Offline and online stages): _In practice, the fixed point method in Algorithm 1 can require many iterations until convergence, though the cost of each step scales linearly with the number of unknowns. Residual-based accelerators have been studied to reduce the number of these iterations [35]. This strategy, though outside the scope of this work, has been shown to be beneficial in situations where the snapshots depended strongly on the parameters. Only one successful decomposition is required to construct (3.16). Thus, the offline stage of the method consists of generating the snapshots with Chebyshev BiCG and executing Algorithm 1._
_Evaluating the reduced order model (3.16) requires only scalar and vector operations and, therefore, a variety of approximations to (1.1) can be obtained very quickly, even when the number of unknowns is large. We consider this part of the proposed method the online stage. In the experiment corresponding to Figure 4.5, evaluating the reduced order model had simulation time \(0.010601\) CPU seconds. Generating the corresponding finite element solution with backslash in Matlab had simulation time \(1.197930\) CPU seconds._
**Remark 3.5** (Sources of error): The approximations generated by Algorithm 1 contain error from the Chebyshev approximation, the iterative method Preconditioned Chebyshev BiCG, the low-rank approximation, as well as the interpolations in (3.16). As noted in Remark 4, the Chebyshev approximation has a very small impact on the overall error, and Figure 4 shows that the iterative method obtains relatively accurate approximations to the true solutions. In practice, the interpolations performed in (3.16) are done on smooth functions, as visualized in Figure 3. The largest source of error from our proposed algorithm stems from the tensor decomposition, i.e., lines \(1\)-\(17\) in Algorithm 1. This can be seen, for example, in Figure 4.
## 4 Numerical simulations of a parameterized Helmholtz equation
We consider both a reduced order model for a parameterized Helmholtz equation and a parameter estimation problem for the solution to a parameterized Helmholtz equation. Such settings occur naturally, for example, in the study of geophysics; see [38, 43, 47]. These prior works were also based on a reduced order model, constructed using PGD. Similarly, the method PGD was used in the study of thermal process in [1], and a reduced basis method for solving parameterized PDEs was considered in [27].
In the simulations which follow, the matrices \(A_{i}\) arise from a finite element discretization, and \(b\) is the corresponding load vector. All matrices and vectors here were generated using the finite element software FEniCS [3]. The solutions to these discretized systems were approximated with a modified version of the Krylov subspace method Preconditioned Chebyshev BiCG [17], as described in Section 2. This strategy requires a linear solve with a matrix of dimension \(n\times n\) on each application of the preconditioner and of its adjoint. We have chosen to perform the linear solve by computing one LU decomposition per execution of the main algorithm, which is reused at each subsequent iteration accordingly. This can be avoided by considering the inexact version of Preconditioned Chebyshev BiCG; see Remark 2.
This work proposes a novel improvement in generating snapshot solutions necessary for constructing models of this type. We choose to include three different examples in this section in order to fully capture the versatility of our strategy. Once the interpolation (3.16) has been constructed, evaluating the approximations for many values of the parameters \(\mu_{1}\) and \(\mu_{2}\) can be done in an efficient manner; see Remark 3.4. In the case of the tensor decomposition failing to converge sufficiently, a new set of snapshots can be generated with little extra work and a new decomposition can be attempted; see Section 2.
### First simulation, snapshots on a sparse grid
Consider the Helmholtz equation given by
\[\nabla^{2}u(x)+f(\mu_{1},\mu_{2})u(x) =h(x), x\in\Omega, \tag{4.1a}\] \[u(x) =0, x\in\partial\Omega, \tag{4.1b}\]
where \(\Omega\subset\mathbb{R}^{2}\) is as in Figure 5, \(\mu_{1}\in[1,2]\), \(\mu_{2}\in[1,2]\), \(f(\mu_{1},\mu_{2})=2\pi^{2}+\cos(\mu_{1})+\mu_{1}^{4}+\sin(\mu_{2})+\mu_{2}\), and \(h(x)=\sin(\pi x_{1})\sin(\pi x_{2})\). A discretization of (4.1) is of the form (1.1), where
\[A(\mu_{1},\mu_{2})\coloneqq A_{0}+f(\mu_{1},\mu_{2})A_{1}.\]
We are interested in approximating \(u(x)\) in (4.1) for many different pairs of \((\mu_{1},\mu_{2})\) simultaneously.
The reduced order model \(\mathbf{X}^{m}(\mu_{1},\mu_{2})\) is constructed as described in Algorithm 1. We can evaluate this model for \((\mu_{1},\mu_{2})\) outside of the nodes corresponding to the
snapshot solutions. The particular \(13\) nodes considered in this simulation are plotted in the parameter space, shown in Figure 1. Figure 2 displays the convergence of the two executions of Preconditioned Chebyshev BiCG required to generate the snapshots corresponding to the nodes in Figure 1. Specifically, in Figure 2(a), we see faster convergence for approximations \(\hat{x}(\mu_{1},\mu_{2}^{*})\) where the value \(\mu_{1}\) is closer to the target parameter \(\hat{\sigma}\) in (2.7). An analogous result holds in Figure 2(b) for approximations \(\tilde{x}(\mu_{1}^{*},\mu_{2})\) where \(\mu_{2}\) is closer to \(\tilde{\sigma}\); see Remark 1. We require the LU decompositions of \(2\) different matrices of dimension \(n\times n\) for this simulation.
In Figure 3, we see the interpolations \(\mathbf{F}_{1}^{1}(\mu_{1})\) and \(\mathbf{F}_{2}^{1}(\mu_{2})\) as in (3.16) plotted along with the function evaluations \(F_{1}^{1}(\mu_{1}^{i})\) and \(F_{2}^{1}(\mu_{2}^{j})\) generated by Algorithm 1, where \(\mu_{1}^{i}\) and \(\mu_{2}^{j}\) are as in Figure 1. Figure 4 shows the percent relative error for approximations to (4.1) for a variety of pairs \((\mu_{1},\mu_{2})\) different from the nodes plotted in Figure 1. As in [36], we consider approximate solutions with percent relative error below \(6\%\) reliable and approximations with percent relative error below \(1.5\%\) accurate, where the error is computed by comparing the approximation to the solution obtained using backslash in Matlab. Figures 5, 6, and 7 show both the finite element solution and the solution generated from (3.16) for the sake of comparison. Here we display these solutions for \(3\) pairs of \((\mu_{1},\mu_{2})\). These approximations all have percent relative error below \(1.5\%\). A variety of solutions can be obtained from this reduced order model.
### Second simulation, snapshots on a full grid

Consider the Helmholtz equation given by
\[\left(\nabla^{2}+f_{1}(\mu_{1})+f_{2}(\mu_{2})\alpha(x)\right)u(x) =h(x), x\in\Omega, \tag{4.2a}\] \[u(x) =0, x\in\partial\Omega, \tag{4.2b}\]
where \(\Omega\subset\mathbb{R}^{2}\) is as in Figure 4.9, \(\alpha(x)=x_{2}\), \(\mu_{1}\in[1,2]\), \(\mu_{2}\in[1,2]\), and \(f_{1}(\mu_{1})=\cos(\mu_{1})+\mu_{1}^{3}\), \(f_{2}(\mu_{2})=\sin(\mu_{2})+\mu_{2}^{2}\), \(h(x)=\sin(\pi x_{1})\sin(\pi x_{2})\). A discretization of (4.2) is of the form (1.1), where
\[A(\mu_{1},\mu_{2}):=A_{0}+f_{1}(\mu_{1})A_{1}+f_{2}(\mu_{2})A_{2},\]
and approximating \(u(x)\) in (4.2) for many different pairs of \((\mu_{1},\mu_{2})\) is of interest. We can still exploit the structure of the sampling when generating the snapshots by fixing one parameter and returning solutions as a one variable function of the other parameter. As in the simulations based on sparse grid sampling, if the decomposition fails to reach a certain accuracy level, a new set of snapshots can be generated with little extra computational effort.
Figure 8(a) shows the locations of the nodes corresponding to 77 different snapshot solutions used to construct a reduced order model. These snapshots were generated via 7 executions of a modified form of Preconditioned Chebyshev BiCG, i.e., we consider 7 different fixed \(\mu_{2}^{*}\), equally spaced on \([a_{2},b_{2}]\). This requires the LU decompositions of 7 different matrices of size \(n\times n\). Figure 8(b) shows the percent relative error of the interpolated approximation, analogous to (3.16) constructed using Algorithm 1, for 400 pairs of \((\mu_{1},\mu_{2})\in[1,2]\times[1,2]\), different from the nodes corresponding to the snapshot solutions. We note that the percent relative error of the approximation is below \(1\%\) for all pairs \((\mu_{1},\mu_{2})\), indicating that the approximations are accurate. For the sake of comparison, Figures 9, 10 and 11 show both the finite element solutions and corresponding reduced order model solutions for 3 pairs of \((\mu_{1},\mu_{2})\). In general, accurate solutions can be produced on a larger parameter space with full grid sampling. Furthermore, the model can produce a variety of solutions.

Figure 4: Percent relative error of \(x^{m}(\mu_{1},\mu_{2})\) (3.17) to approximate (4.1) for 400 equidistant pairs of \((\mu_{1},\mu_{2})\), \(m=14\), \(n=112995\). Here accurate: \(<1.5\%\), reliable: \(<6\%\). Note, same model in both figures.
### Third simulation, parameter estimation with snapshots on a sparse grid

The third simulation concerns a parameter estimation problem, again using sparse grid based HOPGD [35]. Consider the Helmholtz equation given by
\[\left(\nabla^{2}+f_{1}(\mu_{1})+f_{2}(\mu_{2})\alpha(x)\right)u(x)=h(x),\qquad x\in\Omega, \tag{4.3a}\]
\[u(x)=0,\qquad x\in\partial\Omega, \tag{4.3b}\]
where \(\Omega=[0,1]\times[0,1]\), \(\alpha(x)=x_{1}\), \(f_{1}(\mu_{1})=\sin^{2}(\mu_{1})\), \(f_{2}(\mu_{2})=\cos^{2}(\mu_{2})\), \(\mu_{1}\in[0,1]\), \(\mu_{2}\in[0,1]\), and \(h(x)=\exp(-x_{1}x_{2})\). A discretization of (4.3) is of the form (1.1), where
\[A(\mu_{1},\mu_{2})\coloneqq A_{0}+f_{1}(\mu_{1})A_{1}+f_{2}(\mu_{2})A_{2}.\]

Figure 4.8: Simulation for approximating (4.2). Here percent relative error \(<1.5\%\) (accurate), \(n=112693\), \(m=16\), sampling on a full grid.

Figure 4.9: Approximation to (4.2) with \(m=16\), \(n=112693\), \(0.340\%\) relative error (accurate), \(\mu_{1}=1\), \(\mu_{2}=1\).
We consider a parameter estimation problem, i.e., we have a solution to the discretized problem, where the parameters \((\mu_{1},\mu_{2})\) are unknown. In particular, the solution is obtained using backslash in Matlab.
Figures 12(a), 12(b), and 12(c) together show a method similar to the one proposed in [37] for approximating this problem, performed by constructing 3 successive HOPGD models with sparse grid sampling. This simulation executes a modified form of Preconditioned Chebyshev BiCG 6 times, requiring the LU factorizations of 6 different matrices of dimension \(n\times n\) to generate the 39 snapshot solutions used to create the 3 reduced order models. Once the models have been constructed, an interpolation is performed, followed by the minimization of (3.18). Table 4.1 shows the percent relative error in the estimated values of \(\mu_{1}\) and \(\mu_{2}\) for each run. Each execution of our strategy leads to a better estimation of the pair of parameters. As before, if the decomposition fails to reach a certain level of accuracy on a set of snapshots, a new set of approximations in the same parameter space can be generated with little extra computational effort.

Figure 11: Approximation to (4.2) with \(m=16\), \(n=112693\), \(0.946\%\) relative error (accurate), \(\mu_{1}=2\), \(\mu_{2}=2\).
\begin{table}
\begin{tabular}{c|r|r} & rel err \% \(\mu_{1}\) & rel err \% \(\mu_{2}\) \\ \hline First run, \(m=34\) & 41.87417 & 97.40212 \\ \hline Second run, \(m=25\) & 2.66555 & 105.09263 \\ \hline Third run, \(m=7\) & 0.05706 & 1.69921 \\ \hline \end{tabular}
\end{table}
Table 4.1: Percent relative error for parameter estimation in \(\mu_{1}\) and \(\mu_{2}\) for solution to (4.3), \(n=10024\).

Figure 4.12: Parameter estimation for solution to (4.3), \(n=10024\).

## 5 Numerical simulations of a parameterized advection-diffusion equation

The advection-diffusion equation can be used to model particle transport, for example the distribution of air pollutants in the atmosphere; see [20, 46]. In particular, we consider the parameterized advection-diffusion equation given by
\[\frac{\partial}{\partial t}u(x,t)=f_{1}(\mu_{1})\frac{\partial^{2}}{\partial x^{2}}u(x,t)+f_{2}(\mu_{2})\frac{\partial}{\partial x}u(x,t), \tag{5.1}\]
where \(f_{1}(\mu_{1})=1+\sin(\mu_{1})\), \(f_{2}(\mu_{2})=10+\cos(\mu_{2})+\pi\mu_{2}\) and \(x\in[0,1]\), \(t\in[0,T]\). The boundary conditions \(u(0,t)=0\), \(u(1,t)=0\) are enforced, as well as the initial condition \(u_{0}(x)=u(x,0)=\sin(\pi x)\). Here the function evaluation \(f_{1}(\mu_{1})\) is referred to as the diffusion coefficient, and the value \(f_{2}(\mu_{2})\) is the advection parameter. Discretizing (5.1) in space, with a finite difference scheme, and in time, with the implicit Euler method, gives a parameterized linear system of the form (1.1), where
\[A(\mu_{1},\mu_{2})\coloneqq A_{0}+f_{1}(\mu_{1})A_{1}+f_{2}(\mu_{2})A_{2}.\]
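A possible finite difference assembly of this parameter-affine system is sketched below; the grid size and time step are illustrative and differ from the values used in the experiments.

```
import numpy as np
import scipy.sparse as sp

# One-dimensional grid on (0, 1) with homogeneous Dirichlet boundary conditions.
n, dt = 999, 1e-4
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

D2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2   # second derivative
D1 = sp.diags([-1.0, 1.0], [-1, 1], shape=(n, n)) / (2.0 * h)      # first derivative (centred)

# Implicit Euler step (I - dt*(f1*D2 + f2*D1)) u_new = u_old gives the
# parameter-affine form (1.1): A(mu1, mu2) = A0 + f1(mu1)*A1 + f2(mu2)*A2.
A0 = sp.identity(n, format="csr")
A1 = -dt * D2
A2 = -dt * D1
b = np.sin(np.pi * x)              # initial condition u_0(x) = sin(pi x)

f1 = lambda mu1: 1.0 + np.sin(mu1)
f2 = lambda mu2: 10.0 + np.cos(mu2) + np.pi * mu2
A = lambda mu1, mu2: A0 + f1(mu1) * A1 + f2(mu2) * A2
print(A(0.1, 0.3).shape)
```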
Note, the solution to this linear system gives an approximation to (5.1) at a specific time-step. As in Section 4, we construct a reduced order model via a tensor decomposition [35], where the snapshots are generated efficiently with a modified version of the method Preconditioned Chebyshev BiCG [17]. Here the sampling is performed on a sparse grid in the parameter space. If the tensor decomposition is not successful on a given set of snapshots, a new set can be generated on the same parameter space with little extra computation. Similar model problems appear in, for example, [41], where the method PGD was used to construct approximate solutions to the parameterized PDE. Once the corresponding reduced order model (3.16) has been constructed, approximating the solutions for many \((\mu_{1},\mu_{2})\) can be done very cheaply; see Remark 3.4.
### First simulation, snapshots on a sparse grid
We construct approximations to (5.1) with Algorithm 1, where the sparse grid sampling is performed as displayed in Figure 2(a). Figure 1 shows the percent relative error for approximations to (5.1) for 400 different pairs \((\mu_{1},\mu_{2})\in[0,0.5]\times[0,0.5]\), where, specifically, we consider the solutions at \(t=0.01\). As in [36], approximate solutions with percent relative error below 6% are considered reliable, and approximations with percent relative error below 1.5% are considered accurate. Here the error is computed by comparing the approximation to the solution obtained using backslash in Matlab. The 9 snapshots used to construct the reduced order model here were generated with 2 executions of Chebyshev BiCG, requiring the LU decompositions of 2 matrices of dimension \(n\times n\).
Note, solutions to the advection-diffusion equation (5.1) are time-dependent. An all-at-once procedure could be utilized to approximate solutions to this equation at many different time-steps, though this approach would result in a larger number of unknowns and a longer simulation time. Alternatively, the decomposition could be modified to construct an approximation similar to the one in (3.3), separated in the time variable \(t\) as well as the parameters \(\mu_{1}\) and \(\mu_{2}\). Such an approach would, however, require specific testing. Furthermore, a decomposition of this form performed with HOPGD and sparse grid sampling in the parameter space may not be optimal [35].
### Second simulation, parameter estimation with snapshots on a sparse grid
Consider again a parameter estimation problem. Specifically, we have an approximation to \(u(x,t)\in\mathbb{R}^{n}\) in (5.1), where \(t=0.01\) is known, but the parameters \(\mu_{1}\) and \(\mu_{2}\) are not. This approximation corresponds to the solution to the discretized problem and is obtained using backslash in Matlab. Additionally, there is some
uncertainty in the measurement of the given solution. In this way, we express our observed solution \(u^{\mathrm{obs}}\in\mathbb{R}^{n}\) as
\[u^{\mathrm{obs}}\coloneqq u(x,t)+\varepsilon\Delta, \tag{5.2}\]
where \(\Delta\in\mathbb{R}^{n}\) is a random vector with \(\Delta\sim\mathcal{N}(0,I)\) and \(\varepsilon=10^{-2}\).
Figure 2 and Table 1 show the result of this parameter estimation problem, using a similar strategy to the one described in [37]. Here 3 successive HOPGD models are constructed from snapshots sampled on a sparse grid in the parameter space \((\mu_{1},\mu_{2})\in[0,0.5]\times[0,0.5]\). This simulation executes a modified version of Chebyshev BiCG 6 times, generating the 23 snapshots efficiently. Note, this requires the LU decompositions of 6 matrices of dimension \(n\times n\). After the third run of Algorithm 1, the estimated parameter is very close to the actual parameter. Note, the relative error in \(\mu_{1}\) and \(\mu_{2}\) after the third run of the simulation is of the same order of magnitude as the noise in the observed solution (5.2).
\begin{table}
\begin{tabular}{c|c|c} & rel err \% \(\mu_{1}\) & rel err \% \(\mu_{2}\) \\ \hline First run, \(m=50\) & 1.9004 & 16.1248 \\ \hline Second run, \(m=25\) & 4.8542 & 23.2504 \\ \hline Third run, \(m=15\) & 1.2313 & 4.2588 \\ \hline \end{tabular}
\end{table}
Table 1: Percent relative error for parameter estimation in \(\mu_{1}\) and \(\mu_{2}\) for noisy solution to (5.1), \(n=9999\).

## 6 Conclusions and future work

This work proposes a novel way to generate the snapshot solutions required to build a reduced order model. The model is based on an efficient decomposition of a tensor matrix, where the sampling is performed on a sparse grid. An adaptation of the previously proposed method Preconditioned Chebyshev BiCG is used to generate many snapshots simultaneously. Tensor decompositions may fail to reach a certain level of accuracy on a given set of snapshot solutions. Our approach offers a way to generate a new set of snapshots on the same parameter space with little extra computation. This is advantageous, as it is not possible to know a priori if a decomposition will converge or not on a given set of snapshots, and generating snapshots is computationally demanding in general. The reduced order model can also be used to solve a parameter estimation problem. Numerical simulations show competitive results.
In [35], a residual-based accelerator was used in order to decrease the number of iterations required by the fixed point method when computing the updates described in (3.10), (3.12), and (3.13). Techniques of this variety are especially effective when the snapshot solutions have a strong dependence on the parameters. Such a strategy could be directly incorporated into this work, though specific testing of the effectiveness would be required.
Figure 2: Parameter estimation for noisy solution to (5.1), \(n=9999\).

## 7 Acknowledgements

The Erasmus+ programme of the European Union funded the first author's extended visit to Trinity College Dublin to carry out this research. The authors thank Elias Jarlebring (KTH Royal Institute of Technology) for fruitful discussions and for providing feedback on the manuscript. Additionally, the authors wish to thank Anna-Karin Tornberg (KTH Royal Institute of Technology) and her research group for many supportive, constructive discussions.
## Appendix A Derivation of the update formulas, Algorithm 1
We are interested in the approximation \(X^{m}(\mu_{1},\mu_{2})\) as in (3.6), such that (3.8) is satisfied. Here an alternating directions algorithm is used, where \(X^{m-1}(\mu_{1},\mu_{2})\) as in (3.6c) is given, and we assume, after an initialization, \(F_{1}^{m}\coloneqq F_{1}^{m}(\mu_{1})\), \(F_{2}^{m}\coloneqq F_{2}^{m}(\mu_{2})\) are known. Equivalently, we seek the update \(\Phi_{n}^{m}\) in (3.9) via a least squares procedure. The left-hand side of (3.9) can be expressed equivalently as
\[\left(w\Phi_{n}^{m}F_{1}^{m}F_{2}^{m},\delta X\right)_{\mathcal{D}}=w\Phi_{n}^ {m}(F_{1}^{m})^{2}(F_{2}^{m})^{2}\int_{\mathcal{D}}\delta\Phi^{m}(x)\,dx\,d\mu_ {1}\,d\mu_{2}, \tag{15}\]
and the right-hand side can be written as
\[\left(wr_{m-1}(\mu_{1},\mu_{2}),\delta X\right)_{\mathcal{D}}=wr_{m-1}(\mu_{ 1},\mu_{2})F_{1}^{m}F_{2}^{m}\int_{\mathcal{D}}\delta\Phi^{m}(x)\,dx\,d\mu_{1 }\,d\mu_{2}, \tag{16}\]
with \(r_{m-1}\) as in (3.11). Thus, from equating (15) and (16), we obtain the overdetermined system
\[\left[\begin{matrix}F_{1}^{m}(\mu_{1}^{1})F_{2}^{m}(\mu_{2}^{*})I_{n}\\ \vdots\\ F_{1}^{m}(\mu_{1}^{*})F_{2}^{m}(\mu_{2}^{n_{2}})I_{n}\end{matrix}\right]\Phi_{ n}^{m}=\left[\begin{matrix}r_{m-1}(\mu_{1}^{1},\mu_{2}^{*})\\ \vdots\\ r_{m-1}(\mu_{1}^{*},\mu_{2}^{n_{2}})\end{matrix}\right], \tag{17}\]
where \(I_{n}\in\mathbb{R}^{n\times n}\) is the identity matrix of dimension \(n\), and the update for \(\Phi_{n}^{m}\) described in (3.10) is determined via the solution to the corresponding normal equations. Note, the linear system (17) contains function evaluations corresponding to all the nodes in \(\boldsymbol{\mu}\) (3.2).
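As a small numerical check with random stand-in data, a stacked system of the form (17) can be solved directly and compared with the closed form obtained from its normal equations:

```
import numpy as np

# Stack the scalar weights F_1^m F_2^m per node and the residual columns; the
# normal equations of the stacked system admit a closed-form solution for Phi.
rng = np.random.default_rng(1)
n, num_nodes = 6, 5
weights = rng.standard_normal(num_nodes)            # F_1^m(mu_1) F_2^m(mu_2) at each node
residuals = rng.standard_normal((n, num_nodes))     # columns r_{m-1}(mu_1, mu_2)

A = np.kron(weights[:, None], np.eye(n))            # stacked [F_1 F_2 I_n] blocks
rhs = residuals.T.reshape(-1)                       # stacked residual vectors
phi_lstsq, *_ = np.linalg.lstsq(A, rhs, rcond=None)

phi_normal = residuals @ weights / (weights @ weights)   # closed-form normal equations
print(np.allclose(phi_lstsq, phi_normal))                # True
```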
Assume now \(\Phi_{n}^{m}\), \(F_{2}^{m}\) in (3.6c) are known, and seek an update of \(F_{1}^{m}\). Rewriting the left-hand side of (3.9) yields
\[\left(w\Phi_{n}^{m}F_{1}^{m}F_{2}^{m},\delta X\right)_{\mathcal{D}}=w(\Phi_{n }^{m})^{T}(\Phi_{n}^{m})(F_{1}^{m})(F_{2}^{m})^{2}\int_{\mathcal{D}}\delta F_ {1}^{m}(\mu_{1})\,dx\,d\mu_{1}\,d\mu_{2}, \tag{18}\]
and the right-hand side is given by
\[\left(wr_{m-1}(\mu_{1},\mu_{2}),\delta X\right)_{\mathcal{D}}=w(r_{m-1}(\mu_ {1},\mu_{2}))^{T}(\Phi_{n}^{m})F_{2}^{m}\int_{\mathcal{D}}\delta F_{1}^{m}( \mu_{1})\,dx\,d\mu_{1}\,d\mu_{2}. \tag{19}\]
Approximations to \(F_{1}^{m}\) in (3.9) are found by computing the least squares solutions to the overdetermined systems
\[\left[\Phi_{n}^{m}F_{2}^{m}(\mu_{2}^{*})\right]F_{1}^{m}(\mu_{1}^{i})=\left[r _{m-1}(\mu_{1}^{i},\mu_{2}^{*})\right], \tag{20}\]
given in (3.12). Proceeding analogously to (18) and (19), updating \(F_{2}^{m}\) yields the overdetermined systems
\[\left[\Phi_{n}^{m}F_{1}^{m}(\mu_{1}^{*})\right]F_{2}^{m}(\mu_{2}^{j})=\left[r _{m-1}(\mu_{1}^{*},\mu_{2}^{j})\right], \tag{21}\]
with least squares solutions (3.13).
In practice, each of the vectors depicted in the approximation in Figure 1 is normalized. A constant is computed for each low-rank update, in a process analogous to the one described above. We leave this out of the derivation for the sake of brevity. This is consistent with the algorithm derived in [35].
2309.03404 | The Role of Communication and Reference Songs in the Mixing Process:
Insights from Professional Mix Engineers | Effective music mixing requires technical and creative finesse, but clear
communication with the client is crucial. The mixing engineer must grasp the
client's expectations, and preferences, and collaborate to achieve the desired
sound. The tacit agreement for the desired sound of the mix is often
established using guides like reference songs and demo mixes exchanged between
the artist and the engineer and sometimes verbalised using semantic terms. This
paper presents the findings of a two-phased exploratory study aimed at
understanding how professional mixing engineers interact with clients and use
their feedback to guide the mixing process. For phase one, semi-structured
interviews were conducted with five mixing engineers with the aim of gathering
insights about their communication strategies, creative processes, and
decision-making criteria. Based on the inferences from these interviews, an
online questionnaire was designed and administered to a larger group of 22
mixing engineers during the second phase. The results of this study shed light
on the importance of collaboration, empathy, and intention in the mixing
process, and can inform the development of smart multi-track mixing systems
that better support these practices. By highlighting the significance of these
findings, this paper contributes to the growing body of research on the
collaborative nature of music production and provides actionable
recommendations for the design and implementation of innovative mixing tools. | Soumya Sai Vanka, Maryam Safi, Jean-Baptiste Rolland, György Fazekas | 2023-09-06T23:45:09Z | http://arxiv.org/abs/2309.03404v3 | The Role of Communication and Reference Songs in the Mixing Process: Insights from Professional Mix Engineers+
###### Abstract
Effective music mixing requires technical and creative finesse, but clear communication with the client is crucial. The mixing engineer must grasp the client's expectations, and preferences, and collaborate to achieve the desired sound. The tacit agreement for the desired sound of the mix is often established using guides like reference songs and demo mixes exchanged between the artist and the engineer and sometimes verbalised using semantic terms. This paper presents the findings of a two-phased exploratory study aimed at understanding how professional mixing engineers interact with clients and use their feedback to guide the mixing process. For phase one, semi-structured interviews were conducted with five mixing engineers with the aim of gathering insights about their communication strategies, creative processes, and decision-making criteria. Based on the inferences from these interviews, an online questionnaire was designed and administered to a larger group of 22 mixing engineers during the second phase. The results of this study shed light on the importance of collaboration, empathy, and intention in the mixing process, and can inform the development of smart multi-track mixing systems that better support these practices. By highlighting the significance of these findings, this paper contributes to the growing body of research on the collaborative nature of music production and provides actionable recommendations for the design and implementation of innovative mixing tools.
## 1 Introduction
### Music Production and Mixing
Music production encompasses the entire life cycle of the creation of a song, from idea conception to distribution. The process involves writing the score, recording vocals and instruments, mixing the music, mastering the audio, and finally distributing it [1]; the distribution formats and media in turn influence the creation of different edits or versions of the song. Audio recordings are made in a studio using digital audio workstations (DAWs) and various other equipment. A mixing engineer then combines these recordings using knowledge of signal processing, sound, and music to produce a cohesive and well-balanced mix which is further mastered and enhanced for distribution.
Music mixing is the process of combining various audio recordings to create a polished, balanced, and artistically aligned finished product that elicits feelings and tells a story [2]. The mixing engineer adjusts technical and artistic elements, emphasising crucial components, balancing instrumentation, and employing techniques like gain staging, equalisation, panning, compression, and modulation effects to shape the sound [3].
### Technological Advancements and the Recording Industry
Technological advancements have transformed the landscape of music production. Multi-track recording, analogue synthesizers, DAWs, and software plugins have given artists and producers greater control, resulting in more intricate and detailed productions [4]. However, this has led to specialised roles such as recording engineer, producer, mixing engineer and mastering engineer, requiring extensive skills and knowledge [5]. The increased number of stakeholders in the production process demands teamwork and effective communication to achieve the desired goals [6].
In the past, professional music production required access to expensive studios and equipment. However, affordable recording equipment and software have democratised music production, allowing artists to create high-quality music at home. This accessibility has led to a wider variety of styles and genres, catering to different user groups [7; 8; 9]. Recent literature identifies three user types: amateurs, who are typically newcomers to the field without formal training [10], pro-ams, who are often skilled individuals without full professional support [11; 12; 13], and professionals, who are typically highly skilled paid engineers with industry expertise [9]. With high-fidelity recordings and polished productions becoming the norm, artists face pressure to create sonically impressive music [14]. Streaming platforms and digital distribution have intensified competition, prompting musicians to invest in complex production techniques for a unique and captivating sound. Production quality plays a significant role in gaining traction, yet there remains a gap between the accessibility of production tools and the expertise required to use them effectively.
### Intelligent Music Production and Automatic Mixing
Intelligent music production systems assist creators in navigating the complex tasks of audio production. These systems provide insights, suggestions and even full automation of control parameters in multitrack projects [15]. They benefit both amateurs and professionals, offering assistance and streamlining the process [16; 17].
Automatic mixing employs algorithms and intelligent systems to automate the audio mixing process [18]. It adjusts key audio parameters such as volume, panning, equalisation, compression, and spatial effects. Through the analysis of audio signals and the application of predefined rules or machine learning models, automatic mixing algorithms aim to create balanced and cohesive mixes. Various approaches have been explored [17; 19], including knowledge-engineered [20], machine learning-based [21], and deep learning-based methods [22; 23; 24; 25]. Most of these systems take raw tracks or stems as input and generate a mix as output.
While these approaches provide solutions within limits, they often lack user control to specify context and expectations. As music mixing allows for multiple possible mixes from the same stems or tracks, context plays a crucial role in determining the best fit for the client's needs [26]. Professionals and pro-ams have shown less interest in completely automatic services due to the generic nature of these systems [27; 28]. They prefer technology that allows for more control and contextual understanding to achieve desired results [26].
### Collaborative Practice-Informed Design for Smart Mixing Tools
Creating music involves multiple stakeholders who contribute their expertise to shape the final product. Clear communication and understanding between artists, recording engineers, mixing engineers, producers, and mastering engineers are crucial to realising the artist's vision. To design mixing systems that facilitate effective collaboration, our work explores the communication dynamics between artists and mixing engineers. These findings can inform the development of future smart mixing tools.
During the mixing process, the mixing engineer employs various methods to understand the artist's expectations. The artist uses semantic terms, verbal instructions, reference songs, and demo mixes to convey their needs. These media allow for developing a tacit agreement between the client and the mixing engineer about the context. The decisions made by the mixing engineer greatly impact the clarity, balance, spatialisation, emotion, and dynamics of the song. Collaborative work between the mixing engineer and artist is vital to achieving the desired results while incorporating their respective identities into the music. Our objective is to understand the communication process enabling this vital synergy, and explore how this knowledge might influence the design of smart mixing tools.
This study utilises semi-structured online interviews with five experienced mix engineers, followed by a questionnaire-based study to validate the findings. Through qualitative data analysis, we explore how clients express their expectations and how mixing engineers interpret and derive insights from these communication channels. The results of this study aim to inspire an interaction-centred approach to designing mixing tools.
## 2 Study Design and Methodology
The objective of this study is to investigate the exchange of ideas and expectations between clients and mixing engineers during the song-mixing process. We aimed to understand the informative cues and suggestions provided by clients to convey their objectives for the mix, how mixing engineers interpret and incorporate these cues into the mixing decisions, as well as how mixing engineers assess the completion of a mix. To achieve these goals, we implemented a multi-phase approach consisting of semi-structured interviews and a structured web-based questionnaire.
### Phase 1: Semi-Structured Interviews
Semi-structured interviews are qualitative research methods that involve asking a pre-determined set of open questions, allowing for further exploration of responses [29]. In this phase of the study, we conducted one-to-one semi-structured interviews with professional mixing engineers to gain insights into their experiences and perspectives.
#### 2.1.1 Interview Structure
The interviews focused on exploring various aspects of communication, expectation exchange, and collaboration between mixing engineers and artists during the music-mixing process. Key themes explored during the interviews included the interaction with clients, the role of demo mixes and reference songs, the criteria for defining the completion of a mix, and the role of creativity in the mixing process. The interviews were conducted through recorded
online video calls, and the recordings were later transcribed for analysis purposes.
#### 2.1.2 Participant Profile
To ensure the validity of the investigation and mitigate selectivity issues, five professional mixing engineers were invited for interviews [30]. By involving multiple experts in the field, we wanted to gain diverse perspectives and minimize the risk of bias or individual preferences from skewing the results. These engineers were chosen based on their expertise and extensive experience in the field of music mixing. At the time of the interviews, they all had more than five years of experience, possessed advanced mixing skills, and actively worked on music projects for themselves and other artists. Their qualifications included formal training, payment for their work, and having their work published using traditional discographies and music streaming platforms. Two of the interviewed engineers are female. Three out of five engineers worked in Europe, one in Asia, and one in America/Europe. Three of them specialise in pop, rock, and metal with the other two mixing for jazz, ensemble, folk, classical and world music.
#### 2.1.3 Data Analysis
The interview recordings were transcribed, and thematic analysis was conducted using an inductive, iterative, and grounded approach [31][32] in the MAXQDA software 1. A series of codes were generated to identify different themes across the interviews, which were refined over several iterations. These themes were then used to inform the design of the questionnaire. This involved identifying key themes, patterns, and categories of information related to the exchange of ideas and expectations between the client and the mixing engineer. The analysis process focused on extracting meaningful insights and understanding the underlying factors influencing the mixing process. Figure 1 provides a concise summary of the extracted themes and sub-themes, highlighting their interconnectedness. The central theme revolves around the critical role of communication and collaboration throughout the mixing process. Furthermore, we have identified three distinct stages where this communication and collaboration occur: prior to mixing, during the mixing process, and upon completion of the mix. Figure 1 effectively illustrates how various sub-themes within different categories intertwine, ultimately forming a comprehensive understanding of the subject.
Footnote 1: [https://www.maxqda.com/](https://www.maxqda.com/)
### Phase 2: Questionnaire-based Study
After the analysis of interviews, a structured web-based questionnaire was designed to both validate and expand upon the insights gathered. The qualitative analysis of the interview transcripts allowed us to structure the inferences into questions that could be asked to a wider group of mixing engineers for greater validity.
For example, one key inference from the interviews was that mixing engineers often utilize reference songs from genres different from the ones they are working on. To explore this further, we included specific questions in the questionnaire to assess the frequency of this practice among a larger pool of engineers. For example:
"Can you describe some cases where you would consider reference songs from another Genre?"
Figure 1: Themes and sub-themes extracted from the thematic analysis for the interviews
"Consider a situation where you like how an instrument is processed in a song from another genre [not the genre of the song you are working on] and you want to adapt this to the instrument of your song."
By including these questions, we aimed to gather quantifiable data and gain a broader perspective on the communication and collaboration dynamics between clients and mixing engineers in the song-mixing process. The questionnaire took into account a diverse range of mixing engineers to ensure a more comprehensive understanding of the topic, incorporating insights and themes derived from the initial interviews.
#### 2.2.1 Questionnaire Structure
To ensure comparability across participants, we structured the questionnaire in a standardised manner [33]. It consisted of three sections. The first section collected general information about participants' musical practices, expertise and skills. The second section focused on standard practices, client interaction, and the use of demo mixes and reference songs in the mixing workflow. The third section explored the participants' understanding of the information provided by reference songs and demo mixes, as well as the role of creativity in mixing practice. The questionnaire included 33 questions, which were mostly multiple-choice or multi-select, with some optional long-answer questions. It was hosted online using Microsoft Forms and took approximately 15-20 minutes to complete.
#### 2.2.2 Participant Profile
The questionnaire was distributed among professional mixing engineers within our personal network. A total of 22 participants with varied experiences took part in the study. Figure 2 illustrates the experience levels of the participants, with all of them having more than 3 years of production experience. The majority of respondents identified their mixing skills as advanced. They worked on a range of 1-10 to over 100 songs per year. The participants consisted of professional or pro-am mixing engineers who worked on their own music as well as projects for other artists. The participants were based out of Asia, Europe, and America. About 95% of the participants were male. About 50% of them worked in the pop, rock, electronic, and metal genres, however, we also had engineers who specialised in classical, hip-hop, jazz, folk, and world music in decreasing order.
#### 2.2.3 Data Analysis
For the questionnaire data, a mixed methods approach was employed due to the nature of the data collected. Qualitative data was processed using Thematic Analysis [31, 32], while quantitative data was analysed using descriptive statistical methods in Microsoft Excel. This approach provided a comprehensive analysis of both qualitative and quantitative aspects of the data.
Fig. 2: Mixing Engineer’s profile for Phase 2 of the study
Fig. 3: Flow of communication and collaboration during the mixing process
## 3 Results
The following section presents results from both study phases, categorised into three stages: pre-mixing, during mixing, and mix completion. The findings from both the interviews and the questionnaire are discussed together, encompassing both quantitative data from the questionnaire and qualitative insights from the interviews. An overview of this is shown in Figure 3. The results of our research indicate a strong correlation between effective communication and collaboration and the quality and alignment of the final product.
### Pre-Mixing
Effective communication with the client is an important part of the mixing process. It is important for the mix engineer to understand the client's expectations and preferences, and to work with them to achieve the desired sound. When asked what mixing means to them, engineers said,
"_It's all a thing about communication and trying to understand what the client wants."_
"_Their hopes, I would say."_
Engineers often set up an initial consultation session with the artist where they try to get a sense of the artist's vision for the mix. The session involves discussions about the sound of individual elements of the mix as well as the mix as a whole.
As seen in Figure 4, the artist uses different mediums to convey their expectations. The artist may describe the desired sound using verbal instructions, and semantic terms, provide reference songs, and offer a rough mix (also called a demo mix) as a guide. Semantics are words, linguistic units, or phrases applied to capture the meaning and interpretation of various sonic elements of the mix. Previous work has explored the role of semantics in music production and how it is used to describe the sound and expected transformation for the sound [34; 35; 36].
"_They give you some description of how they would like the mix to be. Yeah, sometimes it's just a verbal reference in terms of you know, hey have you heard that song, maybe something on that space something in this space, or it could just be like how we are having a chat... I need it very bright. So, usually, we tend to think, then I try to understand from him what he means by bright. Usually, it tends to be like he needs a lot of highs in the sound. You know high frequencies."_
The use of the term "bright" to describe the sound is a common example of how semantics can be applied to convey specific characteristics. In this context, when the client describes the sound as "bright," they are likely referring to a sonic quality that emphasizes higher frequencies or has a sense of clarity and brilliance.
Ultimately, the mix engineer should work closely with the artist to ensure that the final mix meets their expectations and achieves the desired sound.
### During Mixing
Mixing is a process of carving the sound of the final mix that emotes and tells the intended story. The mix engineer begins by loading the raw tracks or stems into the DAW and organising them in a way that is easy to navigate and understand. This may involve labelling the tracks and grouping similar ones together. Next, the engineer listens to the project and takes note of any issues that need to be addressed. They may listen to the project multiple times, as well as the rough mix and reference songs provided by the client to get a sense of the desired direction for the mix. After this initial preparation, the engineer uses the gathered direction to make decisions on the technical aspects of mixing, such as gain staging, panning, and fixing any frequency masking and noise or clicks. They then proceed to add more creative elements such as reverb, delay, and distortion. All of these decisions are made in alignment with the vision for the mix.
#### 3.2.1 Inspiration from Previous Work
Mix engineers often have their own techniques and approaches to mixing. Figure 5 shows that some engineers prefer to start a project from scratch, while others find it useful to borrow ideas or settings from past projects.
This is true in three cases:
1. Artist-specific sonic preference: Some artists have a preference for a certain sonic identity for elements in their mix. This could be unique to the artist's identity. In these cases, the artist might request the engineer to take inspiration from previous songs/projects of the artist.
"_For example, I was mixing a song for somebody and that artist really liked the way her voice sounded in one particular mix. Next time a very different song was done but, she wanted the vocals to sound just the way it was before. So, I did turn back to what I had done and you know present it, and that worked for that artist."_
Fig. 4: Ways in which expectations about the mix are conveyed
Fig. 5: Situations when engineers import settings from the previous project
This highlights the importance of understanding the client's preferences and using past projects as a reference to achieve their desired sound.
2. Album-specific sonic preference: It is a practice to maintain the cohesiveness of sound across an album. Hence, if an engineer is working on all the songs in an album, they generally establish a sound for a song on the album and then take inspiration from that for mixing the rest of the songs. _"It also depends, if I'm mixing just one song or a whole album and if I'm mixing a whole album versus just like one song, I will typically ask the artist for a song, that they'd like me to start with to establish the overall sonic characteristics of the album.... their overall concept for the album needs to be cohesive. So, I will,...mix one song, make sure it's going in the right direction, and then use that to lead me into every other song and start to borrow from that first mix and bring things over other mixes and change them accordingly._" Importing settings from other tracks on the album can also be a time-saving technique, especially if the tracks were recorded in the same acoustic space. _"So quite often when you're mixing a whole album, the first thing you do is if you got the same drum kit and it was all recorded in the same room. Yeah, just start with the same drum sound on every song and then do the creative part."_
3. Processing-specific sonic preference: Often mix engineers borrow sound processing ideas from previous projects when they feel that the current mixing project can benefit from a starting point as the sonic sound goal is similar to something they have done previously.
It is important to remember, however, that every song is unique and may require its own approach and treatment. In the coming subsections, we will elaborate further on how elements like demo mixes and reference songs are used to aid the process of mixing a song.
#### 3.2.2 Demo Mix (Rough Mix)
Rough mix or demo mix are terms used to describe an unfinished coarse mix of a song that is completed by adjusting gain and pan levels and basic processing on some of the elements. Our studies (Table 1) showed that about 77% of the participant mix engineers were likely to use demo mixes and more than 50% of the time, they received a rough mix from the client. It is typically used to give the mix engineer an idea of the artist's vision for the mix and can be helpful in providing a sense of direction for the mix as also presented in Figure 6
_"For me a demo, it's not finished. It's more like this is the start of how we're going to." "But a demo song as well could be really helpful for me to understand where the thing should go."_
However, it's important to note that the demo mix is often not perfect and may require adjustments to get to the final mix.
_"In 99.9% of the cases, it's really bad. But most of the clients would give you some sort of a demo or something that they make themselves so that you get a direction of what they want to do."_
The rough mix is sometimes done by the recording engineer, producer or the artist themselves. However, demo mixes are often rough and may not be fully representative of the desired sound, so it is important for the mix engineer to have a conversation with the client to understand their expectations and preferences. The client might ask the engineer to stick with the balance, panning, or processing on certain instruments or elements as shown in Figure 6.
_"Usually I'm mixing from a rough mix as well. They do a rough mix and I usually ask them the question very early. How important is this rough mix to you? Is this part of your vision or do you want me to go in the opposite direction and do something." "I will speak with the composer or the client you know what are you wanting me to listen to from the demo mix? So, sometimes they will tell me that this is the producer's mix, so don't change the balance of it. I want the balance to be the same."_
The artist may ask the mix engineer to focus on certain elements of the rough mix, such as balance or panning, or to preserve certain elements of the rough mix.
#### 3.2.3 Reference Songs
Apart from demo mixes, most mix engineers often also rely on reference songs to guide them in the mixing process as shown in Table 1. A reference track is a song from another artist to use as a benchmark for varying elements in your own productions. Moreover, 90% of the engineers confirmed that they use more than one reference song as seen from Figure 8.

| Directors | Status | Usage (in %) | From client (in %) |
| --- | --- | --- | --- |
| Demo Mix | Yes | 27 | 24 |
| Demo Mix | Sometimes | 50 | 38 |
| Demo Mix | No | 23 | 38 |
| Reference Songs | Yes | 45 | 14 |
| Reference Songs | Sometimes | 45 | 38 |
| Reference Songs | No | 9 | 48 |

Table 1: Likelihood of usage of various directors in the mixing process.
Figure 6: Information derived from Demo mixes
#### 3.2.3.1 Directions from reference song
When asked what the use of reference songs is in the mixing workflow, engineers described (as seen in Figure 9):
1. Helped understand the client's vision
2. Get a sense of direction
3. A pointer for the sound of the final mix: This may include understanding the balance, panorama, and processing of certain instruments as shown in Figure 7.
Reference songs are commonly used by mix engineers as a direction for their work.
_"Most often a reference song acts as a direction sign on the road. It helps me get a sense of expectation and direction."_ The reference song either brings out the same target emotion, has a similar target semantic descriptor (for example the song sounds warm, bright, etc), has the same target sonic and dynamic sound or has a similar compositional structure. In addition to providing a sense of direction, reference songs can also be used for A/B testing to see how far the engineer has progressed with the mix, and to help composers and artists communicate their vision to the engineer by presenting examples in the form of a reference song. However, it is important to note that while reference songs are often chosen for their ability to evoke a certain feeling or bring out a specific emotion, they are typically not used to replicate the mix exactly as seen from a strong disagreement in Figure 12. _"I'm not specifically interested in my mix sounding exactly the same. I'm interested in whether my mix feels the same."_
#### 3.2.3.2 Finding a reference song
We also found that about 50% of the time, clients send mixing engineers reference songs. As seen in Figure 10, clients send reference songs about 42% of the time; when the client does not, engineers find them on their own. Also, 78% of the engineers confirmed that once they find reference songs, they pass them on to the client to confirm that the songs align with their vision, as seen from Figure 8. However, for certain purposes of usage like the ones described below, engineers may not get feedback from the client.
_"Depends on the purpose of the reference track. If the reference track is aimed at the overall sound and artistic vision of the artist i.e. a genre-specific reference track, then yes. If it's related to being able to use a reference track that I know has a well-balanced mix, then there's no need to discuss this with the artist as their judgment on that part of the process is not always necessary."_
_"It depends on why I'm using the reference. For overall vibe, balance, etc. it's always good to get a client's input. More often than not I'm using a reference for some smaller detail (e.g. exactly how two sounds interact in the frequency spectrum), and in these cases, I don't bother getting a client's input."_
#### 3.2.3.3 Choosing a reference song
Choosing a reference song is an important part of the mixing process. Reference songs are generally chosen based on the purpose they intend to serve.
1. Director for the entire mix: Reference songs can serve as a guide for the engineer in order to achieve the desired sound and emotion for their mix. _"I'm definitely listening to the ambience and spatiality. I'm listening to compression. How punchy is it? Or is it quite a sort of mellow in the overall league? I'm not listening to specific EQs, but I'm listening to an overall EQ curve."_ The referenced song should either have a similar sound to the final mix, which could mean it has the same frequency profile, dynamics, and general processing, or brings out the same emotion. _"It was you know very emotive and I had a reference in my mind that in this particular song, it should sound that breezy or that lush. So, I went back to that reference song in my head and looked it up and saw how is it sounding in that song and then I got a few clues and then I try to incorporate that in my song."_
Fig. 7: Information derived from the reference songs
Fig. 8: Influence of the reference song in the mixing process
Fig. 9: The use of reference songs in the mixing workflow
2. Director for a particular element in the mix: Reference songs can give information about the processing or sound of one particular element or instrument in the mix. _"If someone sent you a track it can mean so much because yeah, if you, if you're listening to something, then it's very personal on the things you're going to check out. I mean, some people are only going to do it for the vocals or for the whole song, or for structure. So, then I'm going to have to start a talk about it I think."_
A reference song is often from the same genre and sounds similar to the final expected mix, however, it is possible for it to be from another genre. This holds true in two situations: _i)_ when the reference gives information about the processing of some specific element and not the mix as a whole, or _ii)_ when the artist simply wants to evoke a certain emotion and is not concerned with genre. _"I always try to go to a song from the original genre or an album that reminds me of that song. I got this kind of blues song which was really warm but they didn't give me any reference. I was listening to the rough mix and I was like this reminds me of something. I went through my dance collection of CDs and I picked up a couple of songs and sent them to the artists saying this rough mix reminds me of these songs. Do you approve to use them as references?"_
#### 3.2.3.4 Creativity
The process of mixing music is technical as well as creative. In our studies, we asked engineers whether reference songs restrict their creativity, with the majority (about 64% of the engineers) disagreeing, as indicated in Figure 8. They emphasised that reference songs only provide a direction (a suggestion), however, the goal can be achieved in multiple ways. Reference songs provide a valuable means to grasp the client's expectations. One engineer expressed that the sounds and instruments in their mix might be different from those in the reference, however, a reference might help them listen to the sound in the mix that they might have otherwise ignored.
_"I would put like about 20-30% of my own opinion into that. So, it doesn't really deviate too much from the actual reference, but then at the same time there's a bit of a character that is not the same as the thing that they bring in the reference song."_
_"For helping me inform specific decisions and resolve uncertainties and not a creative limitation."_
#### 3.2.3.5 Feedback
The process of mixing requires a continuous dialogue between the mix engineer and the client to ensure that the final product meets the client's expectations. One of the crucial elements of this dialogue is the feedback that the client provides on the mix in progress. As the mix engineer works on the track, they send different versions of the mix to the client for their review, making necessary adjustments and tweaks based on the client's feedback. This iterative process allows the engineer to make sure that the mix is on the right track, and that the client is happy with the direction it is heading.
_"I feel like it's done and I can send it to the artist where if that's the first mix, I'm obviously happy with it if there is a revision."_
### Mix Completion
Engineers agreed that the mix is ready whenever the mix sounds balanced, the engineer is happy, and the client is satisfied as shown in Figure 11.
_"Honestly, I just hit the play button and just sit back and I am looking for anything that would surprise me and that's a negative thing. I mean, if I just sit back and listen to it and... it just sounds unnatural or something like perhaps my attention...I know that it's not finished."_
_"I think it is done when the client says it's done because in your head it's never done. That's my experience. But, yes uh there comes a time a point in the mix where you're really happy."_
Engineers confirmed that often they compare the final mix against the given reference songs. Our studies also show that most often engineers try to create a mix that either sounds similar to the reference songs or evokes similar feelings as shown in Figure 12.
_"I feel it's like a middle point between the two like obviously when I feel like I'm happy with what I've done and I've got a reference I tried to compare it to the reference and if I feel what I have done is totally different, I'm like
Fig. 11: When do you know your mix is ready?
Fig. 10: How do you find a reference song?
okay and I try to make it a bit closer" "I'm not specifically interested in my mix sounding exactly the same. I'm interested in whether my mix feels the same." "I mean when it sounds good to me, generally balanced, and it has enough similarity to that reference."_
It is important to note that there isn't a one-size-fits-all answer to when a mix is ready. The decision ultimately depends on the individual engineer and the specific project they are working on. Some may rely heavily on reference songs to guide their mixing process, while others may be more focused on creating a mix that evokes the desired emotion or feeling. The mix is generally considered ready when it sounds smooth and balanced and evokes the same feeling as the reference song. Ultimately, a mix is ready when it meets the expectations of the engineer and the client and sounds good on a variety of playback devices.
## 4 Discussion and Conclusion
Our investigation highlights the crucial role of communication and collaboration in the mixing workflow. As mixing is a context-dependent task, it is essential for the mixing engineer to understand the artist's vision for the song and make decisions aligned with it to create the desired mix. The artist provides various mediums of information, including verbal instructions, semantic terms, reference songs and demo mixes, to convey their expectations. These directives provide a sense of direction to the mixing engineer, facilitating their decision-making process.
Our findings also shed light on the information that mixing engineers gather from each of these different directives, underscoring the dynamic nature of collaboration in the mixing workflow. Previous research has emphasised the importance of collaborative and co-creative smart mixing systems that promote interaction, collaboration, and trust among professionals [27; 28; 26]. This study further underscores the significance of incorporating context into the mixing workflow and presents a preliminary approach for designing such systems.
Context plays a crucial role in shaping mixing decisions and perceptions, encompassing factors such as style, genre, knowledge, time and technology. Moreover, context evolves dynamically throughout the song and the mixing process [26]. Our research demonstrates that context in music mixing can be incorporated through various means, including reference songs, demo mixes, verbal instructions and semantic terms. However, each of these directives presents its own set of challenges.
Previous studies on the utilisation of semantics to define mixing expectations have highlighted a significant challenge in generalisation, as there is no universally acknowledged and agreed-upon interpretation for specific semantic terms [36]. Additionally, the semantic terms may hold varied meanings across genres, production settings, and geographical regions [34]. Verbal instructions provided in written form also require further examination. The complex nature of the mixing process may not be fully captured by a few words, making it difficult to represent the vision of the mix accurately.
In our work, we conducted a detailed investigation into the use of reference songs and demo mixes as mediums of context. The results demonstrate that reference songs have the ability to effectively convey the abstract nature of mixing style, capturing the desired emotions and sonic qualities sought by the artist. Similar findings have been observed in the music recording process regarding the use of reference songs. Ongoing research exploring the use of reference sound and reference songs for audio effects [37] and mixing style transfer [38; 25] has shown promising success in capturing context and the user's intentions.
The future success of AI in music production depends on the ability of these systems to support interaction. This interaction can be divided into two phases: before the mixing process and after the mixing process. An AI-based mixing system that enables smart mixing needs to be provided with context on the input side. This context can be provided in the form of reference songs, demo mixes, verbal instructions or semantic descriptors. Once the context is provided, the system should be able to interpolate and generate a mix that represents an understanding of the context. Furthermore, on the output side, the system should allow for fine-tuning the mix to achieve the desired sound. This is akin to iterating based on feedback provided in the collaborative dynamics of mixing. The system should either provide means for alteration and fine-tuning of the output or offer a way to receive feedback to further improve the result in the desired direction.
In future work, our focus will be on investigating both subjective and objective evaluation methods to assess the system's effectiveness in capturing and incorporating contextual information provided into the output. Additionally, we aim to delve deeper into certain concepts that emerged in this study but were not extensively explored. One such concept is the notion of mixing style and its variations across different genres and artistic styles.
Figure 12: Comparing reference songs against mix
Overall, our study underscores the importance of effective communication and collaboration in the mixing process. It provides insights into the different forms of communication employed by artists and the valuable information gained by mixing engineers from these channels. By understanding and incorporating these dynamics, future AI-based mixing systems can enhance the interaction and collaboration between artists and engineers, hopefully leading to improved outcomes in the music production process.
## 5 Acknowledgment
We express our sincere gratitude to the JAES reviewers for providing valuable feedback on our work. We are truly appreciative of the contributions made by all the mixing engineers and participants who took part in the interviews and studies and shared their valuable knowledge with us. We extend our thanks to Steinberg's research and development team for their unwavering support and honest feedback throughout this project.
This work is funded and supported by UK Research and Innovation [grant number EP/S022694/1] and Steinberg Media Technologies GmbH under the AI and Music postgraduate research program at the Centre for Digital Music, QMUL.
|
2309.13617 | Regularising the Cauchy problem for Laplace's equation by fractional
operators | In this paper we revisit the classical Cauchy problem for Laplace's equation
as well as two further related problems in the light of regularisation of this
highly ill-conditioned problem by replacing integer derivatives with fractional
ones. We do so in the spirit of quasi reversibility, replacing a classically
severely ill-posed PDE problem by a nearby well-posed or only mildly ill-posed
one. In order to be able to make use of the known stabilising effect of
one-dimensional fractional derivatives of Abel type we work in a particular
rectangular (in higher space dimensions cylindrical) geometry. We start with
the plain Cauchy problem of reconstructing the values of a harmonic function
inside this domain from its Dirichlet and Neumann trace on part of the boundary
(the cylinder base) and explore three options for doing this with fractional
operators. The two other related problems are the recovery of a free boundary
and then this together with simultaneous recovery of the impedance function in
the boundary condition. Our main technique here will be Newton's method. The
paper contains numerical reconstructions and convergence results for the
devised methods. | Barbara Kaltenbacher and William Rundell | 2023-09-24T12:16:06Z | http://arxiv.org/abs/2309.13617v1 | # Regularising the Cauchy problem for Laplace's equation by fractional operators
###### Abstract
In this paper we revisit the classical Cauchy problem for Laplace's equation as well as two further related problems in the light of regularisation of this highly ill-conditioned problem by replacing integer derivatives with fractional ones. We do so in the spirit of quasi reversibility, replacing a classically severely ill-posed PDE problem by a nearby well-posed or only mildly ill-posed one. In order to be able to make use of the known stabilising effect of one-dimensional fractional derivatives of Abel type we work in a particular rectangular (in higher space dimensions cylindrical) geometry. We start with the plain Cauchy problem of reconstructing the values of a harmonic function inside this domain from its Dirichlet and Neumann trace on part of the boundary (the cylinder base) and explore three options for doing this with fractional operators. The two other related problems are the recovery of a free boundary and then this together with simultaneous recovery of the impedance function in the boundary condition. Our main technique here will be Newton's method. The paper contains numerical reconstructions and convergence results for the devised methods.
**ams classification:** 35J25, 35R11, 35R30, 35R35, 65J20.
## 0 Introduction
As its name suggests, the Cauchy Problem for Laplace's equation has a long history. By the early-middle of the nineteenth century it was known that prescribing the values \(u\) on the boundary \(\partial\Omega\) of a domain \(\Omega\) where \(-\triangle u=0\) held, allowed \(u\) to be determined uniquely within \(\Omega\). There was a similar statement for "flux" or the value of the normal derivative: the so-called Dirichlet and Neumann problems. These problems held great significance for an enormous range of applications evident at that time and provided solutions that depended continuously on the boundary measurements. It was also recognised that frequently a case would arise where part of the boundary is inaccessible and no measurements could be made there. In compensation one could measure both the value and the flux at the accessible
part: the Cauchy problem. Since solutions of Laplace's equation can be considered as the real part of an underlying analytic function, analytic continuation still allowed uniqueness of the solution. However, the continuous dependence on the boundary data was lost; in fact in an extreme way.
A famous reference to this state of affairs dates from the beginning of the twentieth century when Hadamard singled such problems out as being "incorrectly set" and hence unworthy of mathematical study, as they had "no physical significance." The backwards heat problem and the Cauchy problem were the prime exhibits, [12, 13, 14].
By the middle of the twentieth century such problems had been shown to have enormous physical significance and could not be ignored from any perspective. Methods had to be found to overcome the severe ill-conditioning. During this period the subject was extended to general inverse problems and included a vast range of situations for which the inverse map is an unbounded operator. Examples of still foundational papers from this period, including applications, are [9, 31, 32, 33].
One of the popular techniques dating from this period is the _method of quasi-reversibility_ of Lattes and Lions, [28]. In this approach the original partial differential equation was replaced by one in which the "incorrect" data allowed a well-posed recovery of its solution. This new equation contained a parameter \(\epsilon\) that allowed for stable inversion for \(\epsilon>0\) but, in addition (in a sense that had to be carefully defined), solutions of the regularising equation converged to that of the original as \(\epsilon\to 0\). It is now recognised that the initially suggested choices of that time brought with them new problematic issues, either because of additional unnatural boundary conditions required or an operator whose solutions behaved in a strongly different manner from the original, offsetting any regularising amelioration that it offered. Thus, the method came with a basic and significant challenge: finding a "closely-related" partial differential equation, depending on a parameter \(\epsilon\), that could use the data in a well-posed manner for \(\epsilon>0\) and also be such that its solutions converged to those of the original equation as \(\epsilon\to 0\).
In other words, the central issue in using quasi-reversibility is in the choice of the regularising equation. Here we will follow recent ideas for the backwards heat problem and replace the usual derivative in the "difficult" direction by a fractional derivative. In the parabolic case this was a time fractional derivative and one of the first papers taking this direction was [30]. It was later shown in [19] that the effectiveness of the method and the choice of the fractional exponent used strongly depended on both the final time \(T\) and the frequency components in the initial time function \(u_{0}(x)\). This led to the current authors proposing a multi-level version with different fractional exponents depending on the frequencies in \(u_{0}\), which of course had also to appear in the measured final time and were thus identifiable, [21]. This "split frequency" method will also be used in the current paper.
The advantage here is that such fractional operators arise naturally. The diffusion equation in \((x,y)\)-coordinates results from a diffusive process in which the underlying stochastic process arises from sampling through a probability density function \(\psi\). If \(\psi\) has both finite mean and variance then it can be shown that the long term limit approaches Brownian motion resulting in classical derivatives. This can be viewed as a
direct result of the central limit theorem. Allowing a finite mean but an infinite variance can lead directly to fractional derivatives, [22].
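As a purely illustrative aside (our addition, not part of the original discussion), this mechanism is easy to observe numerically: a random walk whose jumps have finite mean but infinite variance spreads faster than the Brownian \(\sqrt{n}\) rate, which is the stochastic signature of a fractional (rather than classical) derivative in the limiting equation. All concrete parameters in the sketch below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_walkers, n_steps = 2000, 500

# finite-variance (Gaussian) jumps: by the central limit theorem the walk
# approaches Brownian motion, and its spread grows like sqrt(n_steps)
gauss_walk = rng.normal(size=(n_walkers, n_steps)).sum(axis=1)

# heavy-tailed jumps with tail exponent 1.5: finite mean, infinite variance;
# the scaling limit is an alpha-stable process governed by a fractional derivative
signs = rng.choice([-1.0, 1.0], size=(n_walkers, n_steps))
jumps = 1.0 + rng.pareto(1.5, size=(n_walkers, n_steps))
heavy_walk = (signs * jumps).sum(axis=1)

def iqr(x):
    hi, lo = np.percentile(x, [75, 25])
    return hi - lo

print("interquartile spread, Gaussian jumps    :", iqr(gauss_walk))
print("interquartile spread, heavy-tailed jumps:", iqr(heavy_walk))
```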
However, while the use of time fractional derivatives and their behaviour in the resulting partial differential equations is now well-understood, the same cannot be said to the same degree for the case of space fractional derivatives. This notwithstanding, the standard derivative can be identified as a limiting case of a fractional one. This makes such an operator a natural candidate for a quasi-reversibility operator.
In our case if \(\Omega\) is a rectangle with the top side inaccessible but data can be measured on the other three sides then a basic problem is to recover the solution \(u(x,y)\) by also measuring the flux \(\frac{\partial u}{\partial y}\) at, say, the bottom edge. In the usual language we have Dirichlet data on the two sides and Cauchy data on the bottom. We will consider this problem as Problem 1 in Section 1.
However there is a further possibility: the top side may be a curve \(\ell(x)\) and we also do not know this curve - and want to do so. This is a classical example of a free-boundary value problem and a typical, and well studied, example here is of corrosion to a partly inaccessible metal plate. This is our Problem 2 (cf. Section 2).
In addition, while the original top side was, say, a pure conductor or insulator with either \(u=0\) or \(\frac{\partial u}{\partial\nu}=0\) there, this has now to be re-modelled as an impedance condition where the impedance parameter is also likely unknown as a function of \(x\). Recovery of both the boundary curve and the impedance coefficient is the topic of Problem 3 in Section 3.
Related to this are obstacle problems for elliptic problems in a domain \(\Omega\) that seek to recover an interior object \(D\) from additional boundary data. This comes under this same classification, albeit with a different geometry. The boundary of \(D\) can be purely conductive, or purely insulating, or satisfy an impedance condition with a perhaps unknown parameter. The existing literature here is again extensive. We mention the survey by Isakov, [17] which includes not only elliptic but also parabolic and hyperbolic equation-based problems. Other significant papers from this time period are [1, 25, 8, 34, 4]; a very recent overview on numerical methods for the Cauchy problem with further references can be found in [7].
For our purposes we wish to take advantage of the geometry described earlier where we are able, in some sense, to separate the variables and treat each of the differential operator's components in a distinct manner.
The structure of the paper is as follows. Each of the three sections 1, 2, 3 first of all contains a formulation of the problem along with the derivation of a reconstruction method and numerical reconstruction results. In Section 1, this is a quasi reversibility approach based on fractional derivatives; in Sections 2, 3 dealing with nonlinear problems, these are regularised Newton type methods. Sections 1, 3 also contain convergence results. In particular, in Section 3 we verify a range invariance condition on the forward operator that allows us to prove convergence of a regularised frozen Newton method.
## 1 Problem 1
**Problem 1.1**.: _Given \(f\), \(g\) in a region \(\Omega\)_
\[\begin{split}&-\triangle u=-\triangle_{x}u-\partial_{y}^{2}u=0\text{ in }\Omega\times(0,\overline{l})\\ & u(\cdot,y)=0\text{ on }\partial\Omega\times(0,\overline{l})\\ & u(x,0)=f(x)\,,\partial_{y}u(x,0)=g(x)\quad x\in\Omega\end{split} \tag{1}\]
_find \(u(x,y)\) in the whole cylinder \(\Omega\times(0,\overline{l})\)._
This is a classical inverse problem going back to before Hadamard [14] and there exists a huge amount of literature on it. For a recent review and further references, see, e.g., [7].
Expansion of \(u(\cdot,y)\), \(f\), \(g\) with respect to the eigenfunctions \(\phi_{j}\) (with corresponding eigenvalues \(\lambda_{j}\)) of \(-\triangle_{x}\) on \(\Omega\) with homogeneous Dirichlet boundary conditions yields
\[u(x,y)=\sum_{j=1}^{\infty}u_{j}(y)\phi_{j}(x)\,,\quad f(x)=\sum_{j=1}^{\infty }f_{j}\phi_{j}(x)\,,\quad g(x)=\sum_{j=1}^{\infty}g_{j}\phi_{j}(x)\,,\]
where for all \(j\in\mathbb{N}\)
\[u_{j}^{\prime\prime}(y)-\lambda_{j}u_{j}(y)=0\quad y\in(0,\overline{l})\,,\quad u_{j}(0)=f_{j}\,,\quad u_{j}^{\prime}(0)=g_{j}\,. \tag{2}\]
Thus
\[u(x,y)=\sum_{j=1}^{\infty}a_{j}\phi_{j}(x)\,, \tag{3}\]
where
\[\begin{split} a_{j}&=f_{j}\cosh(\sqrt{\lambda_{j}} y)+g_{j}\frac{\sinh(\sqrt{\lambda_{j}}y)}{\sqrt{\lambda_{j}}}\\ &=\frac{\sqrt{\lambda_{j}}f_{j}+g_{j}}{2\sqrt{\lambda_{j}}}\exp (\sqrt{\lambda_{j}}y)+\frac{\sqrt{\lambda_{j}}f_{j}-g_{j}}{2\sqrt{\lambda_{j} }}\exp(-\sqrt{\lambda_{j}}y).\end{split} \tag{4}\]
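To make the instability quantitative, the following minimal sketch (our illustration only, not part of the paper's experiments) evaluates (4) for the model case \(\Omega=(0,\pi)\), where \(\lambda_{j}=j^{2}\) and \(\phi_{j}(x)=\sqrt{2/\pi}\sin(jx)\), and prints the factor by which a perturbation of the Neumann datum \(g_{j}\) is amplified at height \(y\).

```python
import numpy as np

# Illustrative evaluation of (4) on Omega = (0, pi), where lambda_j = j^2.
def a_j(f_j, g_j, lam, y):
    """Exact coefficient (4) of the harmonic continuation for one frequency."""
    r = np.sqrt(lam)
    return f_j * np.cosh(r * y) + g_j * np.sinh(r * y) / r

y, delta = 1.0, 1e-3          # height of interest and size of a perturbation in g_j
for j in [1, 5, 10, 20]:
    # the perturbation alone contributes delta * sinh(j*y)/j, roughly exp(j*y)/(2*j)
    err = a_j(0.0, delta, float(j**2), y)
    print(f"j = {j:2d}: a data error of {delta:g} in g_j grows to {err:.3e} at y = {y}")
```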
Here the negative Laplacian \(-\triangle_{x}\) on \(\Omega\) with homogeneous Dirichlet boundary conditions can obviously be replaced by an arbitrary symmetric positive definite operator acting on a Hilbert space, in particular by an elliptic differential operator with possibly \(x\) dependent coefficients on a \(d\)-dimensional Lipschitz domain \(\Omega\) with homogeneous Dirichlet, Neumann or impedance boundary conditions.
### Regularisation by fractional differentiation
Since the values of \(u\) have to be propagated in the \(y\) direction, starting from the data \(f\), \(g\) at \(y=0\), the reason for ill-posedness (as is clearly visible in the exponential amplification of noise in this data, cf. (4)), results from the \(y\)-derivative in the PDE. We thus consider several options of regularising Problem 1.1 by replacing the second order derivative with
respect to \(y\) by a fractional one, in the spirit of quasi reversibility [3, 10, 28, 35, 36, 37]. We note in particular [22, Sections 8.3, 10.1] in the context of fractional derivatives.
In order to make use of integer (0th and 1st) order derivative data at \(y=0\), we use the Djrbashian-Caputo (rather than the Riemann-Liouville) version of the Abel fractional derivative. This has a left- and a right-sided version defined by
\[{}_{0}D_{y}^{\beta}v=h_{2-\beta}\ast\partial_{y}^{2}v,\quad\overline{{}_{y}D_ {\overline{l}}^{\beta}v}^{\overline{l}}=h_{2-\beta}\ast\partial_{y}^{2} \overline{v}^{\overline{l}},\quad h_{2-\beta}(y)=\frac{1}{\Gamma(2-\beta)\,y^ {\beta-1}},\quad\overline{v}^{\overline{l}}(y)=v(\overline{l}-y)\]
for \(\beta\in(1,2)\), where \(\ast\) denotes the (Laplace) convolution. Note that the Laplace transform of \(h_{2-\beta}\) is given by \(\widehat{h}_{2-\beta}(s)=s^{\beta-2}\). Correspondingly, as solutions to initial value problems for fractional ODEs, Mittag-Leffler functions, as defined by (see, e.g., [22, Section 3.4])
\[E_{\alpha,\beta}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+\beta)} \quad z\in\mathbb{C},\]
for \(\alpha>0\), and \(\beta\in\mathbb{R}\) will play a role.
While using the spectral decomposition
\[u^{(\alpha)}(x,y)=\sum_{j=1}^{\infty}a_{j}^{(\alpha)}\phi_{j}(x)\,, \tag{5}\]
to approximate (3), (4) in the analysis, the computational implementation does not need the eigenvalues and eigenfunctions of \(-\triangle_{x}\) but relies on the numerical solution of fractional PDEs, for which efficient methods are available, see, e.g., [2, 18, 27, 29].
#### Left-sided Djrbashian-Caputo fractional derivative
Replacing \(\partial_{y}^{2}\) by \({}_{0}D_{y}^{2\alpha}\) with \(2\alpha\approx 2\) amounts to considering, instead of (2), the fractional ODEs
\[{}_{0}D_{y}^{2\alpha}u_{j}(y)-\lambda_{j}u_{j}(y)=0\quad y\in(0,\overline{l}) \,,\quad u_{j}(0)=f_{j}\,,\quad u_{j}^{\prime}(0)=g_{j},\]
whose solution by means of Mittag-Leffler functions (see, e.g., [22, Theorem 5.4]) yields
\[a_{j}^{(\alpha)}=f_{j}E_{2\alpha,1}(\lambda_{j}y^{2\alpha})+g_{j}yE_{2\alpha, 2}(\lambda_{j}y^{2\alpha}). \tag{6}\]
In view of the fact that \(E_{2,1}(z)=\cosh(\sqrt{z})\) and \(E_{2,2}(z)=\frac{\sinh\sqrt{z}}{\sqrt{z}}\), this is consistent with (4).
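A minimal numerical sketch of (6) is given below; it is our illustration and not the implementation used in the paper. The Mittag-Leffler function is evaluated by truncating its defining series, which is adequate only for moderate arguments (for large \(\lambda_{j}y^{2\alpha}\) a robust algorithm, e.g. based on quadrature of an integral representation, would be needed). The final lines verify that (6) coincides with (4) at \(\alpha=1\).

```python
import math

def mittag_leffler(z, alpha, beta):
    """E_{alpha,beta}(z) via its truncated defining series; adequate for moderate |z| only."""
    s, k = 0.0, 0
    while alpha * k + beta < 170.0:          # keep Gamma(alpha*k + beta) representable
        term = z**k / math.gamma(alpha * k + beta)
        s += term
        if k > 0 and abs(term) < 1e-16 * max(1.0, abs(s)):
            break
        k += 1
    return s

def a_j_left(f_j, g_j, lam, y, alpha):
    """Regularised coefficient (6) for the left-sided Djrbashian-Caputo derivative."""
    z = lam * y**(2 * alpha)
    return f_j * mittag_leffler(z, 2 * alpha, 1) + g_j * y * mittag_leffler(z, 2 * alpha, 2)

# sanity check: at alpha = 1, (6) must coincide with (4), since E_{2,1}(z) = cosh(sqrt(z))
lam, y, f_j, g_j = 4.0, 0.5, 1.0, 0.3
exact = f_j * math.cosh(math.sqrt(lam) * y) + g_j * math.sinh(math.sqrt(lam) * y) / math.sqrt(lam)
print(a_j_left(f_j, g_j, lam, y, alpha=1.0), exact)
```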
#### Right-sided Djrbashian-Caputo fractional derivative
Replacing \(\partial_{y}^{2}\) by \({}_{y}D_{\overline{l}}^{2\alpha}\) with \(2\alpha\approx 2\) corresponds to replacing (2) by
\[{}_{y}D_{\overline{l}}^{2\alpha}u_{j}(y)-\lambda_{j}u_{j}(y)=0\quad y\in(0, \overline{l})\,,\quad u_{j}(0)=f_{j}\,,\quad u_{j}^{\prime}(0)=g_{j}\]
together with
\[u_{j}(\overline{l})=\bar{a}_{j}\,,\quad u^{\prime}_{j}(\overline{l})=\bar{b}_{j}\,.\]
From the identity
\[{}_{y}D^{2\alpha}_{\overline{l}}u_{j}(\overline{l}-\eta)=(h_{2-2\alpha}*w^{\prime\prime}_{j})(\eta)\quad\text{ for }w_{j}(\eta)=u_{j}(\overline{l}-\eta)\text{ and }\widehat{h}_{2-2\alpha}(s)=s^{2\alpha-2}\]
we obtain the initial value problem
\[(h_{2-2\alpha}*w^{\prime\prime}_{j})(\eta)-\lambda_{j}w_{j}(\eta)=0\quad\eta\in(0,\overline{l})\,,\quad w_{j}(0)=\bar{a}_{j}\,,\quad w^{\prime}_{j}(0)=-\bar{b}_{j}\,.\]
Taking Laplace transforms yields
\[s^{2\alpha}\widehat{w}_{j}(s)-s^{2\alpha-1}\bar{a}_{j}+s^{2\alpha-2}\bar{b}_{ j}-\lambda_{j}\widehat{w}_{j}(s)=0\]
i.e.,
\[\widehat{w}_{j}(s)=\frac{s^{2\alpha-1}}{s^{2\alpha}-\lambda_{j}}\bar{a}_{j}- \frac{s^{2\alpha-2}}{s^{2\alpha}-\lambda_{j}}\bar{b}_{j}, \tag{7}\]
and for the derivative
\[\widehat{w^{\prime}_{j}}(s)=s\widehat{w}_{j}(s)-\bar{a}_{j}=\frac{\lambda_{j }}{s^{2\alpha}-\lambda_{j}}\bar{a}_{j}-\frac{s^{2\alpha-1}}{s^{2\alpha}- \lambda_{j}}\bar{b}_{j}. \tag{8}\]
From [22, Lemma 4.12] we obtain
\[\mathcal{L}(\eta^{k-1}E_{2\alpha,k}(\lambda\eta^{2\alpha}))(s)=\frac{s^{2\alpha-k}}{s^{2\alpha}-\lambda}\,,\ k\in\{1,2\}\,,\quad\mathcal{L}(\eta^{2\alpha-1}E_{2\alpha,2\alpha}(\lambda\eta^{2\alpha}))(s)=\frac{1}{s^{2\alpha}-\lambda}\,.\]
Inserting this into (7), (8) and evaluating at \(\eta=\overline{l}\) we obtain
\[f_{j}=E_{2\alpha,1}(\lambda_{j}\overline{l}^{2\alpha})\bar{a}_{j}-\overline{l}E_{2\alpha,2}(\lambda_{j}\overline{l}^{2\alpha})\bar{b}_{j}\] \[-g_{j}=\lambda_{j}\overline{l}^{2\alpha-1}E_{2\alpha,2\alpha}(\lambda_{j}\overline{l}^{2\alpha})\bar{a}_{j}-E_{2\alpha,1}(\lambda_{j}\overline{l}^{2\alpha})\bar{b}_{j}\,.\]
Resolving for \(\bar{a}_{j}\) and replacing \(\overline{l}\) by \(y\) we get
\[a^{(\alpha)}_{j}=\frac{f_{j}E_{2\alpha,1}(\lambda_{j}y^{2\alpha})+g_{j}yE_{2 \alpha,2}(\lambda_{j}y^{2\alpha})}{\left(E_{2\alpha,1}(\lambda_{j}y^{2\alpha}) \right)^{2}-\lambda_{j}y^{2\alpha}E_{2\alpha,2\alpha}(\lambda_{j}y^{2\alpha})E _{2\alpha,2}(\lambda_{j}y^{2\alpha})}. \tag{9}\]
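As a consistency check (our addition, not spelled out above): at \(\alpha=1\) one has \(E_{2,1}(z)=\cosh\sqrt{z}\) and \(E_{2\alpha,2\alpha}=E_{2\alpha,2}=E_{2,2}\) with \(E_{2,2}(z)=\frac{\sinh\sqrt{z}}{\sqrt{z}}\), so the denominator in (9) collapses to
\[\bigl(E_{2,1}(\lambda_{j}y^{2})\bigr)^{2}-\lambda_{j}y^{2}\bigl(E_{2,2}(\lambda_{j}y^{2})\bigr)^{2}=\cosh^{2}(\sqrt{\lambda_{j}}\,y)-\sinh^{2}(\sqrt{\lambda_{j}}\,y)=1\,,\]
and (9) indeed reduces to (4).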
#### Factorisation of the Laplacian
An analysis of the two one-sided fractional approximations of \(\partial_{y}^{2}\) does not seem to be possible since it would require a stability estimate for Mittag-Leffler functions with positive argument and index close to two. While convergence from below of the fractional to the integer derivative holds at any integer (thus also second) order, a stability estimate is not available. Therefore we look for a possibility to reduce the problem to one with first order \(y\) derivatives (and treat the inverse problem similarly to a backwards heat problem to take
advantage of recent work in this direction [21, 22]). One way to do so is to factorise the negative Laplacian so that the Cauchy problem becomes:
Given \(f\), \(g\) in
\[-\triangle u =-\triangle_{x}u-\partial_{y}^{2}u=(\partial_{y}-\sqrt{-\triangle_{ x}})(-\partial_{y}-\sqrt{-\triangle_{x}})u=0\mbox{ in }\Omega\times(0,\overline{l})\] \[u(\cdot,y) =0\mbox{ on }\partial\Omega\times(0,\overline{l})\] \[u(x,0) =f(x)\,,\partial_{y}u(x,0)=g(x)\quad x\in\Omega\]
find \(u(x,y)\).
More precisely, with \(u_{\pm}=\frac{1}{2}(u\pm\sqrt{-\triangle_{x}}^{-1}\partial_{y}u)\) we get the representation
\[u=u_{+}+u_{-}\]
where \(u_{+}\), \(u_{-}\) can be obtained as solutions to the subproblems
\[\begin{array}{l}\partial_{y}u_{+}-\sqrt{-\triangle_{x}}u_{+}=0\mbox{ in }\Omega\times(0,\overline{l})\\ u_{+}(\cdot,y)=0\mbox{ on }\partial\Omega\times(0,\overline{l})\\ u_{+}(x,0)=\frac{1}{2}(f(x)+\sqrt{-\triangle_{x}}^{-1}g(x))=:u_{+0}(x)\quad x \in\Omega\end{array} \tag{10}\]
and
\[\begin{array}{l}\partial_{y}u_{-}+\sqrt{-\triangle_{x}}u_{-}=0\mbox{ in }\Omega\times(0,\overline{l})\\ u_{-}(\cdot,y)=0\mbox{ on }\partial\Omega\times(0,\overline{l})\\ u_{-}(x,0)=\frac{1}{2}(f(x)-\sqrt{-\triangle_{x}}^{-1}g(x))\quad x\in\Omega\,.\end{array} \tag{11}\]
In fact, it is readily checked that if \(u_{\pm}\) solve (10), (11), then \(u=u_{+}+u_{-}\) solves (1). The numerical solution of the initial value problem (11) and of the final value problem for the PDE in (10) can be carried out stably and efficiently by combining an implicit time-stepping scheme (in the \(y\) variable) with methods recently developed for the solution of PDEs with fractional powers of the Laplacian; see, e.g., [5, 6, 15].
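To make the well-posed subproblem (11) concrete, the following minimal sketch (Python/NumPy, purely illustrative: the finite-difference Laplacian, grid sizes and Cauchy data are placeholder choices, not taken from the experiments reported here) marches (11) in the \(y\) direction by implicit Euler, with \(\sqrt{-\triangle_{x}}\) realised through an eigendecomposition:

```python
import numpy as np

# 1D Dirichlet Laplacian -d^2/dx^2 on (0,1), n interior grid points
n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

# sqrt(-Laplacian_x) and its inverse via the symmetric eigendecomposition
lam, V = np.linalg.eigh(A)
S = V @ np.diag(np.sqrt(lam)) @ V.T
S_inv = V @ np.diag(1.0 / np.sqrt(lam)) @ V.T

# illustrative Cauchy data and initial value u_-(x,0) = (f - sqrt(-Lap)^{-1} g)/2
f = np.sin(np.pi * x)
g = np.zeros_like(x)
u_minus = 0.5 * (f - S_inv @ g)

# implicit Euler in y for  d/dy u_- + S u_- = 0  on (0, l_bar): well-posed, no regularisation
l_bar, m = 1.0, 100
dy = l_bar / m
step = np.linalg.inv(np.eye(n) + dy * S)   # (I + dy*S)^{-1}, reused in every step
for _ in range(m):
    u_minus = step @ u_minus               # u_-^{k+1} = (I + dy*S)^{-1} u_-^k
```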
Since \(\sqrt{-\triangle_{x}}\) is positive definite, the second equation (11) is well-posed, so there is no need to regularise. The first one, (10), becomes after the change of variables \(t=\overline{l}-y\), \(w_{+}(t)=u_{+}(y)\), a backwards heat equation (but with \(\sqrt{-\triangle_{x}}\) in place of \(-\triangle\))
\[\begin{array}{l}\partial_{t}w_{+}+\sqrt{-\triangle_{x}}w_{+}=0\mbox{ in }\Omega\times(0,\overline{l})\\ w_{+}(\cdot,t)=0\mbox{ on }\partial\Omega\times(0,\overline{l})\\ w_{+}(x,\overline{l})=\frac{1}{2}(f(x)+\sqrt{-\triangle_{x}}^{-1}g(x))\quad x \in\Omega\end{array} \tag{12}\]
Regularizing (12) by using in place of \(\partial_{t}\) a fractional "time" derivative \({}_{0}D_{t}^{\alpha}\) with \(\alpha\approx 1\), \(\alpha<1\) (while leaving (11) unregularised) amounts to setting
\[a_{j}^{\alpha}=\frac{\sqrt{\lambda_{j}}f_{j}+g_{j}}{2\sqrt{\lambda_{j}}}\frac {1}{E_{\alpha,1}(-\sqrt{\lambda_{j}}y^{\alpha})}+\frac{\sqrt{\lambda_{j}}f_{j} -g_{j}}{2\sqrt{\lambda_{j}}}\exp(-\sqrt{\lambda_{j}}y)\,. \tag{13}\]
This approach can be refined by splitting the frequency range \((\lambda_{j})_{j\in\mathbb{N}}\) into subsets \((\{\lambda_{K_{i}+1},\ldots,\lambda_{K_{i+1}}\})_{i\in\mathbb{N}}\) and choosing the breakpoint \(K_{i}\) as well as the fractional order \(\alpha_{i}\) for each of these subsets according to a discrepancy principle. For details, see [21].
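A sketch of how (13) can be evaluated coefficient-wise is given below (Python; `mittag_leffler(z, a, b)` stands for any routine computing \(E_{a,b}(z)\), e.g. an implementation of Garrappa's algorithm, and is assumed rather than provided here):

```python
import numpy as np

def a_alpha(f_j, g_j, lam_j, y, alpha, mittag_leffler):
    """Regularised coefficient (13): the stable component exactly, the unstable one
    through 1/E_{alpha,1}(-sqrt(lambda_j) y^alpha) instead of exp(+sqrt(lambda_j) y)."""
    s = np.sqrt(lam_j)
    u_plus0 = 0.5 * (f_j + g_j / s)    # datum of the unstable subproblem (10)
    u_minus0 = 0.5 * (f_j - g_j / s)   # datum of the stable subproblem (11)
    E = mittag_leffler(-s * y**alpha, alpha, 1.0)
    return u_plus0 / E + u_minus0 * np.exp(-s * y)
```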
### Reconstructions
In this section we compare reconstructions with the three options (6), (9), (13). The latter was refined by the split frequency approach from [21] using the discrepancy principle for determining the breakpoints and differentiation orders. While this method is backed up by convergence theory, the same does not hold true for the options (6) and (9). Indeed, not even stability can be expected to hold from the known behaviour of the Mittag-Leffler functions with positive argument, in particular for (6). This becomes visible in the reconstructions in Figure 1. The differentiation orders for (6), (9) were taken as the smallest (thus most stable) ones obtained in (13).
The relative \(L^{2}\) errors over the whole domain were \(0.0275\) for (6), \(0.0135\) for (9), \(1.8597\cdot 10^{-4}\) for (13).
The split frequency factorised Laplace approach also worked well with much higher noise levels, as can be seen in Sections 2.1 and 3.1, where it is intrinsically used as part of the reconstruction algorithm. On the other hand, (6), (9) failed to provide reasonable reconstructions at higher noise levels, which is to be expected from a theoretical point of view.
### Convergence of the scheme (13)
As usual in regularisation methods, the error can be decomposed as
\[\|u-u^{\alpha,\delta}\|_{X}\leq\|u-u^{(\alpha)}\|_{X}+\|u^{(\alpha)}-u^{\alpha,\delta}\|_{X} \tag{14}\]
where \(u^{\alpha,\delta}\) is the actual reconstruction from noisy data \(f^{\delta}\), \(g^{\delta}\) leading to
\[u^{\delta}_{+0}=\tfrac{1}{2}(f^{\delta}(x)+\sqrt{-\triangle_{x}}^{-1}g^{ \delta}(x))\text{ with }\|u^{\delta}_{+0}-u_{+0}\|_{Y}\leq\delta. \tag{15}\]
Figure 1: Reconstructions from formulas (6) (blue dotted), (9) (red, dashed) and (13) (green, irregularly dashed) as compared to the actual value (black, solid) from data with \(1\) per cent noise at different distances from the Cauchy data boundary. Left: \(y=\tfrac{1}{3}\), Middle: \(y=\tfrac{2}{3}\), Right: \(y=1\).
Using \(X=L^{2}(0,\bar{l};H^{\sigma}(\Omega))\) as the space for the sought-after function \(u\), we can write the approximation error as
\[\begin{split}&\|u-u^{(\alpha)}\|_{L^{2}(0,\bar{l};H^{\sigma}( \Omega))}=\left(\int_{0}^{\bar{l}}\sum_{j=1}^{\infty}\lambda_{j}^{\sigma}\,|u_{ j}(y)-u_{j}^{(\alpha)}(y)|^{2}\,dy\right)^{1/2}\\ &=\left(\int_{0}^{\bar{l}}\sum_{j=1}^{\infty}|u_{+,j}(y)|^{2}\, \lambda_{j}^{\sigma}\,\frac{1}{E_{\alpha,1}(-\sqrt{\lambda_{j}}y^{\alpha})^{2 }}\,\Big{|}E_{\alpha,1}(-\sqrt{\lambda_{j}}y^{\alpha})-\exp(-\sqrt{\lambda_{j} }y)\Big{|}^{2}\,dy\right)^{1/2}\end{split} \tag{16}\]
where \(u_{+,j}^{\dagger}(y)=u_{+0,j}\exp(\sqrt{\lambda_{j}}y)\), and the propagated noise term as
\[\begin{split}\|u^{(\alpha)}-u^{\alpha,\delta}\|_{L^{2}(0,\bar{l} ;H^{\sigma}(\Omega))}&=\left(\int_{0}^{\bar{l}}\sum_{j=1}^{ \infty}\lambda_{j}^{\sigma}\,|u_{j}^{(\alpha)}(y)-u_{j}^{\alpha,\delta}(y)|^{ 2}\,dy\right)^{1/2}\\ &=\left(\int_{0}^{\bar{l}}\sum_{j=1}^{\infty}|u_{+0,j}-u_{+0,j}^{ \delta}|^{2}\,\lambda_{j}^{\sigma}\,\frac{1}{E_{\alpha,1}(-\sqrt{\lambda_{j}} y^{\alpha})^{2}}dy\right)^{1/2}\!\!.\end{split} \tag{17}\]
In view of (12), convergence follows similarly to corresponding results on regularisation by backwards subdiffusion [22, Section 10.1.3] using two fundamental lemmata
* on stability, estimating \(\frac{1}{E_{\alpha,1}(-\lambda y^{\alpha})}\) (Lemma 1.1) and
* on convergence, estimating \(\big{|}E_{\alpha,1}(-\sqrt{\lambda_{j}}y^{\alpha})-\exp(-\sqrt{\lambda_{j}}y) \big{|}\) (Lemma 1.2)
that we here re-prove to track dependence of constants on the final "time" \(\bar{l}\). This is important in view of the fact that as opposed to [22, Section 10.1.3], we consider a range of "final time values", that is, in our context, \(y\) values. In the following \(\hat{\lambda}\) serves as a placeholder for \(\sqrt{\lambda_{j}}\).
**Lemma 1.1**.: _For all \(\alpha\in(0,1)\),_
\[\frac{1}{E_{\alpha,1}(-\hat{\lambda}y^{\alpha})}\leq 1+\Gamma(1-\alpha)\hat{ \lambda}y^{\alpha} \tag{18}\]
Proof.: The bound (18) is an immediate consequence of the lower bound in [22, Theorem 3.25].
**Lemma 1.2**.: _For any \(\bar{l}>0\), \(\hat{\lambda}_{1}\geq 0\), \(\alpha_{0}\in(0,1)\) and \(p\in[1,\frac{1}{1-\alpha_{0}})\), there exists \(\tilde{C}=\tilde{C}(\alpha_{0},p,\bar{l})=\sup_{\alpha^{\prime}\in[\alpha_{0},1)}C(\alpha^{\prime},p,\bar{l})>0\) with \(C(\alpha^{\prime},p,\bar{l})\) as in (58), such that for any \(\alpha\in[\alpha_{0},1)\) and for all \(\hat{\lambda}>\hat{\lambda}_{1}\)_
\[\begin{split}&\|d_{\alpha}\|_{L^{\infty}(0,\bar{l})}\leq\tilde{C} \hat{\lambda}^{1/p}(1-\alpha),\hskip 56.905512pt\|d_{\alpha}\|_{L^{p}(0,\bar{l})} \leq\tilde{C}(1-\alpha)\\ &\text{for the function $d_{\alpha}:[0,\bar{l}]\to\mathbb{R}$ defined by $d_{\alpha}(y)=E_{\alpha,1}(-\hat{\lambda}\,y^{\alpha})-\exp(-\hat{\lambda}\,y)$.} \end{split} \tag{19}\]
Proof.: see Appendix A.
The estimates from Lemma 1.2 become more straightforward if the values of \(y\) are constrained to a compact interval not containing zero, as relevant for Problem 3.1. This also allows us to derive \(L^{\infty}\) bounds on \(E_{\alpha,1}(-\hat{\lambda}y^{\alpha})-\exp(-\hat{\lambda}y)\), which would not be possible without bounding \(y\) away from zero, due to the singularities of the Mittag-Leffler functions there.
**Lemma 1.3**.: _For any \(0<\underline{l}<\overline{l}<\infty\), \(\hat{\lambda}_{1}\geq 0\), \(\alpha_{0}\in(0,1)\) and \(p\in[1,\frac{1}{1-\alpha_{0}})\), there exist constants \(\tilde{C}=\tilde{C}(\alpha_{0},p,\overline{l})>0\) as in Lemma 1.2, \(\hat{C}=\hat{C}(\alpha_{0},\underline{l},\overline{l})>0\), such that for any \(\alpha\in[\alpha_{0},1)\) and for all \(\hat{\lambda}>\hat{\lambda}_{1}\)_
\[\begin{split}&\left\|d_{\alpha}\right\|_{L^{\infty}(\underline{l}, \overline{l})}\leq\tilde{C}(1-\alpha),\hskip 72.27pt\left\|\partial_{y}d_{ \alpha}\right\|_{L^{\infty}(\underline{l},\overline{l})}\leq\hat{C}\hat{ \lambda}(1-\alpha)\\ &\text{for the function }d_{\alpha}:[\underline{l},\overline{l}]\to\mathbb{R}\text{ defined by }d_{\alpha}(y)=E_{\alpha,1}(-\hat{\lambda}\,y^{\alpha})-\exp(-\hat{\lambda}\,y).\end{split} \tag{20}\]
Proof.: see Appendix A.
Applying Lemmas 1.1, 1.2 in (14), (16), (17), we obtain the overall error estimate
\[\begin{split}\|u-u^{\alpha,\delta}\|_{L^{2}(0,\overline{l};H^{ \sigma}(\Omega))}\leq&\|u-u^{(\alpha)}\|_{L^{2}(0,\overline{l};H^ {\sigma}(\Omega))}\\ &+\|u_{+0}^{\delta}-u_{+0}\|_{H^{\sigma}(\Omega)}+\tfrac{\overline {l}^{\alpha+1/2}}{\sqrt{2\alpha+1}}\Gamma(1-\alpha)\|u_{+0}^{\delta}-u_{+0}\|_ {H^{\sigma+1}(\Omega)},\end{split} \tag{21}\]
where we further estimate the approximation error
\[\begin{split}\|u-u^{(\alpha)}\|_{L^{2}(0,\overline{l};H^{\sigma} (\Omega))}\leq&\tilde{C}\big{(}(1-\alpha)\|u_{+}^{\dagger}\|_{L^ {2}(0,\overline{l};H^{\sigma+1/p}(\Omega))}\\ &+\overline{l}^{\alpha}\,(1-\alpha)\Gamma(1-\alpha)\|u_{+}^{ \dagger}\|_{L^{2}(0,\overline{l};H^{\sigma+1+1/p}(\Omega))}\big{)}.\end{split}\]
Under the assumption \(u_{+}^{\dagger}\in L^{2}(0,\overline{l};H^{\sigma+1+1/p}(\Omega))\), from Lebesgue's Dominated Convergence Theorem and uniform boundedness of \((1-\alpha)\Gamma(1-\alpha)\) as \(\alpha\nearrow 1\), as well as convergence to zero of \(E_{\alpha,1}(-\hat{\lambda}\,{}^{\cdot\alpha})-\exp(-\hat{\lambda}\,\cdot)\) as \(\alpha\nearrow 1\), we obtain \(\|u-u^{(\alpha)}\|_{L^{2}(0,\overline{l};H^{\sigma}(\Omega))}\to 0\) as \(\alpha\nearrow 1\).
In view of the fact that the data space in (15) is typically \(Y=L^{2}(\Omega)\), considering the propagated noise term, the \(H^{\sigma}\) and \(H^{\sigma+1}\) norms in estimate (21) reveal the fact that even when aiming for the lowest order reconstruction regularity \(\sigma=0\), the data needs to be smoothed.
Due to the infinite smoothing property of the forward operator, a method with infinite qualification is required for this purpose. We therefore use Landweber iteration for defining a smoothed version of the data \(u_{+0}^{\tilde{\delta}}=v^{(i_{*})}\) by
\[v^{(i+1)}=v^{(i)}-A(v^{(i)}-u_{+0}^{\delta})\,,\qquad v^{(0)}=0\,, \tag{22}\]
where
\[A=\mu(-\triangle_{x})^{-\tilde{\sigma}} \tag{23}\]
with \(\tilde{\sigma}\in\{\sigma,\sigma+1\}\geq 1\) and \(\mu>0\) chosen so that \(\|A\|_{L^{2}\to L^{2}}\leq 1\).
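In the eigenbasis of \(-\triangle_{x}\) the operator (23) acts diagonally, so the smoothing iteration (22) reduces to a componentwise damped fixed-point iteration. A minimal sketch (Python; variable names are illustrative) reads:

```python
import numpy as np

def landweber_smooth(d_coeff, lam, sigma_t, i_star, mu=None):
    """Spectral form of (22)-(23): v^{(i+1)} = v^{(i)} - mu * lam**(-sigma_t) * (v^{(i)} - d).

    d_coeff : eigen-coefficients of the noisy data u_{+0}^delta
    lam     : eigenvalues lambda_j of -Laplacian_x (positive)
    sigma_t : the exponent sigma~ in (23)
    i_star  : number of iterations, e.g. chosen according to (24)
    """
    if mu is None:
        mu = lam.min() ** sigma_t          # then ||A|| = mu * lam.min()**(-sigma_t) = 1
    a = mu * lam ** (-float(sigma_t))      # diagonal of A in the eigenbasis
    v = np.zeros_like(d_coeff)
    for _ in range(i_star):
        v = v - a * (v - d_coeff)          # iteration (22)
    return v                               # eigen-coefficients of the smoothed data
```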
For convergence and convergence rates as the noise level tends to zero, we quote (for the proof, see the appendix of [21]) a bound in terms of \(\|u_{+}(\cdot,l)\|_{L^{2}(\Omega)}\) for some fixed \(l\in(0,\overline{l})\), where \(u_{+}\) is the unstable component of the solution according to (10).
**Lemma 1.4**.: _A choice of_
\[i_{*}\sim l^{-2}\log\left(\frac{\|u_{+}(\cdot,l)\|_{L^{2}(\Omega)}}{\delta}\right) \tag{24}\]
_yields_
\[\begin{array}{l}\|u_{+0}-u_{+0}^{\tilde{\delta}}\|_{L^{2}(\Omega)}\leq C_{1} \delta\,,\\ \|u_{+0}-u_{+0}^{\tilde{\delta}}\|_{H^{\tilde{\sigma}}(\Omega)}\leq C_{2}\,l^{-1}\, \delta\,\sqrt{\log\left(\frac{\|u_{+}(\cdot,l)\|_{L^{2}(\Omega)}}{\delta} \right)}=:\tilde{\delta}\end{array} \tag{25}\]
_for some \(C_{1},C_{2}>0\) independent of \(l\) and \(\delta\)._
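Up to the unspecified proportionality constant in (24) (taken to be one below, purely for illustration), the corresponding stopping index for the smoothing iteration sketched above could be chosen as:

```python
import numpy as np

def stopping_index(norm_u_plus_l, delta, l):
    """Illustrative reading of (24): i_* ~ l^{-2} * log(||u_+(.,l)|| / delta)."""
    return max(1, int(np.ceil(l**-2 * np.log(norm_u_plus_l / delta))))
```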
Thus, using \(u_{+0}^{\tilde{\delta}}\) in place of \(u_{+0}\) in the reconstruction, we obtain the following convergence result.
**Theorem 1.1**.: _Let the exact solution \(u^{\dagger}\) of Problem 1.1 satisfy \(u_{+}^{\dagger}\in L^{2}(0,\overline{l};H^{\sigma+1+1/p}(\Omega))\) for some \(\sigma\geq 0\), \(p>1\) and let the noisy data satisfy (15) with smoothed data constructed as in Lemma 1.4. Further, assume that \(\alpha=\alpha(\tilde{\delta})\) is chosen such that \(\alpha(\tilde{\delta})\to 1\) and \(\Gamma(1-\alpha(\tilde{\delta}))\tilde{\delta}\to 0\) as \(\delta\to 0\). Then \(\|u-u^{\alpha(\tilde{\delta}),\delta}\|_{L^{2}(0,\overline{l};H^{\sigma}( \Omega))}\to 0\) as \(\delta\to 0\)._
Since \(\Gamma(1-\alpha)\sim(1-\alpha)^{-1}\) as \(\alpha\nearrow 1\), the condition \(\Gamma(1-\alpha(\tilde{\delta}))\tilde{\delta}\to 0\) means that \(\alpha(\tilde{\delta})\) must not converge to unity too fast as the noise level vanishes, a well-known condition in the context of regularisation of ill-posed problems.
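For instance, a choice such as \(\alpha(\tilde{\delta})=1-1/\log(1/\tilde{\delta})\) (for \(\tilde{\delta}\) small) is admissible: then \(\alpha(\tilde{\delta})\to 1\) while \(\Gamma(1-\alpha(\tilde{\delta}))\tilde{\delta}\approx\tilde{\delta}\log(1/\tilde{\delta})\to 0\) as \(\tilde{\delta}\to 0\).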
## 2 Problem 2
**Problem 2.1**.: _Given \(f\), \(g\) in_
\[\begin{array}{l}-\triangle u=0\mbox{ in }x\in(0,L)\,,\ y\in(0,\ell(x)) \\ B_{0}u(0,y)=0\,,\ y\in(0,\ell(0))\quad B_{L}u(L,y)=0\,,\ y\in(0,\ell(L))\\ u(x,0)=f(x)\,,u_{y}(x,0)=g(x)\quad x\in\Omega\end{array}\]
_find \(\ell:(0,L)\to(0,\bar{l})\) such that one of the following three conditions holds on the interface defined by \(\ell\)._
\[\begin{array}{l}(N)\quad\partial_{\nu}u=0\ :\quad 0=\partial_{\tilde{\nu}}u( x,\ell(x))=u_{y}(x,\ell(x))-\ell^{\prime}(x)u_{x}(x,\ell(x))\quad x\in(0,L) \mbox{ or}\\ (D)\quad u=0\ :\quad u(x,\ell(x))\quad x\in(0,L)\mbox{ or}\\ (I)\quad\partial_{\nu}u+\gamma u=0\ :\quad 0=\partial_{\tilde{\nu}}u(x,\ell(x))+ \sqrt{1+\ell^{\prime}(x)^{2}}\gamma(x)u(x,\ell(x))\quad x\in(0,L).\end{array} \tag{26}\]
(Note that in (26) \(\tilde{\nu}\) is the non-normalised outer normal direction, but for the zero level set this does not matter.)
A by now classical reference for this problem is [1].
Here we make the a priori assumption that the searched for domain is contained in the rectangular box \((0,L)\times(0,\overline{l})\).
The operators \(B_{0}\) and \(B_{L}\) determine the boundary conditions on the lateral boundary parts, which may also be of Dirichlet, Neumann, or impedance type.
To emphasise dependence on the parametrisation \(\ell\), we denote the domain as well as the fixed and variable parts of its boundary as follows
\[\begin{array}{l}D(\ell)=\left\{(x,y)\in(0,L)\times(0,\overline{l})\,:\,y\in(0, \ell(x))\right\},\\ \Gamma_{0}(\ell)=\left\{(x,\ell(x))\,:\,x\in(0,L)\right\}\\ \Gamma_{1}=(0,L)\times\left\{0\right\},\\ \Gamma_{2}(\ell)=\left\{0\right\}\times(0,\ell(0))\cup\left\{L\right\}\times( 0,\ell(L))\end{array} \tag{27}\]
(note that \(\Gamma_{2}(\ell)\) depends on \(\ell\) only weakly via its endpoints \(\ell(0)\) and \(\ell(L)\)). With this we can write the forward problem as
\[\begin{array}{c}-\triangle u=0\mbox{ in }D(\ell)\\ u=f\mbox{ on }\Gamma_{1}\\ Bu=0\mbox{ on }\Gamma_{2}(\ell)\\ (N)\;\;\;\partial_{\tilde{\nu}}u=0\qquad\qquad\mbox{ or }\\ (D)\;\;\;u=0\qquad\qquad\mbox{ or }\\ (I)\;\;\;\partial_{\nu}u+\gamma u=0\.\end{array} \tag{28}\]
We will split the full inverse problem into two subproblems: the linear, severely ill-posed Cauchy problem on \(D(\overline{l})\) (which is our Problem 1.1) and a well-posed (or perhaps mildly ill-posed) nonlinear problem of reconstructing the curve \(\ell\). Thus a straightforward approach for solving Problem 2.1 would be to first solve a regularised version of Problem 1.1 (e.g., in the way devised in Section 1) and then apply Newton's method to recover \(\ell\) from (26).
However, we follow a combined approach, writing Problem 2.1 as an operator equation with the total forward operator \(F\), and applying a Newton scheme, in which we make use of a regularised solution of Problem 1.1.
The forward operator is defined by
\[F:\ell\mapsto u_{y}(x,0)-g(x)\mbox{ where }u\mbox{ solves (28).}\]
Its linearisation is \(F^{\prime}(\ell)\underline{d\ell}=v_{y}|_{\Gamma_{1}}\), where \(v\) solves \(-\triangle v=0\) in \(D(\ell)\) with \(v=0\) on \(\Gamma_{1}\), \(Bv=0\) on \(\Gamma_{2}(\ell)\), and, on \(\Gamma_{0}(\ell)\), the linearised interface condition corresponding to the respective case in (26), namely \((N)\ \partial_{\tilde{\nu}}v(x,\ell(x))=\tfrac{d}{dx}\left(\underline{d\ell}(x)u_{x}(x,\ell(x))\right)\), \((D)\ v(x,\ell(x))=-u_{y}(x,\ell(x))\underline{d\ell}(x)\), or \((I)\ \partial_{\tilde{\nu}}v(x,\ell(x))+\sqrt{1+\ell^{\prime}(x)^{2}}\gamma(x)v(x,\ell(x))=(G(u,\ell)\underline{d\ell})(x)\),
where in the Neumann case (N) we can also use the alternative formulation \(\partial_{\tilde{\nu}}v(x,\ell(x))=\frac{d}{dx}\left(\underline{d\ell}(x)u_{x}(x, \ell(x))\right)=\frac{d}{dx}\left(\underline{d\ell}(x)\frac{1}{1+\ell^{\prime}( x)^{2}}\partial_{\tilde{\tau}}u(x,\ell(x))\right)\) and in the impedance case the boundary value function \(G\) is defined by
\[G(u,\ell)\underline{d\ell}:= \underline{d\ell^{\prime}}(x)\Big{(}u_{x}(x,\ell(x))-\frac{\ell^ {\prime}(x)}{\sqrt{1+\ell^{\prime}(x)^{2}}}\gamma(x)u(x,\ell(x))\Big{)}\] \[-\underline{d\ell}(x)\Big{(}u_{yy}(x,\ell(x))-\ell^{\prime}(x)u_ {xy}(x,\ell(x))+\sqrt{1+\ell^{\prime}(x)^{2}}\gamma(x)u_{y}(x,\ell(x))\Big{)}\] \[=: \tfrac{d}{dx}[\mathfrak{a}[\ell,u]\underline{d\ell}]-\mathfrak{ b}[\ell,u];\]
see Appendix B for more explicit formulas for \(\mathfrak{a}[\ell,u]\), \(\mathfrak{b}[\ell,u]\). Note that this is obtained in a similar manner to the formula for the shape derivative [34, equation (3.1)] (see also [16]) and the identity \(ds=\sqrt{1+\ell^{\prime}(x)^{2}}\,dx\) for the arclength parametrisation \(s\), but using \(\tilde{\nu}\) in place of \(\nu\), as well as
\[\vec{x}_{h}(x)=\left(\begin{array}{c}0\\ \underline{d\ell}(x)\end{array}\right)\,, \vec{\tilde{\nu}}(x)=\left(\begin{array}{c}-\ell^{\prime}(x)\\ 1\end{array}\right)\,, \vec{\nu}(x)=\frac{1}{\sqrt{1+\ell^{\prime}(x)^{2}}}\vec{ \tilde{\nu}}(x)\] \[\vec{\tilde{\tau}}(x)=\left(\begin{array}{c}1\\ \ell^{\prime}(x)\end{array}\right)\,, \vec{\tau}(x)=\frac{1}{\sqrt{1+\ell^{\prime}(x)^{2}}}\vec{ \tilde{\tau}}(x).\]
Thus, computation of a Newton step \(\underline{d\ell}=\underline{d\ell}^{(k)}\) starting from some iterate \(\ell^{(k)}=\ell\) amounts to solving the system
\[-\triangle z =0\text{ in }D(\ell)\] \[z =f\text{ on }\Gamma_{1}\] \[Bz =0\text{ on }\Gamma_{2}(\ell)\] \[z_{y} =g\text{ on }\Gamma_{1}\] \[(N) \partial_{\tilde{\nu}}z(x,\ell(x))=\tfrac{d}{dx}\left(\underline{ d\ell}(x)u_{x}(x,\ell(x))\right)\] or \[(D) z(x,\ell(x))=-u_{y}(x,\ell(x))\underline{d\ell}(x)\] \[(I) \partial_{\tilde{\nu}}z(x,\ell(x))+\sqrt{1+\ell^{\prime}(x)^{2} }\gamma(x)z(x,\ell(x))=\tfrac{d}{dx}[\mathfrak{a}[\ell,u]\underline{d\ell}]- \mathfrak{b}[\ell,u] \tag{30}\]
(note that \(z_{y}|_{\Gamma_{1}}-g=F(\ell)+F^{\prime}(\ell)\underline{d\ell}\) ).
If we solve (a regularised version of) the Cauchy problem on the rectangular hold-all domain
\[-\triangle\bar{z}=0\text{ in }D(\bar{l})=(0,L)\times(0,\bar{l})\] \[\bar{z}=f\text{ on }\Gamma_{1} \tag{31}\] \[B\bar{z}=0\text{ on }\Gamma_{2}(\bar{l})=\{0,L\}\times(0,\bar{l})\] \[\partial_{y}\bar{z}-g=0\text{ on }\Gamma_{1}\]
in advance, then by uniqueness of solutions to the Cauchy problem, \(z\) coincides with \(\bar{z}\) on \(D(\ell)\).
Therefore, in each Newton step it only remains to compute \(u^{(k)}=u\) from the well-posed mixed elliptic boundary value problem (28) with \(\ell=\ell^{(k)}\) and update as follows.
In the Neumann case,
\[(N)\ \ \ell^{(k+1)}(x) =\ell^{(k)}(x)+\frac{1}{u_{x}^{(k)}(x,\ell^{(k)}(x))}\int_{0}^{x} \partial_{\bar{\nu}}\bar{z}(\xi,\ell^{(k)}(\xi))\,d\xi \tag{32}\] \[=\ell^{(k)}(x)+\frac{1}{u_{x}^{(k)}(x,\ell^{(k)}(x))}\int_{0}^{x} \Bigl{(}\partial_{y}\bar{z}(\xi,\ell^{(k)}(\xi))-\ell^{(k)^{\prime}}(\xi) \partial_{x}\bar{z}(\xi,\ell^{(k)}(\xi))\Bigr{)}\,d\xi.\]
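A discrete version of the update (32) only requires trace evaluations of \(\bar{z}\) and \(u^{(k)}\) along the current curve and one cumulative quadrature; a minimal, unregularised sketch (Python; the callables are assumed to provide the needed traces and are not part of the paper's implementation) is:

```python
import numpy as np

def newton_step_neumann(x, ell, ell_prime, zbar_x, zbar_y, u_x):
    """One update (32). zbar_x, zbar_y, u_x: callables (x, y) -> array of trace values."""
    # normal-derivative residual of the Cauchy solution along the current curve
    integrand = zbar_y(x, ell) - ell_prime * zbar_x(x, ell)
    # cumulative trapezoidal rule for  int_0^x (...) d xi
    dx = np.diff(x)
    cum = np.concatenate(([0.0], np.cumsum(0.5 * dx * (integrand[1:] + integrand[:-1]))))
    return ell + cum / u_x(x, ell)
```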
If \(B\) denotes the lateral Dirichlet trace, then this also needs to be regularised, since due to the identity \(u(0,y)=0=u(L,y)\), the partial derivative \(u_{x}(\cdot,y)\) has to vanish at least at one interior point \(x\) for each \(y\). To avoid problems arising from division by zero, we thus solve a regularised version
\[(\ell^{(k+1)}-\ell^{(k)})=\operatorname{argmin}_{\underline{d \ell}}\int_{0}^{L}\Bigl{(}\partial_{\bar{\nu}}z(x,\ell(x))-\tfrac{d}{dx}[ \underline{d\ell}(x)u_{x}(x,\ell(x))]\Bigr{)}^{2}\,dx\\ +\frac{1}{\rho_{1}}\int_{0}^{L}\underline{d\ell}^{\prime}(x)^{2} \,dx+\rho_{2}(\underline{d\ell}(0)^{2}+\underline{d\ell}(L)^{2})\]
with a regularisation parameter \(\frac{1}{\rho_{1}}\) and a penalisation parameter \(\rho_{2}\) enforcing our assumption of \(\underline{d\ell}\) to be known at the boundary points.
In the Dirichlet case, the Newton step computes as
\[(D)\ \ \ell^{(k+1)}(x)=\ell^{(k)}(x)-\frac{\bar{z}(x,\ell^{(k)}(x))}{u_{y}^{(k)} (x,\ell^{(k)}(x))}. \tag{33}\]
With lateral Dirichlet conditions \(Bu=u=0\) we have \(u_{y}^{(k)}(0,\ell(0))=u_{y}^{(k)}(L,\ell(L))=0\) and so would have to divide by numbers close to zero near the endpoints. This can be avoided by imposing Neumann conditions \(Bu=\partial_{\nu}u=0\) on the lateral boundary. Still, the problem is mildly ill-posed and thus needs to be regularised for the following reason. In view of the Implicit Function Theorem, the function \(\ell\), being implicitly defined by \(u(x,\ell(x))=0\), has the same order of differentiability as \(u\). However, (33) contains an additional derivative of \(u\) as compared to \(\ell\). Obtaining a bound on \(u_{y}^{(k)}\) in terms of \(\ell^{(k)}\) from elliptic regularity, (cf., e.g., [11]) cannot be expected to be possible with the same level of differentiability.
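The Dirichlet update (33) is purely pointwise; the sketch below adds a crude clipping of the denominator (an illustrative safeguard only, not the regularisation strategy discussed above) to avoid division by values of \(u_{y}^{(k)}\) close to zero:

```python
import numpy as np

def newton_step_dirichlet(x, ell, zbar, u_y, floor=1e-3):
    """One update (33); zbar, u_y: callables (x, y) -> array of trace values."""
    denom = u_y(x, ell)
    # keep |denominator| >= floor, preserving its sign (zero is pushed to +floor)
    denom = np.where(denom >= 0, np.maximum(denom, floor), np.minimum(denom, -floor))
    return ell - zbar(x, ell) / denom
```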
In the impedance case, with
\[\mathfrak{a}[\ell,u](x) =\begin{cases}\frac{d}{dx}u(x,\ell(x))=u_{x}(x,\ell(x))&\text{if } \ell^{\prime}(x)=0\\ \frac{1}{\ell^{\prime}(x)}\Bigl{(}\frac{\gamma(x)}{\sqrt{1+\ell^{\prime}(x)^{ 2}}}u(x,\ell(x))+u_{y}(x,\ell(x)))\Bigr{)}&\text{otherwise}\end{cases}\] \[\mathfrak{b}[\ell,u](x) =\Bigl{(}-\sqrt{1+\ell^{\prime}(x)^{2}}\gamma(x)u_{y}(x,\ell(x))+ \tfrac{d}{dx}\Bigl{[}\frac{\ell^{\prime}(x)}{\sqrt{1+\ell^{\prime}(x)^{2}}} \gamma(x)u(x,\ell(x))\Bigr{]}\Bigr{)}\] \[=\Bigl{(}\frac{\ell^{\prime\prime}(x)}{\sqrt{1+\ell^{\prime}(x)^{ 2}}}\gamma(x)+\frac{\ell^{\prime}(x)}{\sqrt{1+\ell^{\prime}(x)^{2}}}\gamma^{ \prime}(x)+\gamma(x)^{2}\Bigr{)}u(x,\ell(x))\]
\[\phi(x) =\underline{d\ell}(x)\,\mathfrak{a}[\ell,u](x)\qquad a(x)=\tfrac{ \mathfrak{b}[\ell,u]}{\mathfrak{a}[\ell,u]}(x),\] \[b(x) =\partial_{\tilde{\nu}}\bar{z}(x,\ell^{(k)}(x))+\sqrt{1+\ell^{ \prime}(x)^{2}}\gamma(x)\bar{z}(x,\ell^{(k)}(x))\]
the Newton step amounts to solving \(\frac{d}{dx}\phi(x)-a(x)\phi(x)=b(x)\), which yields
\[(I)\ \ \ell^{(k+1)}(x)=\ell^{(k)}(x)-\tfrac{1}{\mathfrak{a}[\ell^{( k)},u^{(k)}](x)}\Big{\{}\exp\Bigl{(}-\int_{0}^{x}\tfrac{\mathfrak{b}[\ell^{(k)},u^{(k)} ]}{\mathfrak{a}[\ell^{(k)},u^{(k)}]}(s)\,ds\Bigr{)}\underline{d\ell}(0) \mathfrak{a}[\ell^{(k)},u^{(k)}](0)\\ +\int_{0}^{x}b(s)\exp\Bigl{(}-\int_{s}^{x}\tfrac{\mathfrak{b}[ \ell^{(k)},u^{(k)}]}{\mathfrak{a}[\ell^{(k)},u^{(k)}]}(t)\,dt\Bigr{)}\,ds \Big{\}}. \tag{34}\]
See Appendix B for details on the derivation of this formula. Also here, due to the appearance of derivatives of \(u\) and \(\ell\), regularisation is needed.
In Section 3, we will prove convergence of a regularised frozen Newton method for simultaneously recovering \(\ell\) and \(\gamma\).
**Remark 2.1**.: _(Uniqueness) In the Neumann case \(\partial_{\tilde{\nu}}u=0\) on \(\Gamma_{0}(\ell)\), the linearisation \(F^{\prime}(\ell)\) is not injective since \(F^{\prime}(\ell)\underline{d\ell}=0\) only implies that \(\underline{d\ell}(x)u_{x}(x,\ell(x))\) is constant. There is nonuniqueness in the nonlinear inverse problem \(F(\ell)=0\) as well, as the counterexample \(f(x)=\sin(\pi x/L)\), \(g(x)=0\), \(u(x,y)=f(x)\) shows; all horizontal lines \(\ell(x)\equiv c\) for \(c\in\mathbb{R}^{+}\) solve the inverse problem._
_In the Dirichlet case \(u=0\) on \(\Gamma_{0}(\ell)\), linearised uniqueness follows from the formula \(z(x,\ell(x))=-u_{y}(x,\ell(x))\underline{d\ell}(x)\) provided \(u_{y}\) does not vanish on an open subset \(\Gamma\) of \(\Gamma_{0}(\ell)\). The latter can be excluded by Holmgren's theorem, since \(-\triangle u=0\), together with the conditions \(u=0\), \(u_{y}=0\) on \(\Gamma_{0}(\ell)\) defines a noncharacteristic Cauchy problem and therefore would imply \(u\equiv 0\) on \(D(\ell)\), a contradiction to \(f\neq 0\). Full uniqueness can be seen from the fact that if \(\ell\) and \(\tilde{\ell}\) solve the inverse problem, then on the domain enclosed by these two curves (plus possibly some \(\Gamma_{2}\) boundary part), \(u\) satisfies a homogeneous Dirichlet Laplace problem and therefore has to vanish identically. This, on the other hand would yield a homogeneous Cauchy problem for \(u\) on the part \(D(\min\{\ell,\tilde{\ell}\})\) that lies below both curves and thus imply that \(u\) vanishes identically there. Again we would then have a contradiction to \(f\neq 0\). This uniqueness proof would also work with Neumann or impedance instead of Dirichlet conditions on the lateral boundary \(\Gamma_{2}\)._
_For uniqueness in the impedance case, see also [26, Theorem 2.2]._
### Reconstructions
Figure 2 shows reconstructions of \(\ell(x)\) at \(1\) per cent noise. Here the actual curve was defined by
\[\ell(x)=\ell^{\dagger}(x):=\overline{l}(0.8+0.1\cos(2\pi x)) \tag{35}\]
with \(\overline{l}=0.1\), and the starting value was far from the actual curve (taken to be at \(y=0.2\overline{l}\)).
The left panel shows the case of Dirichlet conditions on the interface. No further progress in convergence took place after iteration 5. The lateral boundary conditions were of homogeneous Neumann type in order to avoid singularities near the corners.
In the middle and rightmost panels of Figure 2 we show only the first (green) and second (blue) iterates, since the latter already corresponds to effective convergence. In the leftmost panel we additionally show iteration 3 (navy blue), iteration 4 (purple) and iteration 5 (red), the latter being effective convergence.
The relative errors at different noise levels are given in the second column of Table 1.
The same runs were also done with a larger distance \(\overline{l}\) and the actual curve still according to (35). This resulted in the relative errors shown in column three of Table 1.
The middle and right panels of Figure 2 show reconstructions of \(\ell(x)\) in the case of Neumann and impedance conditions (\(\gamma=0.1\)) on the interface. The starting value was again relatively far from the actual curve and there was again \(1\%\) noise in the data. No further progress in convergence took place after iteration 2. For the relative errors at different noise levels, see the last two columns of Table 1.
\begin{table}
\begin{tabular}{|c||c|c|c|c|} \hline & \multicolumn{4}{|c|}{\(\|\ell-\ell^{\dagger}\|_{L^{2}(\Omega)}/\|\ell^{\dagger}\|_{L^{2}(\Omega)}\)} \\ \hline noise level & (D), \(\overline{l}=0.1\) & (D), \(\overline{l}=0.5\) & (N), \(\overline{l}=0.1\) & (I), \(\overline{l}=0.1\) \\ \hline \(1\%\) & 0.0038 & 0.0158 & 0.0018 & 0.0077 \\ \(2\%\) & 0.0084 & 0.0205 & 0.0093 & 0.0087 \\ \(5\%\) & 0.0198 & 0.0380 & 0.0144 & 0.0110 \\ \(10\%\) & 0.0394 & 0.0735 & 0.0563 & 0.0158 \\ \hline \end{tabular}
\end{table}
Table 1: Relative errors for reconstructions of \(\ell\) at several noise levels.
Figure 2: **Recovery of \(\ell(x)\) from data with \(1\%\) noise: Dirichlet case (left), Neumann case (middle), impedance case (right).**
## 3 Problem 3
**Problem 3.1**.: _Given two pairs of Cauchy data \((f_{1},g_{1})\), \((f_{2},g_{2})\) in_
\[-\triangle u_{j}= 0 \text{ in }x\in(0,L)\,,\ y\in(0,\ell(x))\] \[B_{0}u_{j}(0,y)= 0 \quad y\in(0,\ell(0))\] \[B_{L}u_{j}(L,y)= 0 \quad y\in(0,\ell(L))\] \[u_{j}(x,0)=f_{j}(x)\,,\ u_{jy}(x,0)= g_{j}(x) \quad x\in\Omega\]
_for \(j=1,2\), find \(\ell:(0,L)\to(0,\bar{l})\) and \(\tilde{\gamma}:(0,L)\to(0,\infty)\) such that_
\[\begin{split} 0=B_{\ell,\tilde{\gamma}}u_{j}:=& \partial_{\tilde{\nu}}u_{j}(x,\ell(x))+\tilde{\gamma}(x)u_{j}(x, \ell(x))\\ =& u_{jy}(x,\ell(x))-\ell^{\prime}(x)u_{jx}(x,\ell( x))+\tilde{\gamma}(x)u_{j}(x,\ell(x))\quad x\in(0,L)\end{split} \tag{36}\]
_where \(\tilde{\gamma}\) is the combined coefficient defined by \(\tilde{\gamma}(x)=\sqrt{1+\ell^{\prime}(x)^{2}}\gamma(x)\). (Note that \(\tilde{\nu}\) is the non-normalised outer normal direction, but for the zero level set this does not matter.)_
The setting is as in Problem 2.1 otherwise and using the notation (27) we rewrite the forward problem as
\[\begin{split}-\triangle u_{j}=& 0\quad\text{ in }D(\ell)\\ u_{j}=& f_{j}\quad\text{ on }\Gamma_{1}\\ Bu_{j}=& 0\quad\text{ on }\Gamma_{2}(\ell)\\ B_{\ell,\tilde{\gamma}}u_{j}=& 0\quad\text{ on }\Gamma_{0}(\ell).\end{split} \tag{37}\]
Note that \(u_{j}\) actually satisfies the Poisson equation on the hold-all domain \(D(\bar{l})\) with a fixed upper boundary defined by \(\bar{l}\geq\ell\). We also point out that using the weak form of the forward problem
\[\begin{split}& u-f_{j}\in H^{1}_{\diamondsuit}(D(\ell)):=\{w\in H^{1}(D(\ell))\,:\,Bw=0\text{ on }\Gamma_{2}(\ell)\,,\ w=0\text{ on }\Gamma_{1}\}\\ &\text{ and for all }w\in H^{1}_{\diamondsuit}(D(\ell))\,:\ \int_{0}^{L}\Bigl{(}\int_{0}^{\ell(x)}(\nabla u\cdot\nabla w)(x,y)\,dy+ \tilde{\gamma}(x)(u\cdot w)(x,\ell(x))\Bigr{)}\,dx=0\end{split}\]
no derivative of \(\ell\) nor \(\tilde{\gamma}\) is needed for computing \(u\).
The forward operator \(F=(F_{1},F_{2})\) is defined by
\[F_{j}:\ell\mapsto u_{jy}(x,0)-g_{j}(x)\text{ where }u_{j}\text{ solves (37), }j\in\{1,2\}.\]
Its linearisation is defined by \(F_{j}^{\prime}(\ell)[\underline{d\ell},\underline{d\gamma}]=v_{j\,y}|_{\Gamma_ {1}}\), where \(v_{j}\) solves
\[\begin{split}-\triangle v_{j}=& 0\quad\text{ in }D(\ell)\\ v_{j}=& 0\quad\text{ on }\Gamma_{1}\\ Bv_{j}=& 0\quad\text{ on }\Gamma_{2}(\ell)\\ B_{\ell,\tilde{\gamma}}v_{j}=& G(u_{j},\ell,\tilde{ \gamma})(\underline{d\ell},\underline{d\gamma})\quad\text{ on }\Gamma_{0}(\ell)\end{split}\]
cf., (36), where
\[\begin{split} G(u_{j},\ell,\tilde{\gamma})(\underline{d\ell}, \underline{d\gamma}))(x):=&-\big{(}u_{j\,yy}(x,\ell(x))-\ell^{ \prime}(x)u_{j\,xy}(x,\ell(x))+\tilde{\gamma}(x)u_{j\,y}(x,\ell(x))\big{)} \underline{d\ell}\\ &+\underline{d\ell^{\prime}}(x)u_{j\,x}(x,\ell(x))-\underline{d \gamma}(x)u_{j}(x,\ell(x)).\end{split} \tag{38}\]
Using the PDE we have the identity
\[G(u_{j},\ell,\tilde{\gamma})(\underline{d\ell},\underline{d\gamma})=\tfrac{d} {dx}\Big{[}\underline{d\ell}(x)\,u_{jx}(x,\ell(x))\Big{]}-\underline{d\ell}( x)\tilde{\gamma}(x)u_{jy}(x,\ell(x))-\underline{d\gamma}(x)u_{j}(x,\ell(x)).\]
Thus, computation of a Newton step \((\underline{d\ell},\underline{d\gamma})=(\underline{d\ell}^{(k)},\underline{ d\gamma}^{(k)})\) starting from some iterate \((\ell^{(k)},\tilde{\gamma}^{(k)})=(\ell,\tilde{\gamma})\) amounts to solving the system
\[\begin{array}{rcl}-\triangle z_{j}=&0&\text{in }D(\ell)\\ z_{j}=&f_{j}&\text{on }\Gamma_{1}\\ Bz_{j}=&0&\text{on }\Gamma_{2}(\ell)\\ z_{j\,y}-g_{j}=&0&\text{on }\Gamma_{1}\\ B_{\ell,\tilde{\gamma}}z_{j}=&G(u_{j},\ell,\tilde{\gamma})(\underline{d\ell}, \underline{d\gamma})&\text{on }\Gamma_{0}(\ell)\end{array}\quad j\in\{1,2\} \tag{39}\]
(note that \(z_{j}=u_{j}+v_{j}\) and \(z_{j\,y}|_{\Gamma_{1}}-g_{j}=F_{j}(\ell)+F_{j}^{\prime}(\ell)(\underline{d\ell},\underline{d\gamma})\)).
With pre-computed (regularised) solutions \(\bar{z}_{j}\) of the Cauchy problem on the rectangular hold-all domain
\[\begin{split}-\triangle\bar{z}_{j}=& 0\text{ in }D(\bar{l})=(0,L)\times(0,\bar{l})\\ \bar{z}_{j}=& f_{j}\text{ on }\Gamma_{1}\\ B\bar{z}_{j}=& 0\text{ on }\Gamma_{2}(\bar{l})=\{0,L\}\times(0,\bar{l})\\ \bar{z}_{j\,y}-g_{j}=& 0\text{ on }\Gamma_{1}\end{split} \tag{40}\]
this reduces to resolving the following system for \((\underline{d\ell},\underline{d\gamma})\) on \(\Gamma_{0}(\ell)\)
\[\begin{split} G(u_{1},\ell,\tilde{\gamma})(\underline{d\ell}, \underline{d\gamma})=& B_{\ell,\tilde{\gamma}}\bar{z}_{1}\\ G(u_{2},\ell,\tilde{\gamma})(\underline{d\ell},\underline{d\gamma})=& B_{\ell,\tilde{\gamma}}\bar{z}_{2}.\end{split} \tag{41}\]
To obtain more explicit expressions for \(\underline{d\ell}\) and \(\underline{d\gamma}\) from (41), one can apply an elimination strategy, that is, multiply the boundary condition for \(v_{j}\) on \(\Gamma_{0}(\ell)\) with the trace \(u_{3-j}(x,\ell(x))\) of the respective other state and subtract, to obtain
\[\begin{split} B_{\ell,\tilde{\gamma}}v_{1}\,u_{2}-B_{\ell, \tilde{\gamma}}v_{2}\,u_{1}&=u_{2}\tfrac{d}{dx}\Big{[}\underline{d \ell}\,u_{1x}\Big{]}-u_{1}\tfrac{d}{dx}\Big{[}\underline{d\ell}\,u_{2x}\Big{]} -\underline{d\ell}\tilde{\gamma}\Big{(}u_{1y}u_{2}-u_{2y}u_{1}\Big{)}\\ &=\tfrac{d}{dx}\Big{[}\underline{d\ell}\Big{(}u_{1x}u_{2}-u_{2x} u_{1}\Big{)}\Big{]}-\underline{d\ell}\Big{(}\ell^{\prime}\big{(}u_{1x}u_{2y}-u_{2x} u_{1y}\big{)}+\tilde{\gamma}\big{(}u_{1y}u_{2}-u_{2y}u_{1}\big{)}\Big{)}\\ &=:\tfrac{d}{dx}\Big{[}\underline{d\ell}\tilde{\alpha}\Big{]}- \underline{d\ell}\tilde{\beta}\end{split}\]
where we have skipped the arguments \((x)\) of \(b_{1}\), \(b_{2}\), \(\underline{d\ell}\), \(\underline{d\gamma}\), \(\ell^{\prime}\) and \((x,\ell(x))\) of \(u_{1}\), \(u_{2}\) and its derivatives for better readability.
With the pre-computed (regularised) solutions \(\bar{z}_{j}\) of the Cauchy problem (40) one can therefore carry out a Newton step by computing \(u_{j}^{(k)}=u_{j}\) from the well-posed mixed elliptic boundary value problem (37) with \(\ell=\ell^{(k)}\), \(\tilde{\gamma}=\tilde{\gamma}^{(k)}\) and updating
\[\ell^{(k+1)}(x)=\ell^{(k)}(x)-\underline{d\ell}(x) \tag{42}\] \[\text{where}\quad\underline{d\ell}(x) =\frac{1}{\tilde{\alpha}[\ell^{(k)},u^{(k)}](x)}\Big{\{}\!\exp \!\Big{(}\!-\int_{0}^{x}\frac{\tilde{\beta}[\ell^{(k)},u^{(k)}]}{\tilde{ \alpha}[\ell^{(k)},u^{(k)}]}(s)\,ds\Big{)}\underline{d\ell}(0)\tilde{\alpha}[ \ell^{(k)},u^{(k)}](0)\] \[\qquad\qquad\qquad+\int_{0}^{x}\tilde{b}(s)\exp\!\Big{(}\!-\int_ {s}^{x}\frac{\tilde{\beta}[\ell^{(k)},u^{(k)}]}{\tilde{\alpha}[\ell^{(k)},u^{( k)}]}(t)\,dt\Big{)}\,ds\Big{\}}\] \[\text{with}\quad\tilde{b}(x) =\big{(}\partial_{\tilde{\nu}}\bar{z}_{1}(x,\ell^{(k)}(x))+\tilde{ \gamma}^{(k)}(x)\bar{z}_{1}(x,\ell^{(k)}(x))\big{)}u_{2}^{(k)}(x,\ell^{(k)}(x))\] \[\qquad\qquad\qquad-\big{(}\partial_{\tilde{\nu}}\bar{z}_{2}(x, \ell^{(k)}(x))+\tilde{\gamma}^{(k)}(x)\bar{z}_{2}(x,\ell^{(k)}(x))\big{)}u_{1} ^{(k)}(x,\ell^{(k)}(x))\] \[\tilde{\gamma}^{(k+1)}(x) =\tilde{\gamma}^{(k)}(x)-\underline{d\gamma}(x)\] (43) \[\text{where}\quad\underline{d\gamma}(x) =\frac{1}{u_{1}^{(k)}(x,\ell^{(k)}(x))}\Big{(}\frac{d}{dx}\Big{[} \underline{d\ell}(x)\,u_{1x}^{(k)}(x,\ell^{(k)}(x))\Big{]}-\underline{d\ell}( x)\tilde{\gamma}^{(k)}(x)u_{1y}^{(k)}(x,\ell(x))-b_{1}(x)\Big{)}\] \[=\frac{1}{u_{1}^{(k)}(x,\ell^{(k)}(x))}\Big{(}\frac{d}{dx}\Big{[} \underline{d\ell}(x)\tilde{\alpha}[\ell^{(k)},u^{(k)}](x)\Big{]}\] \[\qquad\qquad-\underline{d\ell}(x)\Big{(}\tilde{\gamma}^{(k)}(x)u _{1y}^{(k)}(x,\ell(x))+\frac{d}{dx}\Big{[}\frac{u_{1x}^{(k)}(x,\ell^{(k)}(x))} {\tilde{\alpha}[\ell^{(k)},u^{(k)}](x)}\Big{]}\Big{)}-b_{1}(x)\Big{)}\]
Since this elimination procedure needs second derivative computations, we do not use it for the reconstruction but stay with a simultaneous computation of the Newton step \((\underline{d\ell},\underline{d\gamma})\) from (41). Still, (42), (43) will be useful in the proof of linearised uniqueness to follow.
### Reconstructions
In Figure 3 we show a simultaneous reconstruction of \(\ell(x)\) and \(\tilde{\gamma}(x)\) obtained from excitations with \(f_{1}(x)=1+x+x^{2}\) and \(f_{2}(x)=4x^{2}-3x^{3}\) (chosen to comply with the impedance conditions \(Bu=\partial_{\nu}u+u=0\) at \(x\in\{0,L\}\), \(L=1\)).
Figure 3: **Simultaneous recovery of \(\ell(x)\) and \(\tilde{\gamma}(x)\)**
Assuming the boundary values of \(\ell\) and \(\tilde{\gamma}\) to be known, we used them in our (linear; in the concrete setting constant) starting guesses. The actual values are shown in black (solid), the reconstructed in red, the starting guesses as dashed lines.
The relative errors at different noise levels are listed in Table 2.
While the numbers suggest approximately logarithmic convergence, which is in line with what is to be expected for such a severely ill-posed problem, our result in Section 3.2 is about convergence only; proving rates would go beyond the scope of this paper.
### Convergence
Proving convergence of iterative regularisation methods for nonlinear ill-posed problems always requires certain conditions on the nonlinearity of the forward operator. These can be verified for a slightly modified formulation of the problem; in particular, instead of a reduced formulation involving the parameter-to-state map as used in Section 3.1, we consider the all-at-once formulation (involving a second copy of \(\tilde{\gamma}\))
\[\begin{split}\tilde{F}_{1}(u_{1},\ell,\tilde{\gamma}_{1})=& 0\\ \tilde{F}_{2}(u_{2},\ell,\tilde{\gamma}_{2})=& 0\\ P(\tilde{\gamma}_{1},\tilde{\gamma}_{2})=& 0,\end{split} \tag{44}\]
where
\[\tilde{F}_{j}:(u_{j},\ell,\tilde{\gamma}_{j})\mapsto\left(\begin{array}{c}- \triangle u_{j}\\ u_{j}|_{\Gamma_{1}}-f_{j}\\ u_{j\,y}|_{\Gamma_{1}}-g_{j}\\ Bu_{j}|_{\Gamma_{2}(\ell)}\\ B_{\ell,\tilde{\gamma}_{j}}u_{j}|_{\Gamma_{0}(\ell)}\end{array}\right)\;,\;j\in \{1,2\},\quad P(\tilde{\gamma}_{1},\tilde{\gamma}_{2}):=(\tilde{\gamma}_{1}- \tilde{\gamma}_{2},\ell(0)-\ell^{0}) \tag{45}\]
with a given endpoint value \(\ell^{0}\). (The first component of \(\tilde{F}_{j}\) means that we consider the Poisson equation on the hold-all domain \(D(\bar{l})\).)
\begin{table}
\begin{tabular}{|c||c|c|} \hline noise level & \(\|\ell-\ell^{\dagger}\|_{L^{2}(\Omega)}/\|\ell^{\dagger}\|_{L^{2}(\Omega)}\) & \(\|\tilde{\gamma}-\tilde{\gamma}^{\dagger}\|_{L^{2}(\Omega)}/\|\tilde{\gamma}^{ \dagger}\|_{L^{2}(\Omega)}\) \\ \hline
1\% & 0.0145 & 0.0251 \\
2\% & 0.0152 & 0.0263 \\
5\% & 0.0191 & 0.0355 \\
10\% & 0.0284 & 0.0587 \\ \hline \end{tabular}
\end{table}
Table 2: Relative errors for reconstructions of \(\ell\) and \(\tilde{\gamma}\) at several noise levels.
#### Range invariance of the forward operator
The forward operator \(\tilde{F}:\mathbb{X}\to\mathbb{Y}\) defined by \(\tilde{F}(u_{1},u_{2},\ell,\tilde{\gamma}_{1},\tilde{\gamma}_{2})=(\tilde{F}_{1}(u_{1},\ell,\tilde{\gamma}_{1}),\tilde{F}_{2}(u_{2},\ell,\tilde{\gamma}_{2}))\) along with its linearisation
\[\tilde{F}^{\prime}_{j}(u_{j},\ell,\tilde{\gamma}_{j})(\underline{du}_{j}, \underline{d\ell},\underline{d\gamma}_{j})=\left(\begin{array}{c}-\triangle \underline{du}_{j}\\ \underline{du}_{j}|_{\Gamma_{1}}\\ \underline{du}_{j}\,y|_{\Gamma_{1}}\\ B\underline{du}_{j}|_{\Gamma_{2}(\ell)}\\ B_{\ell,\tilde{\gamma}_{j}}\underline{du}_{j}|_{\Gamma_{0}(\ell)}-G(u_{j}, \ell,\tilde{\gamma})(\underline{d\ell},\underline{d\gamma})\end{array}\right) \tag{46}\]
and the operator \(r=(r_{u\,1},r_{u\,2},r_{\ell},r_{\tilde{\gamma}\,1},r_{\tilde{\gamma}\,2})\) defined so that \([B_{\ell,\tilde{\gamma}_{j}}-B_{\ell_{0},\tilde{\gamma}_{0,j}}]u_{j}+G(u_{0,j},\ell_{0},\tilde{\gamma}_{0,j})(\ell-\ell_{0},r_{\gamma\,j}(u_{1},u_{2},\ell,\tilde{\gamma}_{1},\tilde{\gamma}_{2}))=0\), that is,
\[r_{u\,j}(u_{1},u_{2},\ell,\tilde{\gamma}_{1},\tilde{\gamma}_{2}) =u_{j}-u_{0,j}\] \[r_{\ell}(u_{1},u_{2},\ell,\tilde{\gamma}_{1},\tilde{\gamma}_{2}) =\ell-\ell_{0}\] \[r_{\gamma\,j}(u_{1},u_{2},\ell,\tilde{\gamma}_{1},\tilde{\gamma} _{2}) =\frac{1}{u_{0,j}(\ell_{0})}\] \[\quad\times\Bigl{(}-\bigl{(}u_{0,j\,yy}(\ell_{0})-\ell_{0}^{ \prime}(x)u_{0,j\,xy}(\ell_{0})+\tilde{\gamma}_{0}(x)u_{0,j\,y}(\ell_{0}) \bigr{)}(\ell-\ell_{0})+(\ell^{\prime}-\ell_{0}^{\prime})u_{0,j\,x}(\ell_{0})\] \[\qquad+\bigl{(}u_{j\,y}(\ell)-\ell^{\prime}(x)u_{j\,x}(\ell)+ \tilde{\gamma}(x)u_{j}(\ell)\bigr{)}-\bigl{(}u_{j\,y}(\ell_{0})-\ell_{0}^{ \prime}(x)u_{j\,x}(\ell_{0})+\tilde{\gamma}_{0}(x)u_{j}(\ell_{0})\bigr{)} \Bigr{)}\]
\(j\in\{1,2\}\), satisfies the differential range invariance condition
\[\text{for all }\xi\in U\,\exists r(\xi)\in\mathbb{X}:=V^{2}\times X^{\ell} \times(X^{\tilde{\gamma}})^{2}\,:\;\tilde{F}(\xi)-\tilde{F}(\xi_{0})=\tilde{F }^{\prime}(\xi_{0})\,r(\xi), \tag{47}\]
in a neighborhood \(U\) of the reference point \(\xi_{0}:=(u_{0,1},u_{0,2},\ell_{0},\tilde{\gamma}_{0,1},\tilde{\gamma}_{0,2})\). Here we use the abbreviation \(u(\ell)\) for \((u(\ell))(x)=u(x,\ell(x))\) and
\[\xi:=(u_{1},u_{2},\ell,\tilde{\gamma}_{1},\tilde{\gamma}_{2}).\]
The difference between \(r\) and the shifted identity can be written as
\[r(u_{1},u_{2},\ell,\tilde{\gamma}_{1},\tilde{\gamma}_{2})-\bigl{(} (u_{1},u_{2},\ell,\tilde{\gamma}_{1},\tilde{\gamma}_{2})-(u_{0,1},u_{0,2},\ell _{0},\tilde{\gamma}_{0,1},\tilde{\gamma}_{0,2})\bigr{)}\] \[=\bigl{(}0,0,0,\tfrac{1}{u_{0,1}(\ell_{0})}(I_{1}+II_{1}+III_{1}), \tfrac{1}{u_{0,2}(\ell_{0})}(I_{2}+II_{2}+III_{2})\bigr{)}^{T},\]
with
\[I_{j}= \tilde{\gamma}_{j}(u_{j}(\ell)-u_{0,j}(\ell_{0})-\tilde{\gamma}_{ 0,j}u_{0,j\,y}(\ell_{0})(\ell-\ell_{0})\] \[= \tilde{\gamma}_{j}\Bigl{(}(u_{j}-u_{0,j})(\ell)-(u_{j}-u_{0,j})( \ell_{0})\Bigr{)}+(\tilde{\gamma}_{j}-\tilde{\gamma}_{0,j})(u_{0,j}(\ell)-u_{0, j}(\ell_{0}))\] \[\qquad+\tilde{\gamma}_{0,j}\Bigl{(}u_{0,j}(\ell)-u_{0,j}(\ell_{0 })-u_{0,j\,y}(\ell_{0})(\ell-\ell_{0})\Bigr{)}\] \[II_{j}= u_{0,j\,y}(\ell)-u_{0,j\,y}(\ell_{0})-u_{0,j\,yy}(\ell_{0})(\ell- \ell_{0})\] \[III_{j}= -\ell_{0,j}^{\prime}\Bigl{(}u_{0,j\,x}(\ell)-u_{0,j\,x}(\ell_{0})- u_{0,j\,xy}(\ell_{0})(\ell-\ell_{0})\Bigr{)}\] \[\qquad-\ell^{\prime}\Bigl{(}(u_{j\,x}-u_{0,j\,x})(\ell)-(u_{j\,x}- u_{0,j\,x})(\ell_{0})\Bigr{)}\] \[\qquad-(\ell^{\prime}-\ell_{0}^{\prime})\Bigl{(}u_{0,j\,x}(\ell)- u_{0,j\,x}(\ell_{0})\,+\,u_{j\,x}(\ell_{0})-u_{0,j\,x}(\ell_{0})\Bigr{)}.\]
The function spaces are supposed to be chosen according to
\[V\subseteq W^{2,\infty}(D(\overline{l})),\quad X^{\ell}=W^{1,p}(0,L),\quad X^{ \tilde{\gamma}}=L^{p}(0,L),\quad\text{ with }p>1\]
so that \(W^{1,p}(0,L)\) continuously embeds into \(L^{\infty}(0,L)\), 1 e.g.
Footnote 1: which also allows to guarantee \(\ell\leq\overline{l}\) for all \(\ell\) with \(\|\ell-\ell_{0}\|_{X^{\ell}}\) small enough
\[X^{\ell}=H^{1}(0,L),\quad X^{\tilde{\gamma}}=L^{2}(0,L), \tag{48}\]
in order to work with Hilbert spaces. In this setting, using
\[|v(x,\ell(x))-v(x,\ell_{0}(x))|=\left|\int_{0}^{1}v_{y}(x,\ell_{0}(x)+s(\ell(x)-\ell_{0}(x)))\,ds\,(\ell(x)-\ell_{0}(x))\right|\] \[\qquad\leq\|v_{y}\|_{L^{\infty}(D(\overline{l}))}\,|\ell(x)-\ell_{0}(x)|\] \[|v(x,\ell(x))-v(x,\ell_{0}(x))-v_{y}(x,\ell_{0}(x))(\ell(x)-\ell_{0}(x))|\] \[\qquad=\left|\int_{0}^{1}\int_{0}^{1}v_{yy}(x,\ell_{0}(x)+s^{\prime}s(\ell(x)-\ell_{0}(x)))\,s\,ds\,ds^{\prime}\,(\ell(x)-\ell_{0}(x))^{2}\right|\] \[\qquad\leq\tfrac{1}{2}\|v_{yy}\|_{L^{\infty}(D(\overline{l}))}\,|\ell(x)-\ell_{0}(x)|^{2}\]
we can further estimate
\[\|I_{j}\|_{L^{p}(0,L)}\leq \|\tilde{\gamma}_{j}\|_{L^{p}(0,L)}\|u_{j\,y}-u_{0,j\,y}\|_{L^{ \infty}(D(\overline{l})}\|\ell-\ell_{0}\|_{L^{\infty}(0,L)}\] \[+\|\tilde{\gamma}_{j}-\tilde{\gamma}_{0,j}\|_{L^{p}(0,L)}\|u_{0,j \,y}\|_{L^{\infty}(D(\overline{l})}\|\ell-\ell_{0}\|_{L^{\infty}(0,L)}^{2}\] \[+\tfrac{1}{2}\|\tilde{\gamma}_{0,j}\|_{L^{p}(0,L)}\|u_{0,j\,yy}\| _{L^{\infty}(D(\overline{l})}\|\ell-\ell_{0}\|_{L^{\infty}(0,L)}^{2}\] \[\|II_{j}\|_{L^{p}(0,L)}\leq L^{1/p}\Big{(}\tfrac{1}{2}\|u_{0,j\,yyy}\|_{L^{\infty}(D(\overline{l})}\|\ell- \ell_{0}\|_{L^{\infty}(0,L)}^{2}+\|u_{j\,yy}-u_{0,j\,yy}\|_{L^{\infty}(D( \overline{l})}\|\ell-\ell_{0}\|_{L^{\infty}(0,L)}\Big{)}\] \[\|III_{j}\|_{L^{p}(0,L)}\leq \|\ell_{0,j}^{\prime}\|_{L^{p}(0,L)}\|u_{0,j\,yy}\|_{L^{\infty}(D (\overline{l})}\|\ell-\ell_{0}\|_{L^{\infty}(0,L)}^{2}\] \[+\|\ell^{\prime}\|_{L^{p}(0,L)}\|u_{j\,xy}-u_{0,j\,xy}\|_{L^{ \infty}(D(\overline{l})}\|\ell-\ell_{0}\|_{L^{\infty}(0,L)}\] \[+\|\ell^{\prime}-\ell_{0}^{\prime}\|_{L^{p}(0,L)}\Big{(}\|u_{0,j\, xy}\|_{L^{\infty}(D(\overline{l})}\|\ell-\ell_{0}\|_{L^{\infty}(0,L)}+\|u_{j\,x}( \ell_{0})-u_{0,j\,x}(\ell_{0})\|_{L^{\infty}(0,L)}\Big{)}.\]
Altogether, choosing \(u_{0,j}\) to be smooth enough and bounded away from zero on \(\ell_{0}\)2
Footnote 2: Note that in our all-at-once setting, \(u_{0,j}\) does not necessarily need to satisfy a PDE, which (up to closeness to \(u^{\dagger}\)) allows for plenty of freedom in its choice.
\[\tfrac{1}{u_{0,j}(\ell_{0})}\in L^{\infty}(0,L)\,,\quad u_{0,j\,y},\,u_{0,j\,yy}\in W^{1,\infty}(D(\overline{l}))\,,\quad j\in\{1,2\}, \tag{49}\]
we have shown that
\[\|r(\xi)-(\xi-\xi_{0})\|_{V^{2}\times X^{\ell}\times(X^{\tilde{\gamma}})^{2}} \leq C\|\xi-\xi_{0}\|_{V^{2}\times X^{\ell}\times(X^{\tilde{\gamma}})^{2}}^{2} \tag{50}\]
for some \(C>0\). Analogously to, e.g., [24, 23] this provides us with the estimate
\[\begin{split}\exists\,c_{r}\in(0,1)\,\forall\xi\in U(\subseteq V ^{2}\times X^{\ell}\times(X^{\tilde{\gamma}})^{2})&\,:\|r(\xi)-r( \xi^{\dagger})-(\xi-\xi^{\dagger})\|_{X}\\ &\leq c_{r}\|\xi-\xi^{\dagger}\|_{X}\end{split} \tag{51}\]
where \(\xi^{\dagger}\) is the actual solution.
#### Linearised uniqueness
Results on uniqueness of Problem 3.1 can be found in [4, 34]. In particular, linear independence of the functions \(g_{1}\), \(g_{2}\) is sufficient for determining both \(\ell\) and \(\tilde{\gamma}\) in (36).
Here we will show linearised uniqueness, as this is another ingredient of the convergence proof. More precisely, we show that the intersection of the nullspaces of the linearised forward operator \(\tilde{F}^{\prime}(u_{0,1},u_{0,2},\ell_{0},\tilde{\gamma}_{0,1},\tilde{\gamma}_{0,2})\) and the penalisation operator \(P\) is trivial. To this end, assume that \(\tilde{F}^{\prime}(u_{0,1},u_{0,2},\ell_{0},\tilde{\gamma}_{0,1},\tilde{\gamma}_{0,2})\,(\underline{du}_{1},\underline{du}_{2},\underline{d\ell},\underline{d\gamma}_{1},\underline{d\gamma}_{2})=0\) and \(P(\underline{du}_{1},\underline{du}_{2},\underline{d\ell},\underline{d\gamma}_{1},\underline{d\gamma}_{2})=0\), where the latter simply means \(\underline{d\gamma}_{1}=\underline{d\gamma}_{2}=:\underline{d\gamma}\) and \(\underline{d\ell}(0)=0\). From the first four lines in (46) we conclude that \(\underline{du}_{j}\) satisfies a homogeneous Cauchy problem and therefore has to vanish on \(D(\ell_{0})\) for \(j\in\{1,2\}\). Thus, \(B_{\ell_{0},\tilde{\gamma}_{0,j}}\underline{du}_{j}=0\) and by the same elimination procedure that led to (42), (43) (using also \(\underline{d\ell}(0)=0\)) we obtain that \(\underline{d\ell}=0\) and therefore \(\underline{d\gamma}=0\).
#### Convergence of Newton type schemes
#### (a) Regularised frozen Newton with penalty
We apply a frozen Newton method with conventional Tikhonov (and no fractional) regularisation but with penalty \(P\) as in [24, 23].
\[\xi_{n+1}^{\delta}=\xi_{n}^{\delta}+(K^{\star}K+P^{\star}P+\varepsilon_{n}I)^ {-1}\Big{(}K^{\star}(\vec{h}^{\delta}-\tilde{F}(\xi_{n}^{\delta}))-P^{\star}P \xi_{n}^{\delta}+\varepsilon_{n}(\xi_{0}-\xi_{n}^{\delta})\Big{)} \tag{52}\]
where \(K=\tilde{F}^{\prime}(\xi_{0})\) and \(K^{\star}\) denotes the Hilbert space adjoint of \(K:\mathbb{X}\to\mathbb{Y}\) and (cf. (48))
\[\mathbb{X}=H^{3+\epsilon}(D(\bar{l}))^{2}\!\times\!H^{1}(0,L)\!\times\!L^{2}(0,L)^{2},\qquad\mathbb{Y}:=\big{(}L^{2}(D(\bar{l}))\!\times\!L^{2}(\Gamma_{1})^{2}\!\times\!L^{2}(\Gamma_{2}(\bar{l}))\!\times\!L^{2}(0,L)\big{)}^{2} \tag{53}\]
The regularisation parameters are taken from a geometric sequence \(\varepsilon_{n}=\varepsilon_{0}\theta^{n}\) for some \(\theta\in(0,1)\), and the stopping index is chosen such that
\[\varepsilon_{n_{\star}(\delta)}\to 0\text{ and }\delta^{2}/\varepsilon_{n_{\star}(\delta)-1}\to 0 \text{ as }\delta\to 0, \tag{54}\]
where \(\delta\) is the noise level according to
\[\|(f_{1}^{\delta},f_{2}^{\delta},g_{1}^{\delta},g_{2}^{\delta})-(f_{1},f_{2},g_{1},g_{2})\|_{L^{2}(\Omega)^{4}}\leq\delta. \tag{55}\]
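In a finite-dimensional discretisation, one step of (52) is a single linear solve with the frozen operator. The following sketch (Python; the matrices `K`, `P` and the callable `F` stand for discretisations of \(\tilde{F}^{\prime}(\xi_{0})\), \(P\) and \(\tilde{F}\), and the stopping test is only a crude stand-in for the a priori rule (54)) illustrates the structure:

```python
import numpy as np

def frozen_newton(F, K, P, xi0, h_delta, delta, eps0=1.0, theta=0.5, n_max=50):
    """Iteration (52) with eps_n = eps0 * theta**n and K = F'(xi0) kept frozen."""
    xi = xi0.copy()
    KtK, PtP = K.T @ K, P.T @ P
    for n in range(n_max):
        eps = eps0 * theta**n
        if eps < delta**2:                       # illustrative surrogate for (54)
            break
        rhs = K.T @ (h_delta - F(xi)) - PtP @ xi + eps * (xi0 - xi)
        xi = xi + np.linalg.solve(KtK + PtP + eps * np.eye(len(xi)), rhs)
    return xi
```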
An application of [20, Theorem 2.2] together with our verification of range invariance (47) with (50) and linearised uniqueness yields the following convergence result.
**Theorem 3.1**.: _Let \(\xi_{0}\in U:=\mathcal{B}_{\rho}(\xi^{\dagger})\) for some \(\rho>0\) sufficiently small, assume that (49) holds and let the stopping index \(n_{*}=n_{*}(\delta)\) be chosen according to (54)._
_Then the iterates \((\xi_{n}^{\delta})_{n\in\{1,\ldots,n_{*}(\delta)\}}\) are well-defined by (52), remain in \(\mathcal{B}_{\rho}(\xi^{\dagger})\) and converge in \(X\) (defined as in (53)), \(\|\xi_{n_{*}(\delta)}^{\delta}-\xi^{\dagger}\|_{X}\to 0\) as \(\delta\to 0\)._
_In the noise free case \(\delta=0\), \(n_{*}(\delta)=\infty\) we have \(\|\xi_{n}-\xi^{\dagger}\|_{X}\to 0\) as \(n\to\infty\)._
**(b) Frozen Newton with penalty applied to fractionally regularised problem**
Replace \(\tilde{F}_{j}\) in (45) by
\[\tilde{F}_{j}^{\alpha}:(u_{+j},u_{-j},\ell,\tilde{\gamma}_{j})\mapsto\left(\begin{array}{c}\partial_{y}^{\alpha}\overline{u}_{+j}^{\overline{l}}+\sqrt{-\triangle_{x}}\overline{u}_{+j}^{\overline{l}}\\ \partial_{y}u_{-j}+\sqrt{-\triangle_{x}}u_{-j}\\ u_{+j}|_{\Gamma_{1}}-\frac{1}{2}(f_{j}+\sqrt{-\triangle_{x}}^{-1}g_{j})\\ u_{-j}|_{\Gamma_{1}}-\frac{1}{2}(f_{j}-\sqrt{-\triangle_{x}}^{-1}g_{j})\\ (Bu_{+j},Bu_{-j})|_{\Gamma_{2}(\overline{l})}\\ B_{\ell,\tilde{\gamma}_{j}}(u_{+j}+u_{-j})|_{\Gamma_{0}(\ell)}\end{array}\right)\,,\,\,j\in\left\{1,2\right\}, \tag{56}\]
where \(\partial_{y}^{\alpha}w\) is the fractional DC derivative with endpoint \(0\) and \(\overline{u}^{\overline{l}}(y)=u(\overline{l}-y)\), cf. Section 1. Range invariance and linearised uniqueness can be derived analogously to the previous section and therefore we can apply (52) with \(\tilde{F}^{\alpha}\) in place of \(\tilde{F}\) and conclude its convergence to a solution \((u_{+1}^{\alpha,\dagger},u_{-1}^{\alpha,\dagger},u_{+2}^{\alpha,\dagger},u_{- 2}^{\alpha,\dagger},\ell^{\alpha,\dagger},\tilde{\gamma}_{1}^{\alpha,\dagger }=\tilde{\gamma}_{2}^{\alpha,\dagger})\) of (44) with \(\tilde{F}_{j}^{\alpha}(u_{+j},u_{-j},\ell,\tilde{\gamma}_{j})\) in place of \(\tilde{F}_{j}(u_{j},\ell,\tilde{\gamma}_{j})\) for any fixed \(\alpha\in(0,1)\). With the abbreviation
\[\xi^{\alpha}:=(u_{+,1}^{\alpha},u_{-,1}^{\alpha},u_{+,2}^{\alpha},u_{-,2}^{ \alpha},\ell^{\alpha},\tilde{\gamma}_{1}^{\alpha}=\tilde{\gamma}_{2}^{\alpha})\]
we thus have the following convergence result.
**Corollary 3.1**.: _Let \(\xi_{0}^{\alpha}\in U:=\mathcal{B}_{\rho}(\xi^{\alpha,\dagger})\) for some \(\rho>0\) sufficiently small, assume that (49) holds and let the stopping index \(n_{*}=n_{*}(\delta)\) be chosen according to (54)._
_Then the iterates \((\xi_{n}^{\alpha,\delta})_{n\in\left\{1,\ldots,n_{*}(\delta)\right\}}\) are well-defined by (52) with \(\tilde{F}:=\tilde{F}^{\alpha(\delta)}\), remain in \(\mathcal{B}_{\rho}(\xi^{\dagger})\) and converge in \(X\) (defined as in (53) with \(H^{3+\epsilon}(D(\overline{l}))^{2}\) replaced by \(H^{3+\epsilon}(D(\overline{l}))^{4}\)) \(\|\xi_{n_{*}(\delta)}^{\alpha,\delta}-\xi^{\alpha,\dagger}\|_{X}\to 0\) as \(\delta\to 0\). In the noise free case \(\delta=0\), \(n_{*}(\delta)=\infty\) we have \(\|\xi_{n}^{\alpha}-\xi^{\alpha,\dagger}\|_{X}\to 0\) as \(n\to\infty\)._
It remains to estimate the approximation error \((\ell^{\alpha,\dagger}-\ell^{\dagger},\tilde{\gamma}^{\alpha,\dagger}-\tilde{\gamma}^{\dagger})\). In Section 1 we have seen that \(u_{j}^{\dagger}=u_{+,j}^{1,\dagger}+u_{-,j}^{1,\dagger}\) where \(u_{-,j}^{\alpha,\dagger}-u_{-,j}^{1,\dagger}=0\) and
\[u_{+,j}^{\alpha,\dagger}(x,y)-u_{+,j}^{1,\dagger}(x,y)=\tfrac{1}{2}\sum_{i=1}^ {\infty}(f_{ij}+\tfrac{1}{\sqrt{\lambda_{i}}}g_{ij})\Big{(}\tfrac{1}{E_{\alpha, 1}(-\sqrt{\lambda_{i}}y^{\alpha})}-\exp(\sqrt{\lambda_{i}}y)\Big{)}\phi_{i}(x).\]
Moreover, subtracting the two identities \(B_{\ell^{\alpha,\dagger},\tilde{\gamma}^{\alpha,\dagger}}u_{+,j}^{\alpha, \dagger}=0\), \(B_{\ell^{\dagger},\tilde{\gamma}^{\dagger}}u_{+,j}^{\dagger}=0\), we arrive at the following differential algebraic system for \(\underline{d}\underline{\ell}=\ell^{\alpha,\dagger}-\ell^{\dagger}\), \(\underline{d}\underline{\gamma}=\tilde{\gamma}^{\alpha,\dagger}-\tilde{\gamma }^{\dagger}\)
\[-u_{+,1,x}^{\dagger}(\ell^{\dagger})\,\underline{d}\underline{\ell}^{\prime}+d _{1}^{\alpha}\,\underline{d}\underline{\ell}+u_{+,1}^{\dagger}(\ell^{\dagger}) \,\underline{d}\underline{\gamma}= b_{1}^{\alpha}\]
\[-u_{+,2,x}^{\dagger}(\ell^{\dagger})\,\underline{d}\underline{\ell}^{\prime}+d _{2}^{\alpha}\,\underline{d}\underline{\ell}+u_{+,2}^{\dagger}(\ell^{\dagger}) \,\underline{d}\underline{\gamma}= b_{2}^{\alpha},\]
where
\[d_{j}^{\alpha}(x)= \int_{0}^{1}I(u_{+,j,y}^{\alpha,\dagger},\ell^{\alpha,\dagger}, \tilde{\gamma}^{\alpha,\dagger};x,\ell^{\dagger}(x)+\theta(\ell^{\alpha, \dagger}(x)-\ell^{\dagger}(x)))\,d\theta\] \[b_{j}^{\alpha}(x)= -I(u_{+,j}^{\alpha,\dagger}-u_{+,j}^{\dagger},\ell^{\alpha, \dagger},\tilde{\gamma}^{\alpha,\dagger};x,\ell^{\alpha,\dagger}(x))\] \[I(u,\ell,\tilde{\gamma};x,y)= u_{y}(x,y)-\ell^{\prime}(x)u_{x}(x,y)+\tilde{\gamma}u(x,y).\]
Assuming that the Wronskian
\[W:=u^{\dagger}_{+,1,x}(\ell^{\dagger})\,u^{\dagger}_{+,2}(\ell^{\dagger})-u^{ \dagger}_{+,2,x}(\ell^{\dagger})\,u^{\dagger}_{+,1}(\ell^{\dagger}) \tag{57}\]
and one of the factors \(u^{\dagger}_{+,j}(\ell^{\dagger})\) of \(\underline{d\gamma}\) are bounded away from zero, we can conclude existence of a constant \(C>0\) independent of \(\alpha\) (note that \(W\) and \(u^{\dagger}_{+,j}\) do not depend on \(\alpha\)) such that
\[\|\ell^{\alpha,\dagger}-\ell^{\dagger}\|_{C^{1}(0,L)}+\|\tilde{\gamma}^{\alpha,\dagger}-\tilde{\gamma}^{\dagger}\|_{C(0,L)}\leq C\sum_{j=1}^{2}\|b^{\alpha}_{j}\|_{C(0,L)}\leq\tilde{C}\sum_{j=1}^{2}\|u^{\alpha,\dagger}_{+,j}-u^{\dagger}_{+,j}\|_{C((0,L)\times(\underline{l},\overline{l}))}.\]
The latter can be estimated by means of Lemmas 1.1 and 1.3 in Section 1.
Combining this with Corollary 3.1 we have the following convergence result.
**Theorem 3.2**.: _Assume that \(W\) according to (57) and either \(u^{\dagger}_{+,1}(\ell^{\dagger})\) or \(u^{\dagger}_{+,2}(\ell^{\dagger})\) are bounded away from zero and let \(\alpha(\delta)\to 1\) as \(\delta\to 0\). Moreover, let the assumptions of Corollary 3.1 hold for \(\alpha=\alpha(\delta)\), and let \((\xi^{\alpha(\delta),\delta}_{n})_{n\in\{1,\ldots,n_{*}(\delta)\}}\) be defined by (52), (54) with \(\tilde{F}:=\tilde{F}^{\alpha(\delta)}\). Then_
\[\|\ell^{\alpha(\delta),\delta}_{n_{*}(\delta)}-\ell^{\dagger}\|_{H^{1}(0,L)}+ \|\tilde{\gamma}^{\alpha(\delta),\delta}_{n_{*}(\delta)}-\tilde{\gamma}^{ \dagger}\|_{L^{2}(0,L)}\to 0\text{ as }\delta\to 0.\]
## Appendix A: Cauchy 1
**Lemma A.1**.: _For any \(\hat{\lambda}_{1}\geq 0\), \(\alpha\in(0,1)\), \(p\in(1,\frac{1}{1-\alpha})\), \(\hat{p}:=1+\frac{1}{p}\) and the constant_
\[C(\alpha,p,\bar{l}):=\|E_{\alpha,\alpha/2}\|_{L^{\infty}(\mathbb{R}^{+})}( \tilde{C}_{0}(\alpha,\alpha/2)+\tilde{C}_{1}(\alpha,\alpha/2))\Big{(}\tfrac{ \hat{p}}{\alpha-\hat{p}}\Big{)}^{\hat{p}}\max\{1,\sqrt{\bar{l}}\}^{\alpha- \hat{p}}, \tag{58}\]
_with \(\tilde{C}_{0}\), \(\tilde{C}_{1}\) as in (62), the following estimate holds_
\[\sup_{\hat{\lambda}>\hat{\lambda}_{1}}\|E_{\alpha,1}(-\hat{\lambda}\,\cdot^{\alpha})-\cdot^{\alpha-1}E_{\alpha,\alpha}(-\hat{\lambda}\,\cdot^{\alpha})\|_{L^{p}(0,\bar{l})}\leq C(\alpha,p,\bar{l})\,(1-\alpha).\]
Proof.: Abbreviating
\[\mathfrak{g}_{\mu}(y)=\tfrac{1}{\Gamma(\mu)}y^{\mu-1},\quad e_{\alpha,\beta}( y)=e_{\alpha,\beta}(y;\hat{\lambda})=y^{\beta-1}E_{\alpha,\beta}(-\hat{\lambda}y^{ \alpha}),\]
the quantity to be estimated is \(w(y):=E_{\alpha,1}(-\hat{\lambda}y^{\alpha})-y^{\alpha-1}E_{\alpha,\alpha}(- \hat{\lambda}y^{\alpha})=e_{\alpha,1}(y)-e_{\alpha,\alpha}(y)\). Using the Laplace transform identities
\[(\mathcal{L}\mathfrak{g}_{\mu})(\xi)=\xi^{-\mu},\quad(\mathcal{L}e_{\alpha, \beta})(\xi)=\frac{\xi^{\alpha-\beta}}{\hat{\lambda}+\xi^{\alpha}}, \tag{59}\]
we obtain, for some \(\beta\in(\alpha-1,\alpha)\) yet to be chosen,
\[(\mathcal{L}w)(\xi)=\frac{\xi^{\alpha-1}-1}{\hat{\lambda}+\xi^{\alpha}}=\frac {\xi^{\alpha-\beta}}{\hat{\lambda}+\xi^{\alpha}}(\xi^{\beta-1}-\xi^{\beta- \alpha})=(\mathcal{L}e_{\alpha,\beta})(\xi)\,\Big{(}(\mathcal{L}\mathfrak{g}_ {1-\beta})(\xi)-(\mathcal{L}\mathfrak{g}_{\alpha-\beta})(\xi)\Big{)},\]
hence, by Young's Convolution Inequality,
\[\|w\|_{L^{p}(0,\bar{l})}=\|e_{\alpha,\beta}\ast(\mathfrak{g}_{1-\beta}-\mathfrak{g }_{\alpha-\beta})\|_{L^{p}(0,\bar{l})}\leq\|e_{\alpha,\beta}\|_{L^{q}(0,\bar{ l})}\|\mathfrak{g}_{1-\beta}-\mathfrak{g}_{\alpha-\beta}\|_{L^{r}(0,\bar{l})}\]
provided \(\frac{1}{q}+\frac{1}{r}=1+\frac{1}{p}\).
For the first factor, under the condition
\[1\leq q<\frac{1}{1-\beta}, \tag{60}\]
that is necessary for integrability near zero, we get
\[\|e_{\alpha,\beta}\|_{L^{q}(0,\bar{l})}\leq\|E_{\alpha,\beta}\|_{L^{\infty}( \mathbb{R}^{+})}\frac{\max\{1,\bar{l}\}^{(\beta-1)+1/q}}{((\beta-1)q+1)^{1/q}}. \tag{61}\]
The second factor can be estimated by applying the Mean Value Theorem to the function \(\theta(y;\alpha,\beta):=\mathfrak{g}_{\alpha-\beta}(y)\) as follows
\[\mathfrak{g}_{1-\beta}(y)-\mathfrak{g}_{\alpha-\beta}(y)=\theta(y,1)-\theta( y;\alpha)=\frac{d}{d\alpha}\theta(y;\tilde{\alpha},\beta)\left(1-\alpha \right)=\tilde{\theta}(y;\tilde{\alpha},\beta)\left(1-\alpha\right)\]
where
\[\tilde{\theta}(y;\tilde{\alpha},\beta)= y^{\tilde{\alpha}-\beta-1}\Big{(}\tfrac{1}{\Gamma}(\tilde{ \alpha}-\beta)\log(y)-\tfrac{\Gamma^{\prime}}{\Gamma^{2}}(\tilde{\alpha}- \beta)\Big{)}\] \[= \mathfrak{g}_{\tilde{\alpha}-\beta}(y)\Big{(}\log(y)-(\log\Gamma )^{\prime}(\tilde{\alpha}-\beta)\Big{)}\]
for some \(\tilde{\alpha}\in(\alpha,1)\), with the digamma function \(\psi=\frac{\Gamma^{\prime}}{\Gamma}=(\log\circ\Gamma)^{\prime}\), for which \(\frac{\psi}{\Gamma}=\frac{\Gamma^{\prime}}{\Gamma^{2}}\) is known to be an entire function as is also the reciprocal Gamma function, thus
\[\tilde{C}_{0}(\alpha,\beta):=\sup_{\tilde{\alpha}\in(\alpha,1)}|\tfrac{\psi}{ \Gamma}(\tilde{\alpha}-\beta)|<\infty,\qquad\tilde{C}_{1}(\alpha,\beta):=\sup _{\tilde{\alpha}\in(\alpha,1)}|\tfrac{1}{\Gamma}(\tilde{\alpha}-\beta)|<\infty. \tag{62}\]
Integrability near \(y=0\) of \((y^{\tilde{\alpha}-\beta-1})^{r}\) and of \((y^{\tilde{\alpha}-\beta-1}\log(y))^{r}\) holds iff
\[1\leq r<\frac{1}{1-\tilde{\alpha}+\beta} \tag{63}\]
and yields
\[\|\mathfrak{g}_{1-\beta}-\mathfrak{g}_{\alpha-\beta}\|_{L^{r}(0,\bar{l})} \leq\sup_{\tilde{\alpha}\in(\alpha,1)}\|\tilde{\theta}(\cdot,\tilde{\alpha}, \beta)\|_{L^{r}(0,\bar{\ell})}\left(1-\alpha\right) \tag{64}\]
where
\[\|\tilde{\theta}(\cdot,\tilde{\alpha},\beta)\|_{L^{r}(0,\bar{l})}\leq(\tilde{C}_{0}(\alpha,\beta)+\tilde{C}_{1}(\alpha,\beta))\frac{\max\{1,\bar{l}\}^{(\alpha-\beta-1)+1/r}}{((\alpha-\beta-1)r+1)^{1/r}}. \tag{65}\]
Conditions (60), (63) together with \(\tilde{\alpha}\in(\alpha,1)\) are equivalent to
\[\frac{1}{p}=\frac{1}{q}+\frac{1}{r}-1>1-\tilde{\alpha}\text{ and }1<\beta+\frac{1}{q}<\tilde{\alpha}+\frac{1}{p}\]
which together with \(\tilde{\alpha}\in(\alpha,1)\) leads to the assumption
\[\frac{1}{p}>1-\alpha\]
and the choice \(\min\{0,1-\frac{1}{q}\}<\beta<\alpha+\frac{1}{p}-\frac{1}{q}\).
To minimize the factors3
Footnote 3: we do not go for asymptotics with respect to \(\bar{l}\) since we assume \(\bar{l}\) to be moderately sized anyway
\[c_{1}(q,\beta)=((\beta-1)q+1)^{-1/q},\quad c_{2}(r,\alpha-\beta)=((\alpha- \beta-1)r+1)^{-1/r}\]
in (61), (65) under the constraints (60), (63) and \(\frac{1}{q}+\frac{1}{r}=1+\frac{1}{p}\) we make the choice \(\frac{1}{q}=\frac{1}{r}=\frac{1}{2}(1+\frac{1}{p})\), \(\beta=\alpha/2\) that balances the competing pairs \(q\leftrightarrow r\), \(\beta\leftrightarrow\alpha-\beta\) and arrive at
\[c_{1}(q,\beta)=c_{2}(r,\alpha-\beta)=\left(\frac{1+\frac{1}{p}}{\alpha-1+ \frac{1}{p}}\right)^{1+\frac{1}{p}}\]
Proof.: (Lemma 1.2) To prove (19), we employ an energy estimate for the ODE satisfied by \(v(y):=E_{\alpha,1}(-\hat{\lambda}y^{\alpha})-\exp(-\hat{\lambda}y)=u_{\alpha,\hat{\lambda}}(y)-u_{1,\hat{\lambda}}(y)\),
\[\partial_{y}v+\hat{\lambda}v=-(\partial_{y}^{\alpha}-\partial_{y} )u_{\alpha,\hat{\lambda}}=:\hat{\lambda}w\] \[\text{where }w=-\tfrac{1}{\hat{\lambda}}(\partial_{y}^{\alpha}- \partial_{y})E_{\alpha,1}(-\hat{\lambda}y^{\alpha})=E_{\alpha,1}(-\hat{ \lambda}y^{\alpha})-y^{\alpha-1}E_{\alpha,\alpha}(-\hat{\lambda}y^{\alpha})\]
Testing with \(|v(\tau)|^{p-1}\text{sign}(v(\tau))\), integrating from \(0\) to \(y\), and applying Hölder's and Young's inequalities yields, after multiplication by \(p\),
\[|v(y)|^{p}+\hat{\lambda}\int_{0}^{y}|v(\tau)|^{p}\,d\tau\leq\hat{\lambda}\int _{0}^{y}|w(\tau)|^{p}\,d\tau\,, \tag{66}\]
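In more detail, the pointwise step behind (66) is the following routine computation, recorded here for convenience: multiplying the ODE by \(|v(\tau)|^{p-1}\text{sign}(v(\tau))\) gives
\[\tfrac{1}{p}\tfrac{d}{d\tau}|v(\tau)|^{p}+\hat{\lambda}|v(\tau)|^{p}=\hat{\lambda}\,w(\tau)\,|v(\tau)|^{p-1}\text{sign}(v(\tau))\leq\hat{\lambda}\Big(\tfrac{1}{p}|w(\tau)|^{p}+\tfrac{p-1}{p}|v(\tau)|^{p}\Big),\]
and integrating from \(0\) to \(y\) (with \(v(0)=0\)), absorbing the last term into the left-hand side, and multiplying by \(p\) yields (66).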
in particular
\[\|v\|_{L^{\infty}(0,\bar{l})}\leq\hat{\lambda}^{1/p}\|w\|_{L^{p}(0,\bar{l})}, \quad\|v\|_{L^{p}(0,\bar{l})}\leq\|w\|_{L^{p}(0,\bar{l})}\]
for any \(\bar{l}\in(0,\infty]\). The result then follows from Lemma A.1.
Proof.: (Lemma 1.3) For the second estimate, with \(v=e_{\alpha,1}-e_{1,1}\) as in the proof of Lemma 1.2, we have to bound \(\partial_{y}v=-\hat{\lambda}(e_{\alpha,\alpha}-e_{1,1})\), where
\[\mathcal{L}(e_{\alpha,\alpha}-e_{1,1})(\xi)=\frac{1}{\hat{\lambda}+\xi^{ \alpha}}-\frac{1}{\hat{\lambda}+\xi}=\frac{\xi-\xi^{\alpha}}{(\hat{\lambda}+ \xi^{\alpha})(\hat{\lambda}+\xi)}=\frac{\xi^{1-\gamma}}{\hat{\lambda}+\xi} \,\frac{\xi^{\alpha-\beta}}{\hat{\lambda}+\xi^{\alpha}}\,(\xi^{\beta+\gamma- \alpha}-\xi^{\beta+\gamma-1})\]
for \(\beta,\gamma>0\) with \(\beta+\gamma<\alpha\) yet to be chosen. In view of (59) we thus have
\[e_{\alpha,\alpha}-e_{1,1}=e_{1,\gamma}*e_{\alpha,\beta}*(\mathfrak{g}_{\alpha -\beta-\gamma}-\mathfrak{g}_{1-\beta-\gamma})=:e_{1,\gamma}*e_{\alpha,\beta}* \underline{d\mathfrak{g}}\]
Now, applying the elementary estimate
\[\|a*b\|_{L^{\infty}(\ell_{2},\ell_{3})} =\sup_{y\in(\ell_{2},\ell_{3})}\left|\int_{0}^{y}a(y-z)b(z)\,dz\right|\] \[=\sup_{y\in(\ell_{2},\ell_{3})}\left|\int_{0}^{y-\ell_{1}}a(y-z)b( z)\,dz+\int_{y-\ell_{1}}^{y}a(y-z)b(z)\,dz\right|\] \[\leq \|a\|_{L^{\infty}(\ell_{1},\ell_{3})}\|b\|_{L^{1}(0,\ell_{3}-\ell _{1})}+\|b\|_{L^{\infty}(\ell_{2}-\ell_{1},\ell_{3})}\|a\|_{L^{1}(0,\ell_{1})}\]
for \(0<\ell_{1}<\ell_{2}<\ell_{3}\), \(a,b\in L^{1}(0,\ell_{3})\), \(a|_{(\ell_{1},\ell_{3})}\in L^{\infty}(\ell_{1},\ell_{3})\), \(b|_{(\ell_{2}-\ell_{1},\ell_{3})}\in L^{\infty}(\ell_{2}-\ell_{1},\ell_{3})\), twice, namely with \(a=e_{1,\gamma}\), \(b=e_{\alpha,\beta}\), \(\ell_{1}=\underline{l}/3\), \(\ell_{2}=\underline{l}\), \(\ell_{3}=\bar{l}\) and with \(a=e_{\alpha,\beta}\), \(b=\underline{d\mathfrak{g}}\), \(\ell_{1}=\underline{l}/3\), \(\ell_{2}=2\underline{l}/3\), \(\ell_{3}=\bar{l}\), along with Young's Convolution Inequality we obtain
\[\|e_{1,\gamma}* (e_{\alpha,\beta}*d\underline{\mathfrak{g}})\|_{L^{\infty}( \underline{l},\bar{l})}\] \[\leq\|e_{1,\gamma}\|_{L^{\infty}(\underline{l}/3,\bar{l})}\|e_{ \alpha,\beta}*\underline{d\mathfrak{g}}\|_{L^{1}(0,\bar{l})}+\|e_{\alpha,\beta }*\underline{d\mathfrak{g}}\|_{L^{\infty}(2\underline{l}/3,\bar{l})}\|e_{1, \gamma}\|_{L^{1}(0,\bar{l})}\] \[\leq\|e_{1,\gamma}\|_{L^{\infty}(\underline{l}/3,\bar{l})}\|e_{ \alpha,\beta}\|_{L^{1}(0,\bar{l})}\|\underline{d\mathfrak{g}}\|_{L^{1}(0,\bar {l})}\] \[\quad+\Big{(}\|e_{\alpha,\beta}\|_{L^{\infty}(\underline{l}/3,\bar{ l})}\|\underline{d\mathfrak{g}}\|_{L^{1}(0,\bar{l}-\underline{l}/3)}+\| \underline{d\mathfrak{g}}\|_{L^{\infty}(\underline{l}/3,\bar{l})}\|e_{\alpha, \beta}\|_{L^{1}(0,\underline{l}/3)}\Big{)}\|e_{1,\gamma}\|_{L^{1}(0,\bar{l})}\]
Using this with \(\beta=\gamma=\alpha/3\) and (cf. (64))
\[\|\underline{d\mathfrak{g}}\|_{L^{1}(0,\bar{l})} \leq\sup_{\tilde{\alpha}\in(\alpha,1)}\|\tilde{\theta}(\cdot, \tilde{\alpha},\alpha/3)\|_{L^{1}(0,\bar{l})}\,(1-\alpha),\] \[\|\underline{d\mathfrak{g}}\|_{L^{\infty}(\underline{l}/3,\bar{l})} \leq\sup_{\tilde{\alpha}\in(\alpha,1)}\|\tilde{\theta}(\cdot,\tilde{ \alpha},\alpha/3)\|_{L^{\infty}(\underline{l}/3,\bar{l})}\,(1-\alpha),\]
we arrive at the second estimate in (20) with \(\hat{C}(\alpha_{0},\underline{l},\bar{l})=\sup_{\alpha\in(\alpha_{0},1)}\tilde {C}(\alpha,\underline{l},\bar{l})\),
\[\tilde{C}(\alpha,\underline{l},\bar{l})= \,\|e_{1,\alpha/3}\|_{L^{\infty}(\underline{l}/3,\bar{l})}\|e_{ \alpha,\alpha/3}\|_{L^{1}(0,\bar{l})}\sup_{\tilde{\alpha}\in(\alpha,1)}\| \tilde{\theta}(\cdot,\tilde{\alpha},\alpha/3)\|_{L^{1}(0,\bar{l})}\] \[+\|e_{1,\alpha/3}\|_{L^{1}(0,\bar{l})}\|e_{\alpha,\alpha/3}\|_{L^{ \infty}(\underline{l}/3,\bar{l})}\sup_{\tilde{\alpha}\in(\alpha,1)}\|\tilde{ \theta}(\cdot,\tilde{\alpha},\alpha/3)\|_{L^{1}(0,\bar{l})}\] \[+\|e_{1,\alpha/3}\|_{L^{1}(0,\bar{l})}\|e_{\alpha,\alpha/3}\|_{L^{ 1}(0,\underline{l}/3)}\sup_{\tilde{\alpha}\in(\alpha,1)}\|\tilde{\theta}( \cdot,\tilde{\alpha},\alpha/3)\|_{L^{\infty}(\underline{l}/3,\bar{l})}\]
The first estimate can be shown analogously.
## Appendix: Cauchy 2
In the impedance case, using the PDE, the right hand side of (29) can be written as
\[\begin{split} G(u,\ell)\underline{d\ell}=& \underline{d\ell^{\prime}}(x)\Big{(}u_{x}(x,\ell(x))-\tfrac{\ell^{\prime}(x)}{ \sqrt{1+\ell^{\prime}(x)^{2}}}\gamma(x)u(x,\ell(x))\Big{)}\\ &+\underline{d\ell}(x)\Big{(}\tfrac{d}{dx}\Big{[}u_{x}(x,\ell(x) )\Big{]}-\sqrt{1+\ell^{\prime}(x)^{2}}\gamma(x)u_{y}(x,\ell(x))\Big{)}\\ =&\tfrac{d}{dx}\Big{[}\underline{d\ell}(x)\Big{(}u_ {x}(x,\ell(x))-\tfrac{\ell^{\prime}(x)}{\sqrt{1+\ell^{\prime}(x)^{2}}}\gamma(x )u(x,\ell(x))\Big{)}\Big{]}\\ &+\underline{d\ell}(x)\Big{(}-\sqrt{1+\ell^{\prime}(x)^{2}} \gamma(x)u_{y}(x,\ell(x))+\tfrac{d}{dx}\Big{[}\tfrac{\ell^{\prime}(x)}{\sqrt{1 +\ell^{\prime}(x)^{2}}}\gamma(x)u(x,\ell(x))\Big{]}\Big{)}\\ =&\tfrac{d}{dx}\phi(x)-a(x)\phi(x)\ =\tfrac{d}{dx}[ \underline{d\ell}(x)\mathfrak{a}[\ell,u](x)]-\underline{d\ell}(x)\mathfrak{b} [\ell,u](x)\end{split}\]
Here we used the impedance conditions on \(u\), which yield
\[\begin{split}& u_{x}(x,\ell(x))-\tfrac{\ell^{\prime}(x)}{\sqrt{1+\ell^{\prime}(x)^{2}}}\gamma(x)u(x,\ell(x))\\ &=u_{x}(x,\ell(x))+\tfrac{\ell^{\prime}(x)}{1+\ell^{\prime}(x)^{2}}\big(u_{y}(x,\ell(x))-\ell^{\prime}(x)u_{x}(x,\ell(x))\big)\\ &=\tfrac{1}{1+\ell^{\prime}(x)^{2}}\big(u_{x}(x,\ell(x))+\ell^{\prime}(x)u_{y}(x,\ell(x))\big)=\tfrac{\partial_{\ell}u(x,\ell(x))}{1+\ell^{\prime}(x)^{2}}=\tfrac{1}{1+\ell^{\prime}(x)^{2}}\tfrac{d}{dx}u(x,\ell(x))\\ &=\begin{cases}\tfrac{d}{dx}u(x,\ell(x))=u_{x}(x,\ell(x))&\text{if }\ell^{\prime}(x)=0\\ \tfrac{1}{\ell^{\prime}(x)}\Big(\tfrac{\gamma(x)}{\sqrt{1+\ell^{\prime}(x)^{2}}}u(x,\ell(x))+u_{y}(x,\ell(x))\Big)&\text{else}\end{cases}\qquad=:\mathfrak{a}[\ell,u](x)\end{split} \tag{67}\]
In our implementation we use the last expression that is based on
\[u_{x}(x,\ell(x))=\frac{1}{\ell^{\prime}(x)}\Big{(}\sqrt{1+\ell^{\prime}(x)^{2} }\gamma(x)u(x,\ell(x))+u_{y}(x,\ell(x)))\Big{)}\quad\text{ if }\ell^{\prime}(x)\neq 0,\]
since \(u_{x}\) is difficult to evaluate numerically unless the boundary is flat (case \(\ell^{\prime}(x)=0\)). Moreover,
\[\begin{split}&\Big{(}-\sqrt{1+\ell^{\prime}(x)^{2}}\gamma(x)u _{y}(x,\ell(x))+\tfrac{d}{dx}\Big{[}\tfrac{\ell^{\prime}(x)}{\sqrt{1+\ell^{ \prime}(x)^{2}}}\gamma(x)u(x,\ell(x))\Big{]}\Big{)}\\ &=\Big{\{}\tfrac{\ell^{\prime\prime}(x)}{\sqrt{1+\ell^{\prime}(x )^{2}}}\gamma(x)+\tfrac{\ell^{\prime}(x)}{\sqrt{1+\ell^{\prime}(x)^{2}}}\gamma ^{\prime}(x)+\gamma(x)^{2}\Big{\}}u(x,\ell(x))\qquad=:\mathfrak{b}[\ell,u](x) \end{split} \tag{68}\]
we have
\[\phi(x)=\underline{d\ell}(x)\,\mathfrak{a}[\ell,u](x)\qquad a(x)=\tfrac{ \mathfrak{b}[\ell,u]}{\mathfrak{a}[\ell,u]}(x) \tag{69}\]
with \(\mathfrak{a}\), \(\mathfrak{b}\) as defined in (67), (68). Thus the Newton step in the impedance case reads as
\[(I)\ \ell^{(k+1)}(x)=\ell^{(k)}(x)-\tfrac{1}{\mathfrak{a}[\ell^{(k )},u^{(k)}](x)}\Big{\{}\exp\Bigl{(}-\int_{0}^{x}\tfrac{\mathfrak{b}[\ell^{(k)},u^{(k)}]}{\mathfrak{a}[\ell^{(k)},u^{(k)}]}(s)\,ds\Bigr{)}\underline{d\ell}(0 )\mathfrak{a}[\ell^{(k)},u^{(k)}](0)\\ +\int_{0}^{x}b(s)\exp\Bigl{(}-\int_{s}^{x}\tfrac{\mathfrak{b}[ \ell^{(k)},u^{(k)}]}{\mathfrak{a}[\ell^{(k)},u^{(k)}]}(t)\,dt\Bigr{)}\,ds \Big{\}}\\ \text{where }b(x)=\partial_{\bar{\nu}}\bar{z}(x,\ell^{(k)}(x))+ \sqrt{1+\ell^{\prime}(x)^{2}}\gamma(x)\bar{z}(x,\ell^{(k)}(x)).\]
In particular, with Neumann conditions on the lateral boundary \(B=\pm\partial_{x}\) under the compatibility condition \(\ell^{\prime}(0)=0\) we have \(\mathfrak{a}[\ell,u](0)=\partial_{\tilde{\tau}}u(0,\ell(0))=u_{x}(0,\ell(0))=0\) and therefore
\[(I)\ \ell^{(k+1)}(x)=\ell^{(k)}(x)-\tfrac{1}{\mathfrak{a}[\ell^{(k)},u^{(k)}](x) }\int_{0}^{x}b(s)\exp\Bigl{(}-\int_{s}^{x}\tfrac{\mathfrak{b}[\ell^{(k)},u^{(k )}]}{\mathfrak{a}[\ell^{(k)},u^{(k)}]}(t)\,dt\Bigr{)}\,ds=\ell^{(k)}(x)- \underline{d\ell}(x),\]
where the value at the left-hand boundary point can be computed by means of l'Hospital's rule as (skipping the argument \([\ell^{(k)},u^{(k)}]\) for better readability)
\[\lim_{x\to 0}\underline{d\ell}(x)=\lim_{x\to 0}\frac{\phi(x)}{ \mathfrak{a}(x)}=\lim_{x\to 0}\frac{\phi^{\prime}(x)}{\mathfrak{a}^{\prime}(x)}= \lim_{x\to 0}\frac{b(x)-\tfrac{\mathfrak{b}}{\mathfrak{a}}(x)\phi(x)}{ \mathfrak{a}^{\prime}(x)}=\lim_{x\to 0}\frac{b(x)-\mathfrak{b}(x) \underline{d\ell}(x)}{\mathfrak{a}^{\prime}(x)}\]
hence
\[\lim_{x\to 0}\underline{d\ell}(x)=\lim_{x\to 0}\frac{1}{1+\tfrac{\mathfrak{b}(x)}{\mathfrak{a}^{\prime}(x)}}\ \frac{b(x)}{\mathfrak{a}^{\prime}(x)}=\lim_{x\to 0}\frac{b(x)}{\mathfrak{a}^{\prime}(x)+\mathfrak{b}(x)}=\frac{\bar{z}_{y}(0,\ell^{(k)}(0))+\gamma(0)\bar{z}(0,\ell^{(k)}(0))}{u_{xx}^{(k)}(0,\ell^{(k)}(0))+\big((\ell^{(k)})^{\prime\prime}(0)\gamma(0)+\gamma(0)^{2}\big)u^{(k)}(0,\ell^{(k)}(0))}.\]
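For a concrete picture of how the update above can be evaluated numerically, the following is a minimal quadrature sketch (Python/NumPy), not the actual implementation: the grid samples of \(\mathfrak{a}[\ell^{(k)},u^{(k)}]\), \(\mathfrak{b}[\ell^{(k)},u^{(k)}]\) and of the residual \(b\), as well as the boundary value \(\underline{d\ell}(0)\mathfrak{a}[\ell^{(k)},u^{(k)}](0)\), are assumed to be precomputed elsewhere, and all names are illustrative. It covers the generic impedance case where \(\mathfrak{a}\) does not vanish on the grid; in the Neumann-compatible case the value at \(x=0\) has to be replaced by the limit computed above.

```python
import numpy as np

def cumtrapz0(f, x):
    """Cumulative trapezoidal integral F(x_i) = int_0^{x_i} f(s) ds, with F(x_0) = 0."""
    F = np.zeros_like(f)
    F[1:] = np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(x))
    return F

def newton_update(x, a_frak, b_frak, b_res, phi0=0.0):
    """Evaluate dl(x) of step (I) on the grid x, so that l_new = l_old - dl.
    a_frak, b_frak: samples of a[l,u](x), b[l,u](x); b_res: samples of b(x);
    phi0 = dl(0) * a[l,u](0) (zero under the compatibility condition)."""
    A = cumtrapz0(b_frak / a_frak, x)          # A(x) = int_0^x (b/a)(t) dt
    # int_0^x b(s) exp(-(A(x)-A(s))) ds = exp(-A(x)) * int_0^x b(s) exp(A(s)) ds
    shift = A.max()                            # rescale the exponent to avoid overflow
    inner = cumtrapz0(b_res * np.exp(A - shift), x)
    phi = np.exp(shift - A) * inner + np.exp(-A) * phi0
    return phi / a_frak                        # dl = phi / a[l,u], cf. (69)
```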
## Acknowledgement
The work of the first author was supported by the Austrian Science Fund through grant P36318; the second author was supported in part by the National Science Foundation through award DMS-2111020.
|
2309.08551 | Augmenting conformers with structured state-space sequence models for
online speech recognition | Online speech recognition, where the model only accesses context to the left,
is an important and challenging use case for ASR systems. In this work, we
investigate augmenting neural encoders for online ASR by incorporating
structured state-space sequence models (S4), a family of models that provide a
parameter-efficient way of accessing arbitrarily long left context. We
performed systematic ablation studies to compare variants of S4 models and
propose two novel approaches that combine them with convolutions. We found that
the most effective design is to stack a small S4 using real-valued recurrent
weights with a local convolution, allowing them to work complementarily. Our
best model achieves WERs of 4.01%/8.53% on test sets from Librispeech,
outperforming Conformers with extensively tuned convolution. | Haozhe Shan, Albert Gu, Zhong Meng, Weiran Wang, Krzysztof Choromanski, Tara Sainath | 2023-09-15T17:14:17Z | http://arxiv.org/abs/2309.08551v2 | # Augmenting Conformers with Structured State Space Models for Online Speech Recognition
###### Abstract
Online speech recognition, where the model only accesses context to the left, is an important and challenging use case for ASR systems. In this work, we investigate augmenting neural encoders for online ASR by incorporating structured state-space sequence models (S4), which are a family of models that provide a parameter-efficient way of accessing arbitrarily long left context. We perform systematic ablation studies to compare variants of S4 models and propose two novel approaches that combine them with convolutions. We find that the most effective design is to stack a small S4 using real-valued recurrent weights with a local convolution, allowing them to work complementarily. Our best model achieves WERs of 4.01%/8.53% on test sets from Librispeech, outperforming Conformers with extensively tuned convolution.
Haozhe Shan Albert Gu Zhong Meng Weiran Wang Krzysztof Choromanski Tara Sainath Google LLC.
{hzshan,gua,zhongmeng,weiranwang,kchoro,tsainath}@google.com

**Index Terms**: Online ASR, causal model, state-space model, conformer
## 1 Introduction
The search for neural architectures for encoding audio features in automatic speech recognition (ASR) has been an important and ongoing research direction. The commonly used building blocks are DNNs [1], recurrent networks such as LSTMs [2], convolutional neural networks (CNNs) [3], and multi-head self-attention (MHSA) [4]. These components have different inductive biases for learning and characteristics for optimization. The recent trend is to leverage the capabilities of different components and carefully combine them. For example, [5] and [6] proposed combining the global context as modeled by MHSA, and local context as modeled by depthwise convolution (simply "convolution" hereafter), to build state-of-the-art encoders for ASR, i.e., the conformers.
Recently, structured state-space sequence models (S4) [7] emerged as a promising paradigm for sequence modeling. These models can be interpreted as a combination of RNNs, CNNs, and classical state-space models (SSMs) [8]. For simplicity, in this work we use "SSMs" as synonymous with this family of S4 models [7, 9, 10, 11, 12] as opposed to classical state-space models such as the Kalman filter and Hidden Markov Models (HMMs). As sequence models for deep learning, SSMs have been shown to have advantages such as being able to capture long-term dependencies and enjoy time complexity sub-quadratic in sequence length. Naturally, this raises the question of whether they can be used to augment ASR encoders. There have been several studies in this direction. [13] proposed an attention-free multi-head state-space (MH-SSM) architecture that achieves competitive performance on offline ASR without MHSA. Motivated by the CNN view of SSMs, [14] introduced the DSSformer for offline ASR, where they replaced convolution components in the conformer encoder with SSMs. Besides these encoder-focused studies, [15] used SSMs in the attention-based _decoder_, where the per-layer MHSA is replaced with SSMs while the cross-attention remains unchanged. All such work focused on offline ASR where encoders have full context.
**Our contributions** In this paper, we investigate augmenting ASR encoders with SSMs in both the offline setting, similar to [13] and [14], and the online setting, which was not covered by previous studies. We systematically compared the approach from [13], where the SSM is used as a drop-in replacement (DIR) for convolution in conformers, with two novel approaches: one where the SSM is stacked with a local convolution component (COM); another where the SSM is used to reparameterize the finite-size kernel within the convolution component (REP). In all three approaches, we present careful ablation studies on various SSM design choices. We found that in online ASR, the DIR approach produces word-error rates (WERs) on par with, but no better than, those from conformers. On the other hand, both novel approaches that we propose produce small but consistent WER reductions. In both online and offline settings, we found that COM is the most effective approach. In addition, the success of the REP approach suggests that SSMs can be effective even when they are forced to model local context, pointing to the need for further theoretical understanding of their capabilities.
## 2 State Space Models
In this work, we exclusively considered the S4D variants of SSMs [10], which have been shown to be simpler to implement and cheaper to compute while being as effective as older variants. We refer the readers to [10] for further details and
summarize the structure here. Each SSM has a dimension \(N\), which can be viewed as the hidden-state size. It contains 4 sets of trainable parameters (using \(H\) to denote the input dimension): \(N\times N\) weights \(\mathbf{A}\), \(N\times H\) weights \(\mathbf{B}\), \(H\times N\) weights \(\mathbf{C}\), and residual weights \(\mathbf{D}\in\mathbb{R}^{H\times H}\). Following [10], \(\mathbf{B}\) is assumed to be uniformly \(1\) and untrainable. For mapping a discrete-time input \(\{\mathbf{u}_{k}\in\mathbb{R}^{H}\}_{k=1,...,T}\) to a discrete-time output \(\{\mathbf{y}_{k}\in\mathbb{R}^{H}\}_{k=1,...,T}\) with a timestep \(\Delta\), this model is discretized via the zero-order hold method [9] as
\[\mathbf{x}_{k}=\bar{\mathbf{A}}\mathbf{x}_{k-1}+\bar{\mathbf{B}}\mathbf{u}_{k}\qquad \mathbf{y}_{k}=\mathbf{C}\mathbf{x}_{k}+\mathbf{D}\mathbf{u}_{k}\] \[\bar{\mathbf{A}}=e^{\mathbf{A}\Delta}\qquad\bar{\mathbf{B}}=(\bar{\mathbf{A}}- \mathbb{I})\mathbf{A}^{-1}\mathbf{B}. \tag{1}\]
These expressions reveal that computationally, SSMs can be seen as specially parameterized linear recurrent neural networks (RNN) with recurrent weights \(\bar{\mathbf{A}}\in\mathbb{C}^{N\times N}\), input weights \(\bar{\mathbf{B}}\in\mathbb{C}^{N\times H}\), readout weights \(\bar{\mathbf{C}}\in\mathbb{C}^{H\times N}\), and residual weights \(\mathbf{D}\in\mathbb{R}^{H\times H}\). This allows their efficient computation via scan algorithms [16]. Via the well-known duality between linear RNNs and time convolution, their output \(\{\mathbf{y}_{k}\}\) (before adding the residual part) is equivalently the result of (causally) convolving the input sequence with a size \(T\times H\) kernel, given by
\[\bar{\mathbf{K}}=[\mathbf{C}\bar{\mathbf{B}},\mathbf{C}\bar{\mathbf{A}}\bar{\mathbf{B}},...,\mathbf{C}\bar {\mathbf{A}}^{T-1}\bar{\mathbf{B}}]. \tag{2}\]
Note that \(\bar{\mathbf{K}}\) can be computed for input sequences of arbitrary length using a fixed number of parameters.
Recent interest in applying SSMs to sequence modeling arises from a series of works that proposed effective ways to parameterize and initialize such models [7, 9, 10]. In particular, [10] suggested restricting \(\mathbf{A}\) to the space of _diagonal_ matrices. In addition, to ensure that powers of \(\bar{\mathbf{A}}\) do not diverge, it is sufficient to ensure that \(\mathbf{A}\) has eigenvalues with negative real parts. We considered two parameterization-initialization schemes from [10]. In the **S4D-Lin** scheme, \(\mathbf{A}\) is a complex diagonal matrix of size \(N\times N\) (it thus contains \(2N\) trainable parameters, composed of the real and imaginary components of the diagonal entries). The \(n\)th diagonal entry, \(n=0,\ldots,N-1\), is initialized at \(A_{n}^{lin}=-\frac{1}{2}+i\pi n\). Both real and imaginary parts are trainable in this scheme. Real parts are ensured negative by reparameterizing them with a non-positive function (e.g., \(Re(A_{n})=-\exp(x_{n})\), where \(x_{n}\) is trained). In the **S4D-Real** scheme, \(\mathbf{A}\) is a real diagonal matrix (with \(N\) trainable parameters) with the \(n\)th diagonal entry initialized at \(A_{n}^{real}=-n-1\) and again ensured negative via reparameterization.
We utilize SSMs as components of a sequence-to-sequence encoder. During training and inference, an input of size \(B\times T\times H\) is passed to the SSM, where \(B\) is the batch size. We treat the feature dimension as \(H\) one-dimensional time series, each of which passes through an SSM. Each of the \(H\) SSMs is parameterized by the 3-tuple \((\text{diag}(\mathbf{A}^{(h)}),\mathbf{B}^{(h)},\mathbf{C}^{(h)})_{h=1,...,H}\), and we assume \(\mathbf{A}^{(h)}\) to be tied across \(h=1,...,H\) (while \(B^{(h)}\), \(C^{(h)}\) are not). For concreteness, for encoder dimension \(H=512\) and \(N=4\), our SSMs have around 4000 parameters.
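For concreteness, the following NumPy sketch materializes the discretization of Eq. (1), the two initializations, and the kernel of Eq. (2) for a diagonal SSM with tied \(\mathbf{A}\), \(\mathbf{B}=\mathbf{1}\) and per-channel \(\mathbf{C}\), and applies it as a causal convolution. The residual weights are taken diagonal here for brevity; the code only illustrates the equations above and is not the training implementation.

```python
import numpy as np

def s4d_real_init(N):
    return -np.arange(1, N + 1, dtype=np.float64)        # A_n = -(n+1)

def s4d_lin_init(N):
    return -0.5 + 1j * np.pi * np.arange(N)              # A_n = -1/2 + i*pi*n

def s4d_kernel(A, C, delta, T):
    """Length-T kernel of Eq. (2): K[t, h] = sum_n C[h, n] * Abar_n**t * Bbar_n,
    with the zero-order-hold discretization of Eq. (1) and B = 1."""
    Abar = np.exp(A * delta)                              # diagonal exp(A * Delta)
    Bbar = (Abar - 1.0) / A                               # (Abar - I) A^{-1} B, elementwise
    powers = Abar[None, :] ** np.arange(T)[:, None]       # (T, N): Abar_n ** t
    return np.real((powers * Bbar[None, :]) @ C.T)        # (T, H), real part as output

def causal_conv(u, K, D=None):
    """y[t] = sum_{s<=t} K[t-s] u[s] (+ D u[t]), channel-wise. u: (T, H), K: (L, H)."""
    T, H = u.shape
    y = np.stack([np.convolve(u[:, h], K[:, h])[:T] for h in range(H)], axis=1)
    return y if D is None else y + u * D                  # D: (H,) diagonal residual

# shape check: H = 512 channels, N = 4, one utterance of T = 100 frames
H, N, T = 512, 4, 100
A, C = s4d_real_init(N), np.random.randn(H, N) / np.sqrt(N)
y = causal_conv(np.random.randn(T, H), s4d_kernel(A, C, delta=1.0, T=T))
```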
## 3 Augmenting ASR Encoders with SSMs
The conformer [5] is the current state-of-the-art encoder architecture for ASR. The baseline conformer encoder that we studied uses the hyperparameter settings of the Conformer (L) encoder from [5]. Each conformer layer consists of an MHSA module that is efficient at capturing long-term dependency, and a convolution module that is good at capturing local dependency. Previous research on augmenting offline ASR encoders with SSMs [13] indicates that MHSA modules are indispensable for state-of-the-art performance. Therefore, we do not modify the MHSA module and restrict our analysis to augmenting the convolution module with SSMs. Concretely, we considered three approaches (Fig. 1):
* _Drop-in Replacement_ (DIR). Here, the convolution component inside the module is replaced by an SSM. As the SSM has unlimited left context, this endows the modified module with unlimited left context. We note that this is the approach taken in [9] for offline ASR, albeit with older SSM variants.
* _Combination_ (COM). Recent work utilizing SSMs for sequence modeling has suggested a potential advantage of combining the SSM with local (small kernel size) convolution [17, 18]. We therefore propose stacking them inside the module. This approach similarly endows the module with unlimited left context.
* _Reparameterization_ (REP). As discussed above, the SSM can be viewed as a way to parameterize a convolution kernel, \(\tilde{\mathbf{K}}\), that can match the length of arbitrary inputs. This perspective suggests that, similar to how conformers perform better when their convolution kernel size is tuned to some finite value, performance of the SSM can be similarly improved by truncating \(\tilde{\mathbf{K}}\) along the time dimension. This essentially means that we are using conformers, but the (finite-size) convolution kernel is reparameterized by the SSM. Unlike the previous two approaches, this does not increase the left context size of the module compared to conformers.

Figure 1: Architecture of the augmented convolution module. GL=gated linear; LN=layer normalization; conv=convolution; BN=batch normalization. Inside the dashed box are the three approaches of using SSMs that we studied. DIR=drop-in replacement, COM=combination, REP=reparameterization. Following conventions from [5], the entire block shown here is referred to as the convolution _module_, while we study ways to modify the convolution _component_ within it.
All approaches only modify the convolution component inside the convolution module. All other settings are the same as those used in the baseline conformer. The same modification is applied to all layers in the network. The resultant encoder models are collectively referred to as SSMformers.
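Reusing the `s4d_kernel` and `causal_conv` helpers from the sketch in Section 2 (and a hypothetical small depthwise kernel `W` of shape `(k, H)`), the three wirings above can be summarized as follows; gating, normalization and activations from Fig. 1 are omitted, so this is only an illustration of the data flow.

```python
import numpy as np

def depthwise_conv(u, W):
    T, H = u.shape
    return np.stack([np.convolve(u[:, h], W[:, h])[:T] for h in range(H)], axis=1)

def conv_component(u, approach, A, C, delta, W=None, L=None):
    """u: (T, H) input to the convolution component of one conformer layer."""
    T, _ = u.shape
    if approach == "DIR":   # SSM replaces the convolution (unlimited left context)
        return causal_conv(u, s4d_kernel(A, C, delta, T))
    if approach == "COM":   # small pre-SSM depthwise convolution stacked with the SSM
        return causal_conv(depthwise_conv(u, W), s4d_kernel(A, C, delta, T))
    if approach == "REP":   # finite kernel of size L reparameterized by the SSM, Eq. (3)
        return depthwise_conv(u, s4d_kernel(A, C, delta, L))
    raise ValueError(approach)
```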
## 4 Experiments
To assess and compare the quality of the various SSM-augmented conformers, we perform both offline and online ASR on the Librispeech dataset [19]. We use the RNN-Transducer model [20] equipped with a single-layer LSTM decoder for all experiments; label-sync and frame-sync beam search with beam size \(8\) are used for decoding in the offline and online settings, respectively. We compute 80-channel filterbank features from input audio and apply two layers of 2D-convolution sub-sampling [21]. The resulting audio features, of 40ms frame rate, are fed to the encoder. Models are trained on all 970 hours of the labeled training set. SpecAugment [22] and variational noise [23] are used to control overfitting. For the baseline conformer, we use the recommended configuration with around 119M parameters from [5] (17 layers, 8 attention heads, convolution kernel size 32, encoder dimension 512, and relative positional embedding [24]). In the offline scenario, the attention module has unlimited left and right context. In the online scenario, the attention module only accesses unlimited left context; right context was also removed from the wav2vec convolution and all convolution modules (so the effective filter size becomes 16). Hyperparameter tuning and early stopping used the average WER on dev clean and dev other.
### Offline ASR with SSMformers
Although we focus our analysis on the online scenario, we first test our method in the offline setting. Results with DIR and COM with different SSM dimensions are reported in Table 1. We observe that an SSMformer using the COM approach achieves state-of-the-art performance for offline ASR on Librispeech (1.96%/4.33% on test clean/other, 119M params, Table 1), on par with a larger, recently proposed SSM-augmented encoder model (1.91%/4.36%, 140M params) [13]. Our conformer implementation is also on par with published results [5] (ours: 2.12%/4.63%; reported: 2.1%/4.3%).
### Online ASR
We now focus on the online setting, which is a difficult yet important use case not examined by previous research on encoders with SSMs. We provide results from exhaustive ablation studies on SSM settings for each of the three approaches. All models have \(119\pm 1M\) parameters. Since our baseline conformer has parameters tuned for offline but not online ASR, we performed a sweep of its convolution kernel size (from 2 to 64) and used the best setting (4) for comparison with SSMformers. All results are summarized in Table 6.
#### 4.2.1 Drop-in replacement (DIR)
Results from ablation studies using the DIR approach are shown in Table 2. On test sets, the most performant SSMformer achieves \(4.18\%/8.77\%\), on par with but no better than our tuned conformer (\(4.15\%/8.70\%\)). The most effective SSM to use has the smallest \(N\) that we tested. This is somewhat surprising, as previous work on SSMs for ASR [9] used much larger SSM dimension (64), which was in part motivated by theoretical results showing that SSMs with diagonal weights are equivalent to those with non-diagonal weights at infinite dimension (further discussion in [9]). Our results indicate that for online ASR, a small-\(N\) SSM is the most effective, despite having fewer parameters.

| Approach | \(N\) | dev clean | dev other | test clean | test other | test avg. |
|---|---|---|---|---|---|---|
| Conformer | NA | 1.84 | 4.49 | 2.12 | 4.63 | 3.38 |
| DIR | 2 | 1.87 | 4.43 | 2.08 | 4.61 | 3.35 |
| DIR | 4 | 1.89 | 4.54 | 2.05 | 4.63 | 3.34 |
| COM | 2 | 1.83 | **4.33** | 2.00 | 4.45 | 3.23 |
| COM | 4 | 1.84 | **4.33** | 2.02 | 4.39 | 3.21 |
| COM | 32 | **1.78** | 4.36 | **1.96** | **4.33** | **3.15** |

Table 1: WER (%) of offline ASR using encoders augmented with DIR/COM approaches. All SSMformers used SSMs with the S4D-Real scheme.

| Scheme | \(N\) | dev clean | dev other | test clean | test other | test avg. |
|---|---|---|---|---|---|---|
| S4D-Real | 2 | **3.76** | **9.21** | **4.18** | **8.77** | **6.48** |
| S4D-Real | 4 | 3.97 | 9.44 | 4.38 | 9.08 | 6.73 |
| S4D-Real | 8 | 3.99 | 9.58 | 4.35 | 9.14 | 6.75 |
| S4D-Lin | 2 | 3.90 | 9.53 | 4.40 | 9.23 | 6.82 |
| S4D-Lin | 4 | 3.96 | 9.47 | 4.25 | 9.06 | 6.66 |
| S4D-Lin | 8 | 3.96 | 9.37 | 4.29 | 8.97 | 6.63 |

Table 2: WER (%) of online ASR using the drop-in replacement (DIR) approach.
#### 4.2.2 Combination (COM)
We next test stacking the SSM and convolution to see whether they can act complementarily. We first sweep through different convolution sizes (Table 3). Adding the convolution resulted in a substantial WER reduction (relative 4-7%) compared to having the SSM alone (DIR), improving test set WER from 4.38%/9.08% to 4.16%/8.67%. However, our experiments suggest that for the combination to be effective, the kernel size of the convolution needs to be substantially reduced. With a relatively large kernel size (16), the resultant model in fact performs worse (the SSMformer built with COM gives 4.85%/9.91%; the conformer using this size gives 4.46%/9.46%).
We next test different SSM settings in the COM approach. In these experiments, we fixed the convolution kernel size to 2 and varied SSM parameterization and dimension (Table 4). We found that, as in the DIR approach, the most performant SSM here is S4D-Real with a small SSM dimension.
#### 4.2.3 Reparameterization (REP)
We now consider the final approach, where the SSM is used to reparameterize a finite-size convolution kernel. In all experiments, we use SSMs with S4D-Real and dimension 4. To parameterize a convolution kernel of size \(L\) using an SSM, we compute the kernel as
\[\tilde{\mathbf{K}}(L)=[\mathbf{C}\bar{\mathbf{B}},\mathbf{C}\bar{\mathbf{A}}\bar{\mathbf{B}},...,\mathbf{ C}\bar{\mathbf{A}}^{L-1}\bar{\mathbf{B}}]. \tag{3}\]
We also note that, while in principle one could stack one convolution component using an SSM-parameterized kernel \(\tilde{\mathbf{K}}(L)\) and another one using a conventional kernel, we found that such stacking resulted in performance on par with or worse than that of COM (results not presented here). As shown in Table 5, the best REP model achieves \(4.09\%/8.66\%\) on test sets, slightly worse than the best COM model but also better than the tuned conformer (\(4.15\%/8.70\%\)), despite having the same kernel size and left context. Compared to DIR or COM, the advantage of this approach is that it does not modify the forward pass of the encoder at test time. The model simply needs to compute and cache the finite-size kernel \(\tilde{\mathbf{K}}(L)\) and perform convolution with it as a vanilla conformer.
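Sketching this last point with the helpers introduced above: at inference time the truncated kernel is materialized once, after which the forward pass is just a finite-size causal convolution, exactly as in a vanilla conformer.

```python
K_cached = s4d_kernel(A, C, delta, L)   # Eq. (3), computed and cached once

def rep_forward(u):
    return depthwise_conv(u, K_cached)  # plain depthwise causal convolution
```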
### Summary
In Table 6, we summarize WERs from the most performant SSMformer using each of the approaches. The conformer shown here uses the best convolution kernel size we found (size 4). Compared to this carefully tuned conformer, COM and REP still manage small improvement on test sets, achieving WERs of \(4.01\%/8.53\%\) and \(4.09\%/8.66\%\), respectively. Our key empirical takeaways are
* Compared to conformers with tuned convolution, SSMformers achieve a small but consistent WER reduction in online ASR.
* In all settings, S4D-Real is more effective than S4D-Lin.
* In online ASR, SSMs with very small \(N\) (2-4) have the best performance. Larger \(N\) is more effective for offline ASR.
* How SSMs are incorporated into the encoder matters. We found COM \(>\) REP \(>\) DIR.

| Scheme | \(N\) | dev clean | dev other | test clean | test other | test avg. |
|---|---|---|---|---|---|---|
| S4D-Real | 2 | 3.69 | 9.14 | **4.01** | **8.53** | **6.27** |
| S4D-Real | 4 | **3.67** | 9.10 | 4.16 | 8.67 | 6.42 |
| S4D-Real | 8 | 3.85 | 9.29 | 4.29 | 8.97 | 6.63 |
| S4D-Real | 16 | 3.70 | 9.19 | 4.27 | 8.74 | 6.51 |
| S4D-Real | 32 | 3.77 | 9.00 | 4.24 | 8.63 | 6.44 |
| S4D-Real | 64 | 3.70 | **8.99** | 4.18 | 8.77 | 6.48 |
| S4D-Lin | 4 | 3.71 | 9.21 | 4.25 | 8.86 | 6.56 |
| S4D-Lin | 32 | 3.71 | 9.27 | 4.19 | 8.93 | 6.56 |

Table 4: WERs(%) of online ASR from SSMformers augmented with the COM approach using different recurrent weights (\(\mathbf{A}\)). All models use a pre-SSM convolution with kernel size 2.

| Appr. | conv. size | \(N\) | dev clean | dev other | test clean | test other | test avg. |
|---|---|---|---|---|---|---|---|
| DIR | NA | 2 | 3.76 | 9.21 | 4.18 | 8.77 | 6.48 |
| DIR | NA | 4 | 3.97 | 9.44 | 4.38 | 9.08 | 6.73 |
| COM | 2 | 4 | **3.67** | **9.10** | **4.16** | **8.67** | **6.42** |
| COM | 4 | 4 | 3.75 | 9.46 | 4.27 | 8.93 | 6.60 |
| COM | 8 | 4 | 3.75 | 9.35 | 4.17 | 8.98 | 6.58 |
| COM | 16 | 4 | 4.40 | 10.19 | 4.85 | 9.91 | 7.38 |

Table 3: WERs(%) of online ASR from encoders using different pre-SSM convolution kernel sizes. Both DIR and COM used the S4D-Real parameterization scheme with \(N=4\) (using other \(N\) worsened performance). Note that in COM models, the conv. size only describes the convolution kernel size; the SSMs they use have unlimited left context.

| Appr. | left context | dev clean | dev other | test clean | test other | test avg. |
|---|---|---|---|---|---|---|
| REP | \(\infty\) | 3.97 | 9.44 | 4.38 | 9.08 | 6.73 |
| REP | 32 | 3.86 | 9.35 | 4.21 | 8.97 | 6.59 |
| REP | 16 | 3.79 | 9.33 | 4.26 | 8.95 | 6.61 |
| REP | 8 | **3.75** | **9.02** | **4.09** | **8.66** | **6.38** |
| REP | 4 | 3.80 | 9.21 | 4.18 | 8.83 | 6.51 |
| REP | 2 | 3.79 | 9.04 | 4.17 | 8.71 | 6.44 |

Table 5: WERs(%) of online ASR from encoders augmented using the reparameterization (REP) approach.
## 5 Discussion
We systematically investigated ways to augment encoder models for online ASR with SSMs. We proposed two new approaches, COM and REP, both of which resulted in models performing at the state-of-the-art. The success of REP is somewhat surprising, as traditionally SSMs are thought to excel at modeling long-range dependencies [8, 7, 10], yet the REP approach prevents them from doing so. This suggests that even for short-range dependencies, SSMs can have advantages over simple convolution, pointing to the need for further theoretical research.
|
2309.07662 | Guaranteed approximations of arbitrarily quantified reachability
problems | We propose an approach to compute inner and outer-approximations of the sets
of values satisfying constraints expressed as arbitrarily quantified formulas.
Such formulas arise for instance when specifying important problems in control
such as robustness, motion planning or controllers comparison. We propose an
interval-based method which allows for tractable but tight approximations. We
demonstrate its applicability through a series of examples and benchmarks using
a prototype implementation. | Eric Goubault, Sylvie Putot | 2023-09-14T12:26:41Z | http://arxiv.org/abs/2309.07662v1 | # Guaranteed approximations of arbitrarily quantified reachability problems
###### Abstract
We propose an approach to compute inner and outer-approximations of the sets of values satisfying constraints expressed as arbitrarily quantified formulas. Such formulas arise for instance when specifying important problems in control such as robustness, motion planning or controllers comparison. We propose an interval-based method which allows for tractable but tight approximations. We demonstrate its applicability through a series of examples and benchmarks using a prototype implementation.
Keywords: Reachability, Quantified problems, Inner-approximations
## 1 Introduction
We consider the problem of computing inner and outer approximations of sets of reachable states constrained by arbitrarily quantified formulas. Although this can be applied to a number of computer science and verification problems, we focus here on quantified formulas that arise in control and validation. Controlled systems are usually subject to disturbances, and are defined by the flow \(\varphi(t;x_{0},u,w)\) at time \(t\), for any initial state \(x_{0}\), control \(u\) and disturbance \(w\). Robust reachability in the sense of [20] is defined as computing, for time \(t\in[0,T]\), a set such as \(R_{\forall\exists}(\varphi)(t)\):
\[R_{\forall\exists}(\varphi)(t)=\{z\ \mid\ \forall w\in\mathbb{W},\ \exists x_{0}\in\mathbb{X}_{0},\ \exists u\in\mathbb{U},z=\varphi(t;x_{0},u,w)\}\]
and solves the problem of knowing whether a controller can compensate for disturbances or changes in the values of parameters that are known to the controller. This is an example of the quantified reachability problems targeted in this work.
In classical robust control, the problem can be different: it may consider the existence of a controller that robustly leads to a target, whatever the disturbances in a given set. In this case, we may need to relax the problem to obtain a non-empty solution, for instance by allowing some tolerance in time or space on reaching the target. This leads to more complex quantified problems, for example, with a relaxation in time:
\[R_{\exists\forall\exists}(\varphi)=\{z\in\mathbb{R}^{m}\mid \exists u\in\mathbb{U},\ \exists x_{0}\in\mathbb{X}_{0},\ \forall w\in\mathbb{W},\ \exists s\in[0,T]\\ z=\varphi(s;x_{0},u,w)\} \tag{1}\]
This generalization is one of the motivations of the work described hereafter, that considers arbitrary alternations of quantifiers. We discuss in Section 1.2 other problems in control requiring such alternations, among which motion planning problems and problems specified by hyperproperties such as robustness or comparisons of controllers.
### Problem statement
Let \(f\) be a function from \(\mathbb{R}^{p}\) to \(\mathbb{R}^{m}\), which can be a flow function \(\varphi\) as above, a discrete-time dynamical system, etc. We suppose the \(p\) arguments of \(f\) are partitioned into consecutive blocks of \(j_{i}\) arguments, \(i=1,\ldots,2n\), corresponding to the alternations of quantifiers, with \(p=\sum\limits_{i=1}^{2n}j_{i}\). This partition, identified with the sequence \((j_{1},\ldots,j_{2n})\), is denoted by \(\vec{p}\). For simplicity's sake, we write \(\vec{x}_{i}=(x_{k_{i}+1},\ldots,x_{k_{i+1}})\), where \(k_{i}\) stands for \(\sum\limits_{l=1}^{i-1}j_{l}\), \(i=1,\ldots,2n+1\), and \(f(x_{1},x_{2},\ldots,x_{p})=f(\vec{x}_{1},\ldots,\vec{x}_{2n})\).
We consider the general quantified problems, with \(n\) alternations of quantifiers \(\forall\exists\), of finding \(R_{\vec{p}}(f)\) defined as:
\[R_{\vec{p}}(f)=\big{\{}z\in\mathbb{R}^{m}\mid\forall\vec{x}_{1 }\in[-1,1]^{j_{1}},\ \exists\vec{x}_{2}\in[-1,1]^{j_{2}},\ \ldots,\\ \forall\vec{x}_{2n-1}\in[-1,1]^{j_{2n-1}},\exists\vec{x}_{2\vec{ n}}\in[-1,1]^{j_{2n}},\ z=f(\vec{x}_{1},\vec{x}_{2},\ldots,\vec{x}_{2n})\big{\}} \tag{2}\]
Remark 1: Note that this formulation does not prevent us from considering a formula starting with an existential quantifier (nor one finishing with a universal quantifier): formally this can be done by adding a universal quantifier at the start of the sequence of quantifiers, quantifying over a dummy variable.
When only a few quantifier alternations are involved, we will use the notations \(R_{\forall}(f)\), \(R_{\exists}(f)\), \(R_{\forall\exists}(f)\), \(R_{\exists\forall}(f)\), etc., instead of \(R_{\vec{p}}(f)\), for brevity.
Remark 2: Problem (2) naturally also includes, up to reparametrization, quantified problems with other boxes than \([-1,1]^{j_{i}}\). It is also possible to consider more general sets over which to quantify variables \(x_{i}\). As shown in Proposition 2, any outer-approximation (resp. inner-approximation) of the set of values for universally quantified variables \(x_{2i-1}\) and inner-approximation of the set for existentially quantified variables \(x_{2i}\), by boxes, provides with our method an inner-approximation (resp. outer-approximation) of \(R_{\vec{p}}(f)\).
Remark 3: In control applications, control \(u\) and disturbance \(w\) are generally functions of time. We are not quantifying over functions here, which would be a much more intricate problem to solve, but, as in e.g. [21], we are considering that control and disturbances are discretized, hence constant, over small time intervals: they thus are identified with a finite set of parameters, over a bounded time horizon.
Since computing the reachable set \(R_{\vec{p}}(f)\) exactly is intractable in general, as it includes in particular the computation of the range of a function, we focus on computing tight inner- and outer-approximations.
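To fix the semantics of the alternating quantifiers in (2), the following toy Python sketch checks membership of a point \(z\) by brute-force sampling of the boxes; it is purely illustrative (grids replace the continuous boxes, so it neither proves nor refutes membership) and is distinct from the interval-based method developed in this paper.

```python
import itertools
import numpy as np

def in_R(f, z, blocks, n_samples=5, tol=1e-2):
    """Sampled check of z in R_p(f), Eq. (2): blocks = (j_1, ..., j_{2n}),
    odd-numbered blocks universally quantified, even-numbered existentially."""
    grids = [list(itertools.product(np.linspace(-1.0, 1.0, n_samples), repeat=j))
             for j in blocks]

    def ok(prefix, i):
        if i == len(blocks):                          # all blocks instantiated
            return np.linalg.norm(f(*prefix) - z) <= tol
        if i % 2 == 0:                                # forall block x_1, x_3, ...
            return all(ok(prefix + [np.array(v)], i + 1) for v in grids[i])
        return any(ok(prefix + [np.array(v)], i + 1) for v in grids[i])  # exists block

    return ok([], 0)

# toy example: f(x1, x2) = x1 + x2, forall x1 exists x2; only z = 0 is robust
print(in_R(lambda x1, x2: x1 + x2, np.array([0.0]), blocks=(1, 1)))   # True
print(in_R(lambda x1, x2: x1 + x2, np.array([0.5]), blocks=(1, 1)))   # False
```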
**Running example** In the sequel, we will illustrate our approach on a simple Dubins vehicle model described below, where function \(f\) in (4) is the flow function \(\varphi\) of the system. In general, the flow function does not admit a closed form, but our method will still be applicable in that context, as it only requires outer-approximations of its values and of its Jacobian. Still, when comparing with quantifier elimination methods, we will need to give polynomial approximations for \(\varphi\), which will be developed in Example 3.
Example 1 (Dubins vehicle [21]): We simplify the model from [21] to consider only uncertainties on the \(x\) axis: \(\dot{x}=v\cos(\theta)+b_{1}\), \(\dot{y}=v\sin(\theta)\), \(\dot{\theta}=a\). We suppose that the speed \(v\) is equal to \(1\) and that we have a control period of \(t=0.5\). The initial conditions are uncertain, given in \(\mathbb{X}_{0}=\{(x,y,\theta)\;\mid\;x\in[-0.1,0.1],\;y\in[-0.1,0.1],\;\theta\in[-0.1,0.1]\}\), the control \(a\) can take values in \(\mathbb{U}=[-0.01,0.01]\), and the disturbance \(b_{1}\) can take values in \(\mathbb{W}=[-0.01,0.01]\). Both control and disturbance are supposed to be constant over the control period \([0,0.5]\). This could naturally be extended to any number of control periods, with piecewise constant control and disturbances. We are interested in computing approximations of reachable sets of the form (4), where \(\varphi\) is the solution flow of the system.
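A plain (non-validated) simulation of the flow of Example 1 is convenient for sanity-checking the approximations computed later; here is a minimal sketch with an explicit Euler integrator and sampled parameters (our method itself only needs guaranteed outer-approximations of \(\varphi\) and its Jacobian, not such simulations).

```python
import numpy as np

def dubins_flow(t, x0, a, b1, dt=1e-3):
    """phi(t; x0, a, b1) for Example 1 with v = 1, by explicit Euler with step dt."""
    x, y, th = x0
    for _ in range(int(round(t / dt))):
        x += dt * (np.cos(th) + b1)
        y += dt * np.sin(th)
        th += dt * a
    return np.array([x, y, th])

# sample the boxes X0, U, W of Example 1: a cloud of points inside the reachable set at t = 0.5
rng = np.random.default_rng(0)
cloud = np.array([dubins_flow(0.5,
                              rng.uniform(-0.1, 0.1, size=3),   # (x0, y0, theta0) in X0
                              rng.uniform(-0.01, 0.01),         # control a in U
                              rng.uniform(-0.01, 0.01))         # disturbance b1 in W
                  for _ in range(1000)])
```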
### Quantified reachability problems
Quantified reachability problems are central in control and hybrid systems, we detail below a few examples.
**General robust reachability** A classical robust reachability problem consists in computing the states reachable at some time \(T\geq 0\) for some control, independently of disturbances which can even be adversarial with respect to the control and initial state:
\[R_{\exists\forall}(\varphi)=\{z\in\mathbb{R}^{m}\mid\exists u\in\mathbb{U}, \exists x_{0}\in\mathbb{X}_{0},\;\forall w\in\mathbb{W},\;z=\varphi(T;x_{0},u,w)\} \tag{3}\]
However, requiring to reach a given target point \(z\in R_{\exists\forall}(\varphi)\) at time \(T\) independently of the disturbance is most often too constrained a problem. A natural relaxation is to ask whether we can reach this point within the time interval \([0,T]\) instead of at a fixed time:
\[R_{\exists\forall\exists}(\varphi)=\{z\in\mathbb{R}^{m}\mid \exists u\in\mathbb{U},\;\exists x_{0}\in\mathbb{X}_{0},\;\forall w\in \mathbb{W},\;\exists s\in[0,T]\\ z=\varphi(s;x_{0},u,w)\} \tag{4}\]
Example 2 (Dubbins vehicle (continued)): We want the robust reachable set within one time period, i.e. until time \(t=0.5\), with one control value \(a_{0}\) applied between times \(0\) and \(0.5\). This corresponds to \(R_{\exists\forall\exists}(\varphi)\) of Equation (4) with sets \(\mathbb{U}\), \(\mathbb{X}_{0}\), \(\mathbb{W}\) defined in Example 1 and \(T=0.5\).
Another possible relaxation of Equation (3) is to consider the set of states that can be reached up to some tolerance \(\delta\). This corresponds to the quantified problem:
\[R_{\exists\forall\exists}(\varphi)=\{z\in\mathbb{R}^{m}\mid \exists u\in\mathbb{U},\;\exists x_{0}\in\mathbb{X}_{0},\;\forall w\in\mathbb{W},\\ \exists d\in[-\delta,\delta]^{m},\;z=\varphi(T;x_{0},u,w)+d\} \tag{5}\]
Time and space tolerances can also be combined. We will for instance do so on the running example, by considering the quantified problem of Equation (17).
Finally, even more complicated quantified problems are of interest in robust control, such as:
\[R_{\forall\exists\forall\exists}(\varphi)=\{z\in\mathbb{R}^{m}\mid \forall x_{0}\in\mathbb{X}_{0},\ \exists u\in\mathbb{U},\ \forall w\in\mathbb{W},\ \exists s\in[0,T],\\ z=\varphi(s;x_{0},u,w)\} \tag{6}\]
where the control \(u\) can observe and react to the initial conditions \(x_{0}\), but not to the disturbance \(w\).
**Motion planning** Motion planning problems are typically described by quantified formulas, for instance when prescribing waypoints or regions along with specific time intervals through which a controller should steer a dynamical system.
As an example, suppose we want to go through regions \(S_{j}\) between times \(T_{j-1}\) and \(T_{j}\), for \(j=1,\ldots,k\), and characterize the set of final states or locations \(z_{k}\); this amounts to finding the following set \(R_{\exists\forall\ldots\forall\exists}(\varphi)\):
\[\{z_{k}\in\mathbb{R}^{m}\mid\exists u_{1}\in\mathbb{U},\ \forall x_{0}\in\mathbb{X}_{0},\ \forall w_{1}\in\mathbb{W},\ \exists t_{1}\in[0,T_{1}],\ \exists z_{1}\in S_{1},\ \exists u_{2}\in\mathbb{U},\\ \forall w_{2}\in\mathbb{W},\ \exists t_{2}\in[T_{1},T_{2}],\ \exists z_{2}\in S_{2},\ \ldots,\ \exists u_{k}\in\mathbb{U},\ \forall w_{k}\in\mathbb{W},\ \exists t_{k}\in[T_{k-1},T],\\ \left(\begin{array}{c}z_{1}\\ \ldots\\ z_{k}\end{array}\right)=\left(\begin{array}{c}\varphi(t_{1};u_{1},x_{0},w_{1})\\ \ldots\\ \varphi(t_{k}-t_{k-1};u_{k},z_{k-1},w_{k})\end{array}\right)\Big\} \tag{7}\]
Temporal logic propertiesTemporal logics such as Metric Interval Temporal Logic (MITL) and Signal Temporal Logic (STL) have been successful in specifying numerous properties of interest for control systems, see e.g. [11]. Such formulas naturally produce complex quantified formulas since the semantics of "always \(\Phi\) between times \(a\) and \(b\)" (resp. "eventually \(\Phi\) between times \(a\) and \(b\)") in terms or ordinary first-order propositional formulas is \(\forall t\in[a,b],\ \Phi\) (resp. \(\exists t\in[a,b],\ \Phi\)).
Our purpose here is not to discuss the class of temporal logic formulas that we can interpret through Equation (2), but rather to exemplify the potential of our approach. It is important to note, though, that not only can we interpret the standard boolean semantics of a class of such temporal formulas, but also their robust semantics [12]. Moreover, formulas such as Equation (2) allow for quantifying over any parameters of the dynamics of a control system, hence for expressing quantifications over trajectories, making it possible to compare trajectories as in hyperproperties, see e.g. [34]. For instance, if we consider the behavioural robustness of a system, which specifies that small differences in system inputs result in small differences in system outputs, this can be measured by different quantified expressions such as:
\[R_{\exists\forall\exists\forall\exists}(\varphi)=\{z\ \mid\ \exists x_{0}\in\mathbb{X}_{0},\ \exists\delta\in[-\epsilon,\epsilon]^{i},\\ \forall u\in\mathbb{U},\ \exists u^{\prime}\in\mathbb{U},\ \forall w\in\mathbb{W},\ \exists t\in[T_{1},T_{2}],\ z=\| \varphi(t;x_{0},u,w)-\varphi(t;x_{0}+\delta,u^{\prime},w)\|\}\]
which measures the distance between two trajectories of the same system when starting with close enough initial conditions, under any disturbance but taken equal for the two trajectories.
### Related work
**Set-based methods for reachability analysis** Our approach is related to outer-approximations of non-linear continuous and controlled systems: outer-approximations of the reachable set of such systems are a particular case of our approach, and we rely on such outer-approximations to compute outer- and inner-approximations of quantified problems. Many methods for outer-approximating reachable sets for continuous systems have been developed. For linear systems, direct set-based methods have been designed for estimating the exponential of a matrix, or of Peano-Baker series for uncertain systems [3], using support functions [23], zonotopes [15], or ellipsoids [27] for efficient representations of sets of states. For non-linear continuous systems, similar set-based techniques have been applied using polytopes [35] or generalized polytopes such as polynomial zonotopes [2]. Authors have also considered a variety of linearization, hybridization or polynomialization techniques, such as in e.g. [2, 4]. Instead of directly propagating tractable sets through the dynamics, Taylor methods [30] have been applied extensively by a number of authors, e.g. [7], for computing polynomial approximations of solutions of ODEs (flowpipes), whose image can then be approximated using any of the tractable set representations we mentioned above. Another approach for reachability is through Hamilton-Jacobi techniques, see e.g. [5], which express functions whose zero sub-level sets give the reachable sets as solutions to a Hamilton-Jacobi PDE.
There are far fewer methods for inner-approximating images of functions or sets of reachable states. Interval-based methods, relying on space discretization, have been used for inner-approximating the image of functions [17]. They were also used to outer- and inner-approximate solutions of differential systems with uncertain initial conditions [31]. An interesting recent work [26] calculates the inner-approximation by scaling down an outer-approximation, until a suitable criterion (involving the boundary of the reachable set of states) is met. A similar criterion is used in [40], with polytopic approximations. An important body of the inner-approximation literature uses either Hamilton-Jacobi methods, see [32] and [29], or set-based approximate backwards reachability, i.e. through the inverted dynamics, see e.g. [8] and [40].
Our approach is directly linked to previous work on modal intervals and mean-value theorems [16, 19] but extends it considerably as we are not bound to consider only \(\forall\exists\) statements. It also includes the approximations of robust reachable sets with time-varying inputs and disturbances as defined in [20, 21, 22].
**Quantifier elimination** Many verification and synthesis problems in computer science and control theory can be represented by the first order formula
\[\Phi(p_{1},\ldots,p_{m})\equiv Q_{1}x_{1},\ \ldots\ Q_{n}x_{n},\ P(p_{1}, \ldots,p_{m},x_{1},\ldots,x_{n}) \tag{8}\]
where \(Q_{i}\in\{\forall,\exists\}\) are either universal or existential quantifiers, \(p_{1},\ldots,p_{m}\) are free variables and \(P\) is a quantifier-free formula constructed by conjunction, disjunction and negation of atomic formulas of the form \(f\) op \(0\) where \(\mbox{op}\in\{=,\neq,<,\leq\}\) is a relational operator and \(f\) is a polynomial.
The first quantifier elimination algorithm is due to Tarski [39] for the first order theory of real numbers. Because of its high computational complexity, this algorithm is not used in practice. The first practical algorithm is due to Collins [9], and is based on cylindrical algebraic decomposition. Still, applications of this algorithm are limited because its complexity is doubly exponential in \(n+m\).
The applications of quantifier elimination to control design [1] are numerous: output feedback stabilization, simultaneous stabilization, robust stabilization, frequency domain multiobjective design. In [1], they are mostly exemplified on linear systems. The work of [1] has been extended to non-linear systems in e.g. [25], including also some trajectory tracking properties. Reachability is not in general solvable by algebraic methods. The reason is that the solution set of a system of differential equations is not algebraic in general. However, [25] considers a more restricted form of reachability, along prescribed types of trajectories, that can be investigated using semi-algebraic tools. Further generalizations are exemplified in [38], with controller synthesis, stability, and collision avoidance problems. Quantifier elimination techniques have also been applied to model predictive control, see e.g. [37]. Finally, application of quantifier elimination to robot motion planning, similar to the one considered in Section 1.2, has been considered in e.g. [28], for instance for the classical piano mover's problem [36].
Our quantified problem of Equation (2) is an instance of general quantifier elimination, although we do not impose that the functions \(f\) we consider be polynomial. We compare our method with quantifier elimination techniques in the sequel; our method is specifically designed to give tight inner and outer approximations in a fast manner, whereas quantifier elimination aims at finding exact solution sets along with algebraic conditions under which they exist, at the expense of time complexity. As quantifier elimination needs to consider polynomials, the comparison is done on approximations of the flow function \(\varphi\) given by e.g. Taylor expansions [33], see Example 3.
**Satisfiability modulo theory (SMT)** Some SMT solvers interpret quantified formulas over theories. Still, it has long been known that there is no sound and complete procedure already for first-order logic formulas of linear arithmetic with uninterpreted function symbols [14], meaning that the corresponding SMT solvers generally rely on heuristics to deal with quantifiers. The closest SMT solver to our approach is dReal/dReach [13], which has support for some quantified SMT modulo the theory of real numbers and modulo ODEs. Such SMT solvers do not synthesize the set of states that verifies some quantified formula as we do, but can be used to check that this set is correct, up to some "resolution". The time complexity of such methods is also much higher than what we are proposing, and dReal/dReach is limited to the exists-forall fragment.
Example 3 (Dubins vehicle (continued)): We want to compute the robust reachable set of Example 2 using quantifier elimination. As we do not have the exact flow \(\varphi(t;x_{0},u,w)\), we use approximations by Taylor expansions, see e.g. [21] where a similar example was discussed. With the initial conditions and parameter values of Example 1, we get the following Taylor expansion in time with zonotopic coefficients (which gives a Taylor model of the solution flow):
\[P(t):x=0.1\epsilon_{1}+(1+0.01\epsilon_{2})t+1.31\;10^{-7}\epsilon _{3}t^{2}\;\wedge y=0.1\epsilon_{4}+(0.01\epsilon_{6}+0.01\epsilon_{7}t)t\\ +(0.005\epsilon_{5})t^{2}\;\wedge\theta(t)=0.01\epsilon_{6}+0.01 \epsilon_{7}t \tag{9}\]
with \(\epsilon_{i}\in[-1,1]\) for \(i=1,\ldots,7\), and \(x_{0}=0.1\epsilon_{1}\), \(b_{1}=0.01\epsilon_{2}\), \(y_{0}=0.1\epsilon_{4}\), \(a=0.01\epsilon_{7}\) and \(\theta_{0}=0.01\epsilon_{6}\). These were obtained by a linearization of the cosine and sine and simple estimates of the remainders, which could be improved but were kept simple for the sake of readability.
We interpret the \(R_{\exists\forall\exists}(\varphi)\) formula of Equation (4) by quantifying over the symbolic variables \(\epsilon_{1}\) to \(\epsilon_{7}\). We have a correspondence between initial states and inputs of the problem and the \(\epsilon_{i}\), except for \(\epsilon_{3}\) and \(\epsilon_{5}\) that abstract the remainder term of the Taylor approximation of the solution. Hence an over-approximation of \(R_{\exists\forall\exists}(\varphi)\) can be obtained by quantifier elimination on the formula:
\[\exists\epsilon_{7}\in[-1,1],\;\exists\epsilon_{1}\in[-1,1],\; \exists\epsilon_{4}\in[-1,1],\;\exists\epsilon_{6}\in[-1,1],\\ \forall\epsilon_{2}\in[-1,1],\;\exists\epsilon_{3}\in[-1,1],\; \exists\epsilon_{5}\in[-1,1],\;\exists t\in[0,0.5],\;P(t)\]
where \(P(t)\) is defined by Equation (9) and all symbols are existentially quantified except \(\epsilon_{2}\), which corresponds to the disturbance \(b_{1}\). There are numerous software tools implementing some form or another of quantifier elimination, e.g. QEPCAD [6], the REDUCE RedLog package [10], and Mathematica [24]. In the sequel we use Mathematica and its Reduce operation. We refer the reader to the appendix, Section A, where all queries in Mathematica are provided. Using Mathematica for the problem above times out, but when we make independent queries on \(x\), \(y\) and \(\theta\), we get \(x\in[-0.1,0.595]\), \(y\in[-0.10875,0.10875]\) and \(\theta\in[-0.015,0.015]\), with a warning about potential inexact coefficients, respectively in about 25, 12 and 0.05 seconds on a MacBook Pro 2.3GHz Intel Core i9 8 cores with 16GB of memory. This gives a correct outer-approximation of \(R_{\exists\forall\exists}(\varphi)\).
Similarly, for inner-approximation we eliminate the quantifiers in:
\[\exists\epsilon_{7}\in[-1,1],\;\exists\epsilon_{1}\in[-1,1],\; \exists\epsilon_{4}\in[-1,1],\;\exists\epsilon_{6}\in[-1,1],\\ \forall\epsilon_{2}\in[-1,1],\;\forall\epsilon_{3}\in[-1,1], \forall\epsilon_{5}\in[-1,1],\;\exists t\in[0,0.5],\;P(t)\]
where the uncertainties \(\epsilon_{3}\) and \(\epsilon_{5}\) are now quantified universally, reflecting the fact that inner-approximation corresponds to making no hypothesis on the values of these variables, which model approximation errors, apart from knowing their bounds. The elimination times out for the full problem; when solving the problem separately on each variable \(x\), \(y\) and \(\theta\), it returns the same bounds as before up to \(10^{-5}\), in respectively 2.2, 17.1 and 0.06 seconds. However, contrary to the over-approximation, these independent queries do not allow us to conclude
about an actual inner-approximation for \(R_{\exists\forall\exists}(\varphi)\), as the existentially quantified variables may be assigned different values in the 3 independent queries.
### Contributions
We extend the approach of [20], which is restricted to solving problems of the form \(R_{\exists}(f)\) or \(R_{\forall\exists}(f)\), to deal with arbitrary quantified formulas of the form of \(R_{\vec{p}}(f)\) of Equation (2). These include the generalized robust reachability problems discussed in Section 1.2. The problem of finding the exact set \(R_{\vec{p}}(f)\) admits closed formulas for a scalar-valued affine function \(f\), as described in Section 2.1, culminating in Proposition 1. By local linearization techniques, akin to the ones used in [20], we get explicit formulas for inner and outer-approximations of general non-linear scalar-valued functions in Section 2.2, Theorem 1.
We consider the general vector-valued case in Section 3 and Theorem 2. The difficulty lies, as for the \(\forall\exists\) case of [20], in the computation of inner-approximations. The solution proposed is to interpret slightly relaxed quantified problems, one dimension at a time, that, altogether, give guaranteed inner-approximations of \(R_{\vec{p}}(f)\), extending the method of [20]. The combinatorics of variables, quantifiers and components of \(f\) makes the intuition of the indices used in Theorem 2 difficult to fully apprehend: we thus begin Section 3 with an example.
The general form of the quantified problems we are considering here makes the solutions that we propose difficult to assess and compare: we are not aware of any existing tool solving similar problems, with the exception of quantifier elimination algorithms, discussed in Section 1.3. We also develop a sampling method, see Remark 4, for checking the tightness of our results.
Finally, we report on our implementation of this method in Julia in Section 4. Benchmarks show that this method is tractable, with experiments up to thousands of variables solved in a matter of tens of seconds.
## 2 Approximations of arbitrary quantified formulas in the case of scalar-valued functions
We first focus in Section 2.1 on the computation of \(R_{\vec{p}}(f)\) where \(f\) is an affine function from \(\mathbb{R}^{p}\) to \(\mathbb{R}\). In this case, we derive exact bounds. We then rely on this result to carry on with the general case in Section 2.2, using a mean-value theorem.
### Exact bounds for scalar affine functions
We consider affine functions, i.e. functions of the form \(f(x_{1},\ldots,x_{q})=\delta_{0}+\sum_{i=1}^{q}\delta_{i}x_{i}\). For these functions, we consider the general quantified problem defined, for \(Q_{j}=\forall\) or \(\exists\), as:
\[S_{q}(\delta_{0};Q_{1},\delta_{1};\ldots;Q_{q},\delta_{q})=\{z \in\mathbb{R}\mid Q_{1}x_{1}\in[-1,1],\\ Q_{2}x_{2}\in[-1,1],\ldots,\,Q_{q}x_{q}\in[-1,1],\ z=f(x_{1},x_{2}, \ldots,x_{q})\}\]
We first see that we have:
Lemma 1: \[S_{q}(\delta_{0};Q_{1},\delta_{1};\ldots;Q_{q},\delta_{q})=\cdots\]
The proof is given in Appendix 0.C.
Remark 5: In the sequel, when applying Proposition 1, we will use notations \(\Delta_{x}\), where \(x\) is e.g. the control \(a\), the disturbance \(b_{1}\) or the angle \(\theta\), instead of a potentially less understandable numbering of indices.
### Inner and outer-approximations for non-linear scalar-valued functions
We are now in a position to give inner and outer approximations of \(R_{\boldsymbol{p}}(f)\) for general scalar-valued \(f(\boldsymbol{x}_{1},\boldsymbol{x}_{2},\ldots,\boldsymbol{x}_{2n})\) from \(\mathbb{R}^{p}\) to \(\mathbb{R}\). The principle is to carefully linearize \(f\), so that inner and outer-approximations of \(R_{\boldsymbol{p}}(f)\) are given by inner and outer-approximations of a similar quantified problem on its linearization, this is Proposition 2. Combining this with e.g. simple mean-value approximations mentioned in Remark 6, we obtain Theorem 1. After exemplifying these formulas on toy examples, we apply it, twice, to the Dubbins vehicle model of Example 1. We first use the Taylor approximation of its dynamics, Example 4. We then show that we do not need to compute such approximations and that our approach can also compute direct inner and outer-approximations of \(R_{\boldsymbol{p}}(f)\) where \(f\) is the solution of a differential equation, Example 5.
As before, for a given function \(f:\ \mathbb{R}^{p}\rightarrow\mathbb{R}\), we denote by \(\boldsymbol{p}=(j_{1},\ldots,j_{2n})\) a partition of the \(p\) arguments of \(f\) and \(k_{l}=\sum\limits_{i=1}^{l-1}j_{i}\), for \(l=1,\ldots,2n+1\). We suppose we have \(p\) intervals \(A_{1},\ldots,A_{p}\) and we write \(\boldsymbol{A}_{i}=(A_{k_{i}+1},\ldots,A_{k_{i+1}}),\ i=1,\ldots,2n\) the corresponding boxes in \(\mathbb{R}^{j_{i}}\). We will use the notation:
\[\mathcal{C}(\boldsymbol{A}_{1},\ldots,\boldsymbol{A}_{2n})=\{z \mid\forall\alpha_{1}\in\boldsymbol{A}_{1},\ \exists\alpha_{2}\in\boldsymbol{A}_{2},\ldots,\\ \forall\alpha_{2n-1}\in\boldsymbol{A}_{2n-1},\ \exists\alpha_{2n}\in \boldsymbol{A}_{2n},\ z=\sum\limits_{j=1}^{2n}\alpha_{j}\}.\]
Proposition 2: _Given function \(f:\ \mathbb{R}^{p}\rightarrow\mathbb{R}\) and partition \(\boldsymbol{p}\) as above, define the following families of functions_
\[h^{x_{1},\ldots,x_{j-1}}(x_{j})=f(x_{1},\ldots,x_{j-1},x_{j},0,\ldots,0)-f(x_{ 1},\ldots,x_{j-1},0,\ldots,0)\]
_for \(j=1,\ldots,p\), and suppose we have the following inner and outer-approximations of their images, independently of \(x_{1},\ldots,x_{j-1}\), denoted by \(range(.)\): \(I_{j}\subseteq range(h^{x_{1},\ldots,x_{j-1}})\subseteq O_{j}\) for \(j=1,\ldots,p\). Then, writing \(\boldsymbol{I}_{i}=\mathop{\Pi}\limits_{j=k_{i}+1}^{k_{i+1}}[\underline{I}_{ j},\overline{I}_{j}]\), \(\boldsymbol{O}_{i}=\mathop{\Pi}\limits_{j=k_{i}+1}^{k_{i+1}}[\underline{O}_{ j},\overline{O}_{j}]\), \(i=1,\ldots,2n\), we have:_
\[f(0,\ldots,0)+\mathcal{C}(\boldsymbol{O}_{1},\boldsymbol{I}_{2 },\ldots,\boldsymbol{O}_{2n-1},\boldsymbol{I}_{2n})\subseteq R_{\boldsymbol{p }}(f)\\ \subseteq f(0,\ldots,0)+C(\boldsymbol{I}_{1},\boldsymbol{O}_{2}, \ldots,\boldsymbol{I}_{2n-1},\boldsymbol{O}_{2n}) \tag{11}\]
Proof: The proof is based on the fact that \(f\) is the sum of all the functions \(h^{x_{1},\ldots,x_{j-1}}\), \(j=1,\ldots,p\), and of \(f(0,\ldots,0)\), and is proven by induction on the number of quantifier alternations, see Section E.
Remark 6: We do have such approximants of the range as necessary for Proposition 2, thanks to a generalized mean-value theorem [16, 21]. If we have, for all \(i=1,\ldots,2n\) and all \(j=k_{i}+1,\ldots,k_{i+1}\), \(\nabla_{j}=[\underline{\nabla}_{j},\overline{\nabla}_{j}]\) such that:
\[\left\{\left|\frac{\partial f}{\partial x_{j}}(\boldsymbol{x}_{1},\ldots, \boldsymbol{x}_{i},0,\ldots,0)\right|\mid\boldsymbol{x}_{l}\in[-1,1]^{j_{l}}, \ l=1,\ldots,i\right\}\subseteq\nabla_{j}\]
then we can use, for all such \(j\): \(I_{j}=\underline{\nabla}_{j}[-1,1]\) and \(O_{j}=\overline{\nabla}_{j}[-1,1]\). We can naturally also use other approximation methods.
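The following small Python sketch (illustrative only; the authors' implementation is in Julia, cf. Section 4) shows how interval bounds on the absolute partial derivatives are turned into the per-variable ranges of Remark 6, \(I_{j}=\underline{\nabla}_{j}[-1,1]\) and \(O_{j}=\overline{\nabla}_{j}[-1,1]\). The gradient bounds are estimated here by naive grid sampling, which is not a guaranteed enclosure; a verified interval evaluation of the Jacobian should be substituted for real use. All function and parameter names below are ours.

```python
import itertools

def gradient_bounds(dfdxj, num_prefix_vars, p, samples=5):
    """Rough [min, max] estimate of |df/dx_j| when the first `num_prefix_vars`
    variables range over [-1, 1] and the remaining ones are frozen at 0
    (cf. Remark 6).  `dfdxj` maps a point of R^p to the j-th partial
    derivative.  Grid sampling only: not a guaranteed enclosure."""
    grid = [-1.0 + 2.0 * k / (samples - 1) for k in range(samples)]
    lo, hi = float("inf"), float("-inf")
    for prefix in itertools.product(grid, repeat=num_prefix_vars):
        x = list(prefix) + [0.0] * (p - num_prefix_vars)
        v = abs(dfdxj(x))
        lo, hi = min(lo, v), max(hi, v)
    return lo, hi

def ranges_from_gradient_bounds(nabla_lo, nabla_hi):
    """I_j = (lower bound on |df/dx_j|) * [-1,1], O_j = (upper bound) * [-1,1]."""
    I = [(-g, g) for g in nabla_lo]
    O = [(-g, g) for g in nabla_hi]
    return I, O
```

For a variable whose box is not centred at the evaluation point, such as the time variable of Example 4 with \(t\in[0,0.5]\), the one-sided variant \(I_{t}=\underline{\nabla}_{t}[0,0.5]\), \(O_{t}=\overline{\nabla}_{t}[0,0.5]\) used in the text gives tighter bounds; the sketch above only covers the centred case.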
We now deduce inner and outer-approximations of \(R_{\mathbf{p}}(f)\):
Theorem 1: _With the hypotheses of Proposition 2 on sets \(I_{j}\) and \(O_{j}\), and denoting \(\sum\boldsymbol{A}\), for \(A\) any vector of reals, the sum of all its components, we have:_
\[f(0,\ldots,0)\,+\,\left[\sum_{k=1}^{n}\sum\left(\overline{\boldsymbol{O}}_{2k -1}+\underline{\boldsymbol{I}}_{2k}\right),\quad\sum_{k=1}^{n}\sum\left( \overline{\boldsymbol{I}}_{2k}+\underline{\boldsymbol{O}}_{2k-1}\right) \right]\ \subseteq\ R_{\boldsymbol{p}}(f)\]
_if \(\sum\overline{\boldsymbol{O}}_{2l-1}-\sum\underline{\boldsymbol{O}}_{2l-1} \leq\sum\limits_{k=l}^{n}\sum\left(\overline{\boldsymbol{I}}_{2k}-\underline {\boldsymbol{I}}_{2k}\right)-\sum\limits_{k=l+1}^{n}\sum\left(\overline{ \boldsymbol{O}}_{2k-1}-\underline{\boldsymbol{O}}_{2k-1}\right)\) for \(l=1,\ldots,n\), otherwise the inner-approximation is empty, and:_
\[R_{\boldsymbol{p}}(f)\ \subseteq\ f(0,\ldots,0)\,+\,\left[\sum_{k=1}^{n}\sum \left(\overline{\boldsymbol{I}}_{2k-1}+\underline{\boldsymbol{O}}_{2k}\right),\quad\sum_{k=1}^{n}\sum\left(\overline{\boldsymbol{O}}_{2k}+\underline{ \boldsymbol{I}}_{2k-1}\right)\right]\]
_if \(\sum\overline{\boldsymbol{I}}_{2l-1}-\sum\underline{\boldsymbol{I}}_{2l-1} \leq\sum\limits_{k=l}^{n}\sum\left(\overline{\boldsymbol{O}}_{2k}-\underline {\boldsymbol{O}}_{2k}\right)-\sum\limits_{k=l+1}^{n}\sum\left(\overline{ \boldsymbol{I}}_{2k-1}-\underline{\boldsymbol{I}}_{2k-1}\right)\) for \(l=1,\ldots,n\), otherwise the outer-approximation is empty._
Proof: The proof uses Proposition 1 on \(C(\boldsymbol{O}_{1},\boldsymbol{I}_{2},\ldots,\boldsymbol{O}_{2n-1}, \boldsymbol{I}_{2n})\) and \(C(\boldsymbol{I}_{1},\boldsymbol{O}_{2},\allowbreak\ldots,\allowbreak \boldsymbol{I}_{2n-1},\boldsymbol{O}_{2n})\), after rescaling of the interval \(\boldsymbol{O}_{i}\) and \(\boldsymbol{I}_{j}\) to \([-1,1]\), and Proposition 2. It is detailed in Section F.
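For concreteness, here is an illustrative Python transcription of the interval formulas of Theorem 1 (the authors' actual implementation is in Julia with LazySets and Symbolics, see Section 4; the data layout and names below are ours). Blocks with odd index hold the universally quantified variables and blocks with even index the existentially quantified ones; each variable is described by its intervals \(I_{j}\) and \(O_{j}\) from Proposition 2.

```python
def _lo(block):  # sum of the lower bounds of the intervals in one block
    return sum(l for l, _ in block)

def _hi(block):  # sum of the upper bounds of the intervals in one block
    return sum(h for _, h in block)

def inner_outer_1d(f0, I, O):
    """Inner and outer interval approximations of R_p(f) for scalar-valued f.

    f0   : f(0, ..., 0)
    I, O : lists of 2n blocks; block i is a list of (lo, hi) pairs, the inner
           (I) and outer (O) ranges of the h-functions of Proposition 2.
           Blocks 1, 3, ... (indices 0, 2, ... here) are the forall blocks,
           blocks 2, 4, ... the exists blocks.
    Returns (inner, outer), each an interval (lo, hi), or None when the
    corresponding condition of Theorem 1 fails.
    """
    n = len(I) // 2

    # inner approximation: O on forall blocks, I on exists blocks
    in_lo = f0 + sum(_hi(O[2 * k]) + _lo(I[2 * k + 1]) for k in range(n))
    in_hi = f0 + sum(_hi(I[2 * k + 1]) + _lo(O[2 * k]) for k in range(n))
    in_ok = all(
        _hi(O[2 * l]) - _lo(O[2 * l])
        <= sum(_hi(I[2 * k + 1]) - _lo(I[2 * k + 1]) for k in range(l, n))
        - sum(_hi(O[2 * k]) - _lo(O[2 * k]) for k in range(l + 1, n))
        for l in range(n))

    # outer approximation: the roles of I and O are swapped
    out_lo = f0 + sum(_hi(I[2 * k]) + _lo(O[2 * k + 1]) for k in range(n))
    out_hi = f0 + sum(_hi(O[2 * k + 1]) + _lo(I[2 * k]) for k in range(n))
    out_ok = all(
        _hi(I[2 * l]) - _lo(I[2 * l])
        <= sum(_hi(O[2 * k + 1]) - _lo(O[2 * k + 1]) for k in range(l, n))
        - sum(_hi(I[2 * k]) - _lo(I[2 * k]) for k in range(l + 1, n))
        for l in range(n))

    inner = (in_lo, in_hi) if in_ok else None
    outer = (out_lo, out_hi) if out_ok else None
    return inner, outer
```

Note that when the per-variable ranges are exact, so that \(I_{j}=O_{j}\) as for the affine functions of Section 2.1, the inner and outer intervals computed this way coincide.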
In Example 4, we now apply Theorem 1 to the \(x\) component of the Dubbins vehicle as expressed in Example 3, ignoring any constraint on \(y\) and \(\theta\).
Example 4 (Dubbins vehicle (continued)): We recall that:
\[x(t)=0.1\epsilon_{1}+(1+0.01\epsilon_{2})t+(1.31\ 10^{-7}\epsilon_{3})t^{2}\]
The \(\nabla_{k}\), outer-approximations of the absolute value of the partial derivatives of Remark 6, \(\frac{\partial x}{\partial\epsilon_{k}}\) and \(\frac{\partial x}{\partial t}\) evaluated between times \(t=0\) and \(t=0.5\), are: \(\nabla_{\epsilon_{1}}=0.1\), \(\nabla_{\epsilon_{2}}=[0,0.005]\), \(\nabla_{\epsilon_{3}}=[0,3.275\ 10^{-8}]\), \(\nabla_{t}=[0.989999869,1.010000131]\). We thus have \(I_{\epsilon_{1}}=O_{\epsilon_{1}}=[-0.1,\,0.1]\), \(I_{\epsilon_{2}}=0\), \(O_{\epsilon_{2}}=[-0.005,0.005]\), and \(I_{\epsilon_{3}}=0\),
\(O_{\epsilon_{3}}=[-3.275\ 10^{-8},\,3.275\ 10^{-8}]\) by a direct application of Remark 6. Note that for computing \(I_{t}\) and \(O_{t}\), we use the generalized mean-value theorem of [16] again, but in a slightly different way than in Remark 6, since the point at which we can evaluate the corresponding function is \(t=0\), which is the lower bound of the extent of the values of \(t\) (\([0,0.5]\)) and not its center as for other variables. In that case we can compute the tighter bounds: \(I_{t}=\underline{\nabla}_{t}[0,0.5]=[0,0.4949999345]\) and \(O_{t}=\overline{\nabla}_{t}[0,0.5]=[0,0.5050000655]\).
The quantified formula of Example 2 has only one \(\forall\), \(\exists\) alternation, the condition of Theorem 1 will involve \(O_{\epsilon_{2}}+O_{\epsilon_{3}}=[-0.005,0.005]+[-3.275\ 10^{-8},3.275\ 10^{-8}]\) and \(I_{t}=[0,0.4949999345]\). We see that indeed, \((\overline{O}_{\epsilon_{2}}-\underline{O}_{\epsilon_{3}})+(\overline{O}_{ \epsilon_{3}}-\underline{O}_{\epsilon_{2}})=0.010000066\leq\overline{I}_{t}- \underline{I}_{t}=0.4949999345\), hence we can compute an inner-approximation for the \(x\) component of \(\varphi\). Its lower bound is:
\[\begin{array}{l}x_{c}\ \ +\underline{I}_{\epsilon_{1}}\ \ +\overline{O}_{ \epsilon_{2}}\ \ \ \ +\overline{O}_{\epsilon_{3}}\ \ \ +\underline{I}_{t}\\ \ \ =0\ -0.1\ \ +0.005\ +3.275\ 10^{-8}\ \ +0\ =\ -0.095\end{array}\]
and its upper bound:
\[\begin{array}{l}x_{c}\ \ +\overline{I}_{\epsilon_{1}}\ \ +\underline{O}_{ \epsilon_{2}}\ \ \ \ +\underline{O}_{\epsilon_{3}}\ \ \ \ +\overline{I}_{t}\\ \ \ =0\ +0.1\ \ -0.005\ -3.275\ 10^{-8}\ \ +0.4949999345\ =\ 0.59.\end{array}\]
Similarly, we compute an outer-approximation and find the following bounds for \(R_{\exists\forall\exists}(\varphi_{x})\): \([-0.095,0.590]\subseteq R_{\exists\forall\exists}(\varphi_{x})\subseteq[-0.1,0.605]\), to be compared with the solution from Mathematica quantifier elimination, \(-0.1\leq x\leq 0.595\). Sampling also yields the estimate \([-0.1,0.595]\).
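As a quick check, the `inner_outer_1d` sketch given after Theorem 1 reproduces these numbers when fed with the blocks of Example 4 (an empty leading \(\forall\) block, then \(\exists\{\epsilon_{1}\}\), \(\forall\{\epsilon_{2},\epsilon_{3}\}\), \(\exists\{t\}\); this encoding of the example is ours):

```python
# Example 4 with the inner_outer_1d sketch above (values taken from the text).
I = [[],                                          # empty leading forall block
     [(-0.1, 0.1)],                               # eps1 (exists)
     [(0.0, 0.0), (0.0, 0.0)],                    # eps2, eps3 (forall), inner ranges are {0}
     [(0.0, 0.4949999345)]]                       # t (exists)
O = [[],
     [(-0.1, 0.1)],
     [(-0.005, 0.005), (-3.275e-8, 3.275e-8)],
     [(0.0, 0.5050000655)]]

inner, outer = inner_outer_1d(0.0, I, O)
print(inner)   # roughly (-0.095, 0.59)
print(outer)   # roughly (-0.1, 0.605)
```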
In the former example, and in general for continuous-time controlled systems defined by a flow function \(\varphi(t;x_{0},u,w)\) solution of an initial value problem, we do not need, as with quantifier elimination techniques, to first compute polynomial approximations. We only need to compute outer approximations of the flow for one initial condition (a "central trajectory") and of the Jacobian of the flow for the set of initial conditions, as exemplified below.
Example 5 (Dubbins vehicle (continued)): We consider again the Dubbins vehicle, but defined as the direct solution of the ODEs of Example 1. We first compute an outer-approximation of a "central trajectory" \((x_{c},y_{c},\theta_{c})\), i.e. of the trajectory starting at \(x=0\), \(y=0\), \(\theta=0\), \(b_{1}=0\) and \(a=0\). This gives \(x_{c}=t\), \(y_{c}=0\) and \(\theta_{c}=0\).
We note that \(\frac{\partial x}{\partial t}=\cos(\theta)+b_{1}\in[0.989999965,1.01]\); thus, using the notations of Remark 6, we have the inner and outer-approximations of the effect of variable \(t\) on the value of \(x\), \(I_{x,t}=[0,0.494999982]\), \(O_{x,t}=[0,0.505]\), and similarly for the other variables: \(I_{y,t}=0\), \(O_{y,t}=[-\sin(0.015)/2,\sin(0.015)/2]=[-1.309\ 10^{-4},1.309\ 10^{-4}]\) and \(I_{\theta,t}=0\), \(O_{\theta,t}=[-0.005,0.005]\).
The Jacobian of \(\varphi\) with respect to \(x_{0}\), \(y_{0}\), \(\theta_{0}\), \(b_{1}\) and \(a\), with entries \(J_{i,x_{0}}=\frac{\partial\varphi_{i}}{\partial x_{0}}\), \(J_{i,y_{0}}=\frac{\partial\varphi_{i}}{\partial y_{0}}\), \(J_{i,\theta_{0}}=\frac{\partial\varphi_{i}}{\partial\theta_{0}}\), \(J_{i,b_{1}}=\frac{\partial\varphi_{i}}{\partial b_{1}}\) and \(J_{i,a}=\frac{\partial\varphi_{i}}{\partial a}\) for \(i=x,y,\theta\), satisfies a variational equation [20], solved in the appendix, Section I. By Remark 6, this gives the following inner and outer approximations for all parameters \(x_{0}\), \(y_{0}\), \(\theta_{0}\), \(a\) and \(b_{1}\), and all components \(x\), \(y\) and \(\theta\) of \(\varphi\):
* \(I_{x,a}=0\), \(O_{x,a}=[-6.545~{}10^{-7},6.545~{}10^{-7}]\), \(I_{x,x_{0}}=O_{x,x_{0}}=[-0.1,0.1]\), \(I_{x,\theta_{0}}=0\), \(O_{x,\theta_{0}}=[-1.309~{}10^{-6},1.309~{}10^{-6}]\), \(I_{x,b_{1}}=0\), \(O_{x,b_{1}}=[-0.005,0.005]\),
* \(I_{y,a}=0\), \(O_{y,a}=[-0.0025,0.0025]\), \(I_{y,y_{0}}=O_{y,y_{0}}=[-0.1,0.1]\), \(I_{y,\theta_{0}}=0\), \(O_{y,\theta_{0}}=[-0.005,0.005]\),
* \(I_{\theta,\theta_{0}}=O_{\theta,\theta_{0}}=[-0.01,0.01]\), \(I_{\theta,a}=0\), \(O_{\theta,a}=[-0.005,0.005]\).
We now compute the set \(R_{\exists\forall\exists}\) consisting of \(z\) such that:
\[\exists a\in[-0.01,0.01],\ \exists x_{0}\in[-0.1,0.1],\ \exists y_{0}\in[-0.1,0.1],\] \[\exists\theta_{0}\in[-0.01,0.01],\ \forall b_{1}\in[-0.01,0.01],\ \exists t\in[0,0.5],\ z=\varphi(t;x_{0},y_{0},\theta_{0},a,b_{1})\]
Applying Theorem 1, we first find an inner-approximation for the \(x\) component of \(\varphi\) (again, ignoring any condition on \(y\) and \(\theta\)). Its lower bound is:
\[\begin{array}{cccccccc}x_{c}&+\underline{I}_{x,a}&+\underline{I}_{x,x_{0}}&+ \underline{I}_{x,y_{0}}&+\underline{I}_{x,\theta_{0}}&+\overline{O}_{x,b_{1}}& +\underline{I}_{x,t}\\ =0&-0&-0.1&+0&-0&+0.005&+0\end{array}\]
which is equal to -0.095, and its upper bound:
\[\begin{array}{cccccccc}x_{c}&+\overline{I}_{x,a}&+\overline{I}_{x,x_{0}}&+ \overline{I}_{x,y_{0}}&+\overline{I}_{x,\theta_{0}}&+\underline{O}_{x,b_{1}}&+\overline{I} _{x,t}\\ 0&+0&+0.1&+0&+0&-0.005&+0.494999982\end{array}\]
which is equal to 0.589999982. Therefore the inner-approximation for \(x\) is equal to \([-0.095,0.589999982]\), given that the conditions for the inner-approximation to be non-void are met. Similarly, we compute an outer-approximation for the \(x\) component of \(\varphi\) and find \([-0.1000019635,0.6050019635]\).
The approximations for the \(y\) and \(\theta\) components of \(\varphi\) are computed similarly, see the appendix, Section J, for the detailed computation. We obtain for \(y\) the inner-approximation \([-0.1,0.1]\) and over-approximation \([-0.1076309,0.1076309]\), and for \(\theta\) the inner-approximation \([-0.01,0.01]\) and over-approximation \([-0.02,0.02]\).
All these results are very close to the ones obtained in Section 1.3 with quantifier elimination1, but are obtained here with a much smaller complexity.
Footnote 1: Note though that the linearizations we used for simplifying the formulas given to the quantifier elimination tool are slightly over-approximated (especially in the \(y\) component).
## 3 Approximations in the case of vector-valued functions
Outer-approximations in the general case when \(f\) goes from \(\mathbb{R}^{p}\) to \(\mathbb{R}^{m}\) for any strictly positive value of \(m\) are directly obtained by the Cartesian product of the ranges obtained separately by the method of Section 2.2 on each component of \(f\). The case of inner-approximations is more involved, since a Cartesian product of inner-approximations is not in general an inner-approximation.
In this section, we generalize the method of [21] to the case of arbitrary quantified formulas. We begin by a simple example, before stating the result for the general case in Theorem 2.
_Example 6_.: Suppose we want to inner approximate the following set \(R_{\forall\exists\forall\exists}(f)\) for a function \(f\) with two components \(f_{1}\) and \(f_{2}\):
\[R_{\forall\exists\forall\exists}(f)=\{z\ |\ \forall x_{1},\ \exists x_{2},\ \exists x_{3},\ \forall x_{4},\ \exists x_{5},\ \exists x_{6},z=f(x)\}.\]
The main idea is that we can rely on the conjunction of quantified formulas for each component if no variable is existentially quantified for several components. We thus transform the quantified formulas if necessary, strengthening them for that objective, which is sound with respect to computing inner-approximations. For example, here we can interpret, for all \(z_{1}\) and \(z_{2}\):
\[\forall x_{1},\ \forall x_{2},\ \exists x_{3},\ \forall x_{4},\ \forall x_{5},\ \exists x_{6},\ z_{1}=f_{1}(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}) \tag{12}\] \[\forall x_{1},\ \forall x_{3},\ \exists x_{2},\ \forall x_{4},\ \forall x_{6},\ \exists x_{5},\ z_{2}=f_{2}(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}) \tag{13}\]
Then we get Skolem functions: \(x_{3}(z_{1},x_{1},x_{2})\) and \(x_{6}(z_{1},x_{1},x_{2},x_{3},x_{4},\)\(x_{5})\) from Equation (12), such that \(z_{1}=f_{1}(x_{1},x_{2},x_{3}(z_{1},x_{1},x_{2}),x_{4},x_{5},x_{6}(z_{1},x_{1}, x_{2},x_{3},\)\(x_{4},x_{5}))\) and \(x_{2}(z_{2},x_{1},x_{3})\) and \(x_{5}(z_{2},x_{1},x_{2},x_{3},x_{4},x_{6})\) from (13), such that \(z_{2}=f_{2}(x_{1},\ x_{2}(z_{2},x_{1},x_{3}),x_{3},x_{4},x_{5}(z_{2},x_{1},x_{2},x_{3},x_{4},x_{6}),x_{6})\). Supposing that \(f_{1}\) and \(f_{2}\) are elementary functions, these Skolem functions can be chosen to be continuous [16, 18]. Consider now functions \(g_{z_{1},z_{2}}:\mathbb{R}^{6}\rightarrow\mathbb{R}^{6}\) defined by
\[g(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6})=(x_{1},x_{2}(z_{2},x_{1},x _{3}),x_{3}(z_{1},x_{1},x_{2}),x_{4},\] \[x_{5}(z_{2},x_{1},x_{2},x_{3},x_{4},x_{6}),x_{6}(z_{1},x_{1},x_{2 },x_{3},x_{4},x_{5}))\]
for all \((z_{1},z_{2})\in\boldsymbol{z}_{1}\times\boldsymbol{z}_{2}\). This is a continuous function as composition of continuous functions, from \(\boldsymbol{x}=\boldsymbol{x}_{1}\times\boldsymbol{x}_{2}\times\ldots\times \boldsymbol{x}_{6}\) to itself.
By Brouwer's fixpoint theorem, we have fixpoints \(x_{3}^{\infty}(z_{1},z_{2},\)\(x_{1})\), \(x_{6}^{\infty}(z_{1},z_{2},\)\(x_{1},x_{4})\), \(x_{2}^{\infty}(z_{1},z_{2},x_{1})\) and \(x_{5}^{\infty}(z_{1},z_{2},x_{1},x_{4})\), for all values of \(z_{1},z_{2},x_{1},x_{4}\) (\(x_{1}\) and \(x_{4}\) being the existentially quantified input variables of Equation (6)), such that \(x_{3}^{\infty}(z_{1},z_{2},x_{1})=x_{3}(z_{1},x_{1},x_{2}^{\infty}(z_{1},\)\(z_{2},x_{1}))\), \(x_{6}^{\infty}(z_{1},z_{2},x_{1},x_{4})=x_{6}(z_{1},x_{1},x_{2}^{\infty}(z_{1},\)\(z_{2},x_{1}),x_{3}^{\infty}(z_{1},z_{2},x_{1}),\)\(x_{4},x_{5}^{\infty}(z_{1},z_{2},x_{1},x_{4}))\), \(x_{2}^{\infty}(z_{1},z_{2},x_{1})=x_{2}(z_{2},x_{1},x_{3}^{\infty}(z_{1},\)\(z_{2},x_{1}))\) and \(x_{5}^{\infty}(z_{1},z_{2},x_{1},x_{4})=x_{5}(z_{2},x_{1},x_{2}^{\infty}(z_{1},z_{2},x_{1}, x_{4}),x_{3}^{\infty}(z_{1},z_{2},x_{1},x_{4}),\)\(x_{4},\)\(x_{6}^{\infty}(z_{1},z_{2},x_{1},x_{4}))\). This implies that for all \((z_{1},z_{2})\in\boldsymbol{z}\) and for all \(x_{1}\), \(x_{4}\):
\[z_{1}=f_{1}(x_{1},x_{2}^{\infty}(z_{1},z_{2},x_{1}),x_{3}^{\infty}(z_{1},x_{1},x_{4}),x_{4},x_{5}^{\infty}(z_{1},z_{2},x_{1},x_{4}),x_{6}^{\infty}(z_{1},z_{2 },x_{1},x_{4}))\]
\[z_{2}=f_{2}(x_{1},x_{2}^{\infty}(z_{1},z_{2},x_{1}),x_{3}^{\infty}(z_{1},z_{2 },x_{1}),x_{4},x_{5}^{\infty}(z_{1},z_{2},x_{1},x_{4}),x_{6}^{\infty}(z_{1},z_{2 },x_{1},x_{4}))\]
allowing to deduce an inner-approximation of \(R_{\forall\exists\forall\exists}(f)\) since:
\[\forall z,\ \forall x_{1},\ \forall x_{4},\ \exists x_{2}=x_{2}^{\infty}(z_{1},z_{2},x _{1}),\ \exists x_{3}=x_{3}^{\infty}(z_{1},z_{2},x_{1}),\]
\[\exists x_{5}=x_{5}^{\infty}(z_{1},z_{2},x_{1},x_{4}),\ \exists x_{6}=x_{6}^{\infty}(z_{1},z_{2},x_{1},x_{4}),\ z=f(x_{1},x_{2},x_{3},x_{4 },x_{5},x_{6})\]
is equivalent to: \(\forall z,\ \forall x_{1},\ \exists x_{2}=x_{2}^{\infty}(z_{1},z_{2},x_{1}),\ \exists x_{3}=x_{3}^{\infty}(z_{1},z_{2},x_{1}),\ \forall x_{4},\)
\(\exists x_{5}=x_{5}^{\infty}(z_{1},z_{2},x_{1},x_{4}),\ \exists x_{6}=x_{6}^{\infty}(z_{1},z_{2},x_{1},x_{4}),\ z=f(x_{1},x_{2},x_{3},x_{4 },x_{5},x_{6})\).
In Theorem 2, we formalize this for any number of quantifier alternations and any dimension for \(\boldsymbol{z}\). The principle is similar to the approach used in [21] for the joint range in the case of \(\forall\exists\) formulas.
As previously, we are going to solve the quantified problem \(R_{\boldsymbol{p}}(f)\) where sets \(J_{A}^{i}=\{k_{2i-1}+1,\ldots k_{2i}\}\) and \(J_{E}^{i}=\{k_{2i}+1,\ldots,k_{2i+1}\}\) for \(i=1,\ldots,n\) define the \(n\) sequences of indices of variables that are universally quantified (\(J_{A}^{i}\)) and existentially quantified (\(J_{E}^{i}\)).
The principle is to choose, for each existentially quantified variable \(x_{j}\), a unique component of \(f\) (among the \(m\) ones) in which \(x_{j}\) will be used with an existential quantifier, in one of the \(m\) scalar quantified problems to solve. In the \(m-1\) remaining quantified problems, \(x_{j}\) will be universally quantified. This choice is described by the functions \(\pi^{i}\) in Theorem 2. There are \(n\) such functions, one for each existential block appearing in the quantified problem \(R_{\boldsymbol{p}}(f)\). This is Theorem 3 of [21] generalized to arbitrary alternations of quantifiers \(\forall\exists\).
Theorem 2: _Let \(f:\mathbb{R}^{p}\to\mathbb{R}^{m}\) be an elementary function and \(\pi^{i}:\{k_{2i}+1,\ldots,k_{2i+1}\}\to\{1,\ldots,m\}\) for \(i=1,\ldots,n\). Let us note, for all \(i\in\{1,\ldots,n\}\) and \(j\in\{1,\ldots,m\}\), \(J_{E,z_{j}}^{i}=\{l\in\{k_{2i}+1,\ldots,k_{2i+1}\},\;\pi^{i}(l)=j\}\) and \(J_{A,z_{j}}^{i}=\{k_{2i-1}+1,\ldots,k_{2i+1}\}\setminus J_{E,z_{j}}^{i}\). Consider the following \(m\) quantified problems, \(j\in\{1,\ldots,m\}\):_
\[\forall z_{j}\in\boldsymbol{z}_{j},\;(\forall x_{l}\in[-1,1])_{l\in J_{A,z_{j}}^{1}},\;(\exists x_{l}\in[-1,1])_{l\in J_{E,z_{j}}^{1}},\ldots\\ (\forall x_{l}\in[-1,1])_{l\in J_{A,z_{j}}^{n}},\;(\exists x_{l}\in[-1,1])_{l\in J_{E,z_{j}}^{n}},\;z_{j}=f_{j}(x_{1},\ldots,x_{k_{2n+1}})\]
_Then \(\boldsymbol{z}=\boldsymbol{z}_{1}\times\boldsymbol{z}_{2}\times\ldots\times\boldsymbol{z}_{m}\), if non-empty, is an inner-approximation of \(R_{\boldsymbol{p}}(f)\) defined in Equation (2)._
The proof is a generalization of the example given in the beginning of this section, and is detailed in Section 0.K.
Remark 7: It is possible to use skewed boxes, which can be much tighter than the boxes of Theorem 2, using ideas similar to [22]. There are also simple heuristics that allow us to avoid going through the combinatorics of potential choices when looking for the best possible inner-approximation. The sensitivity of each output to the variables is computed as part of our algorithm, and the best choices are the ones that quantify universally the variables with lower sensitivity and existentially the variables with higher sensitivity, since the latter give higher contributions to the inner-approximations.
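A minimal sketch of this heuristic, under the assumption that a per-variable, per-component sensitivity estimate (e.g. a bound on the absolute partial derivative) is available: each existentially quantified variable is assigned, via \(\pi\), to the component on which it has the highest sensitivity, and every variable not assigned to a component is quantified universally in that component's scalar problem, as in Theorem 2. The block/alternation structure and the tie-breaking used by the tool are ignored here for brevity; all names are ours.

```python
def assign_existentials(sensitivity, exist_vars, m):
    """pi: existential variable -> component with the largest sensitivity.
    `sensitivity[j][i]` estimates the influence of variable j on component i."""
    return {j: max(range(m), key=lambda i: sensitivity[j][i]) for j in exist_vars}

def per_component_quantifiers(pi, all_vars, exist_vars, m):
    """For each component i: keep existential only the variables assigned to i,
    and quantify every other variable universally (cf. Theorem 2)."""
    plans = []
    for i in range(m):
        E = [j for j in exist_vars if pi[j] == i]
        A = [j for j in all_vars if j not in E]
        plans.append((A, E))
    return plans
```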
Example 7: Consider the function \(f=(f_{1},f_{2}):\mathbb{R}^{4}\to\mathbb{R}^{2}\):
\[\begin{array}{l}f_{1}(x_{1},x_{2},x_{3},x_{4})=2+2x_{1}+x_{2}+3x_{3}+x_{4}\\ f_{2}(x_{1},x_{2},x_{3},x_{4})=-1-x_{1}-x_{2}+x_{3}+5x_{4}\end{array}\]
We want to find the disturbance set
\[R_{\exists\forall\exists}(f)=\{z\in\mathbb{R}^{2}|\exists x_{1} \in[-1,1],\;\forall x_{2}\in[-1,1],\;\exists x_{3}\in[-1,1],\\ \exists x_{4}\in[-1,1],\;z=f(x_{1},x_{2},x_{3},x_{4})\} \tag{14}\]
An outer-approximation for \(R_{\exists\forall\exists}(f)\) is found to be \([-3,7]\times[-7,5]\), using the 1D computation of the previous section, one component at a time.
Now, there are several possible quantified formulas giving a 2D inner-approximation. One of them is,
\[\framebox{$\exists x_{1}$},\ \forall x_{2},\ \forall x_{4},\ \framebox{$\exists x_{3}$},\ z_{1}=f_{1}(x_{1},x_{2},x_{3},x_{4}) \tag{15}\] \[\forall x_{1},\ \forall x_{2},\ \forall x_{3},\framebox{$\exists x_{4}$},\ z_{2}=f_{2}(x_{1},x_{2},x_{3},x_{4}) \tag{16}\]
The conditions of Proposition 1 for obtaining a non-empty inner-approximation are met and we get for Equation (15):
\[\begin{array}{l}[\ z_{1}^{c}-||\Delta_{x_{1}}||+||\Delta_{x_{2},x_{4}}||-|| \Delta_{x_{3}}||,z_{1}^{c}+||\Delta_{x_{1}}||-||\Delta_{x_{2},x_{4}}||+|| \Delta_{x_{3}}||]\\ =[\ 2\ \ \ \ \ \ -2\ \ \ \ \ +1+1\ \ \ \ \ -3,\ \ \ 2\ \ \ +2\ \ \ \ \ -1-1\ \ \ \ +3]\end{array}\]
which is equal to \([-1,5]\), and for Equation (16):
\[\begin{array}{l}[\ z_{2}^{c}+||\Delta_{x_{1},x_{2},x_{3}}||-||\Delta_{x_{4}}||,\ z_{2}^{c}-||\Delta_{x_{1},x_{2},x_{3}}||+||\Delta_{x_{4}}||]\\ =[\ -1\ \ +1+1+1\ \ -5,\ \ \ -1\ \ -1-1-1\ \ +5]\end{array}\]
which is equal to \([-3,1]\). Hence \([-1,5]\times[-3,1]\) is in the set \(R_{\exists\forall\exists}(f)\).
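The computation above is easy to script. The sketch below redoes Equations (15) and (16) for the affine components of Example 7: universally quantified variables shrink the interval around the center by \(|\delta_{j}|\) and existentially quantified ones widen it by \(|\delta_{j}|\). It assumes, as checked in the text, that the non-emptiness conditions of Proposition 1 hold; the function name and data layout are ours.

```python
def affine_robust_interval(center, deltas, exist):
    """Interval for one affine component c + sum_j deltas[j] * x_j, x_j in [-1,1],
    with the variables in `exist` existentially quantified and the others
    universally quantified (non-emptiness conditions assumed to hold)."""
    gain = sum(abs(d) for j, d in deltas.items() if j in exist)
    loss = sum(abs(d) for j, d in deltas.items() if j not in exist)
    width = gain - loss
    return (center - width, center + width) if width >= 0 else None

z1 = affine_robust_interval(2.0, {1: 2.0, 2: 1.0, 3: 3.0, 4: 1.0}, exist={1, 3})
z2 = affine_robust_interval(-1.0, {1: -1.0, 2: -1.0, 3: 1.0, 4: 5.0}, exist={4})
print(z1, z2)   # (-1.0, 5.0) (-3.0, 1.0), i.e. the inner box [-1,5] x [-3,1]
```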
These inner and outer-approximations, together with the exact robust joint range, are depicted in the figure on the right-hand side: we represented some particular points of the image by \(z^{1}\) to \(z^{13}\); the inner and outer boxes represent the inner and outer-approximations \([-1,5]\times[-3,1]\) and \([-3,7]\times[-7,5]\); finally the polyhedron lying in between is the exact robust image. Other possibilities are discussed in Appendix 0.L.
Example 8: (Generalized robust reachability for the Dubbins vehicle). We consider the following problem, which is a slight space relaxation of the original problem solved in 1D in Example 5:
\[R_{\exists\forall\exists}(\varphi)=\{(x,y,\theta)\mid\exists a \in[-0.01,0.01],\ \exists x_{0}\in[-0.1,0.1],\ \exists y_{0}\in[-0.1,0.1],\] \[\exists\theta_{0}\in[-0.01,0.01],\ \forall b_{1}\in[-0.01,0.01],\ \exists t\in[0,0.5],\ \exists \delta_{2}\in[-1.31e^{-4},1.31e^{-4}],\] \[\exists\delta_{3}\in[-0.005,0.005],\ (x,y,\theta)=\varphi(t;x_{0},y_{0}, \theta_{0},a,b_{1})+(0,\delta_{2},\delta_{3})\} \tag{17}\]
where \(\varphi\) is the flow map for the Dubbins vehicle of Example 1. This means we want to characterize precisely which abscissa \(x\) can be reached for some control \(a\), whatever the disturbance \(b_{1}\). We allow here a relaxation in space and will determine an inner-approximation of the sets of ordinate \(y\) and angle \(\theta\) which can be reached with control \(a\) whatever disturbance \(b_{1}\), up to a small tolerance of \(1.309\ 10^{-4}\) for the ordinate and \(0.005\) for \(\theta\).
The outer-approximation for \(R_{\exists\forall\exists}(\varphi)\) is easy to find from the outer-approximations of each component of \(\varphi\) we already computed in Example 5. We just need to add the extra contributions of \(\delta_{2}\) to \(y\) and \(\delta_{3}\) to \(\theta\), giving
\[R_{\exists\forall\exists}(\varphi)\subseteq[-0.10000196,0.60500196]\times[-0.1077618,0.1077618]\times[-0.025,0.025]\]
In order to find an inner-approximation of \(R_{\exists\forall\exists}(\varphi)\), we interpret the following quantified formulas (with the same interval bounds as in Equation (17) for the inputs):
\(\forall a,\ \forall y_{0},\forall\theta_{0},\ \framebox{$\exists x_{0}$},\ \forall b_{1},\ \forall \delta_{2},\ \forall\delta_{3},\ \framebox{$\exists t$},\ \ x=\varphi_{x}(t;x_{0},y_{0}, \theta_{0},a,b_{1})\\ \forall a,\ \forall x_{0},\forall\theta_{0},\ \framebox{$\exists y_{0}$},\ \forall b_{1},\ \forall \delta_{3},\ \forall t,\framebox{$\exists\delta_{2}$},\ \ y=\varphi_{y}(t;x_{0},y_{0}, \theta_{0},a,b_{1})+\delta_{2}\\ \forall x_{0},\ \forall y_{0},\framebox{$\exists\theta_{0}$},\ \forall b_{1},\ \forall \delta_{2},\ \forall t,\framebox{$\exists\delta_{3}$},\ \theta=\varphi_{\theta}(t;x_{0},y_{0}, \theta_{0},a,b_{1})+\delta_{3}\ \text{and find}\\ \left[-0.0949993455,0.5899993275\right]\times\left[-0.0925,0.0925\right]\ \times\left[-0.01,0.01\right]\subseteq R_{\exists\forall \exists}(\varphi).\]
Note that we were not able to obtain an estimate of the solution of this joint quantified problem (translated using the linearization of \(\varphi\) from Example 3) with Mathematica; it resulted in a timeout.
## 4 Implementation and benchmarks
We implemented the method, including the non-linear case of Theorem 1 and the vector-valued case of Theorem 2 in Julia, using packages LazySets for manipulating boxes (Hyperrectangles) and Symbolics for automatic differentiation.
We ran the benchmarks reported in Table 1 on a MacBook Pro 2.3GHz Intel Core i9 with 8 cores, measuring timings with the Benchmark Julia package. The columns # vars, dim, # alternations, non-linear, time, inner/sample and outer/sample denote, respectively, for each benchmark: the number of quantified variables, the dimension of the image of the function considered, the number of alternations \(\forall/\exists\), whether the function considered is non-linear, the time the analyzer took to compute both the inner and the outer-approximation of the quantified reachability problem, and the estimated ratio of the width of the inner-approximation, then of the outer-approximation, for each component of the function, with respect to the estimate obtained by sampling2.

| Benchmark | # vars | dim | # alternations | non-linear | time (s) | inner/sample | outer/sample |
|---|---|---|---|---|---|---|---|
| Ex11 | 3 | 1 | 2 | ✓ | 0.29 | 0.33 | 2.12 |
| Ex4 | 4 | 1 | 2 | ✓ | 0.32 | 1 | 1.03 |
| Ex7 | 4 | 2 | 2 | | 0.21 | (0.78, 0.40) | (1.30, 1.21) |
| Linear-2 | 4 | 1 | 2 | | 0.43 | 1 | 1 |
| Linear-5 | 10 | 1 | 5 | | 0.4 | 1 | 1 |
| Linear-10 | 20 | 1 | 10 | | 0.41 | 1 | 1 |
| Linear-25 | 50 | 1 | 25 | | 0.47 | 1 | 1 |
| Linear-50 | 100 | 1 | 50 | | 0.58 | 1 | 1 |
| Linear-100 | 200 | 1 | 100 | | 0.91 | 1 | 1 |
| Linear-500 | 1000 | 1 | 500 | | 8.1 | 1 | 1 |
| Linear-1000 | 2000 | 1 | 1000 | | 28.25 | 1 | 1 |
| Motion-2 | 7 | 1 | 3 | ✓ | 0.62 | – | – |
| Motion-5 | 14 | 1 | 6 | ✓ | 0.76 | – | – |
| Motion-10 | 24 | 1 | 11 | ✓ | 1.06 | – | – |
| Motion-25 | 54 | 1 | 26 | ✓ | 9.4 | – | – |
| Motion-50 | 104 | 1 | 51 | ✓ | 148.68 | – | – |

Table 1: Benchmark for quantified reachability problems
Footnote 2: Sampling is too slow and imprecise when the number of variables grows, hence we could not use it in the case of Motion-\(k\), \(k>2\). For Motion-2, it terminates but with at most 30 samples per dimension, and in dimension 7, this is not representative. In the case of Linear-\(k\), the estimate is always one since our method is exact in 1D, for linear functions.
In this table, \(\mathrm{Ex}k\) correspond to Example \(k\) of this paper, Linear-\(k\) are random linear functions on \(2k\) variables, quantified as \(\forall,\ \exists\ k\) times, and Motion-2 to 50 are several instances of a motion planning problem of the same type as the one of Equation 7. Motion-\(k\) consists of the \(x\) component of the same perturbed Dubbins vehicle as modeled in Example 1, see Appendix 0.M, with \(k\) control steps, generating \(3+2k\) variables and \(k+1\) quantifier alternations. These variables are the \(k\) (angular) controls \(a_{i}\), the \(k\) perturbations \(b_{i}\) and the two initial conditions on \(x\) and \(\theta\). The function, from \(\mathbb{R}^{4+2k}\) to \(\mathbb{R}\), that expresses the dynamics at the \(k\)th control step is a sum of \(2k\) sine functions evaluated on sums of 1 to \(k+1\) variables, plus a sum of \(k+1\) variables.
The theoretical complexity of our method, both for inner and for outer-approximation, for a \(n\) dimensional vector-valued quantified problem on \(p\) quantified variables, is of the order of \(n\) times the complexity of a 1D quantified problem on \(p\) quantified variables. Each of these 1D problems has a cost of the order of \(p\) times the cost of the evaluation of the function on a (center) point plus the cost of evaluation of its Jacobian on an interval. In the Linear-\(k\) problem, the cost of evaluation of the function on a point is of the order of \(k\), and for the Jacobian, apart from the cost of the automatic differentiation, it is of the order of \(k\) again. The resolution time can slightly decrease for higher-dimensional problems, which is due to the fact that some of these random problems are found to have empty quantified reachable sets already with few quantifiers. In the Motion-\(k\) problem, which has always a non-empty quantified reachable set, the cost of evaluation of the function on a point is of the order of \(k^{2}\), and for the Jacobian, it is of the order of \(k^{3}\) without the cost of the automatic differentiation.
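For reference, sampling estimates of the kind used for the last two columns of Table 1 can be obtained with a procedure of the following form (this is our own rough reconstruction; the exact sampling method of Remark 4 may differ): variables are sampled on finite grids, the image values are binned, and the quantifier structure is interpreted by taking unions of bins over existential samples and intersections over universal samples.

```python
import bisect

def sample_estimate(f, quantifiers, grids, z_edges):
    """Rough grid-sampling estimate of R_p(f) for a scalar-valued f.
    `quantifiers[k]` is 'A' (forall) or 'E' (exists) for variable k,
    `grids[k]` its sample values, and `z_edges` sorted bin edges for z.
    Returns the set of z-bin indices estimated to belong to R_p(f)."""
    def z_bin(z):
        return bisect.bisect_left(z_edges, z)

    def recurse(prefix, k):
        if k == len(quantifiers):
            return {z_bin(f(*prefix))}
        children = [recurse(prefix + [v], k + 1) for v in grids[k]]
        if quantifiers[k] == 'E':
            return set().union(*children)        # exists: union over samples
        return set.intersection(*children)       # forall: intersection

    return recurse([], 0)
```

The cost of such an estimate is exponential in the number of quantified variables, which is consistent with the footnote above: sampling becomes unusable for the larger Motion-\(k\) instances.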
## 5 Conclusion
In this article, we designed a method for inner- and outer-approximating general quantified reachability problems, which is essentially an order-0 method generalizing mean-value theorems. In future work, we plan to describe higher-order methods, generalizing the higher-order methods of [22]. We will also consider preconditioning and quadrature formulas for general quantified formulas.
Finally, we intend to generalize this work to other kinds of quantified problems where the objective is to find a set \(R\) such that the quantified predicate is \(f(\vec{x}_{1},\vec{x}_{2},\ldots,\vec{x}_{2n})\in R\), and not an equality predicate as in this work. This should be most useful for finding generalized invariant sets, in addition to generalized reachable sets. |
2309.11766 | Dictionary Attack on IMU-based Gait Authentication | We present a novel adversarial model for authentication systems that use gait
patterns recorded by the inertial measurement unit (IMU) built into
smartphones. The attack idea is inspired by and named after the concept of a
dictionary attack on knowledge (PIN or password) based authentication systems.
In particular, this work investigates whether it is possible to build a
dictionary of IMUGait patterns and use it to launch an attack or find an
imitator who can actively reproduce IMUGait patterns that match the target's
IMUGait pattern. Nine physically and demographically diverse individuals walked
at various levels of four predefined controllable and adaptable gait factors
(speed, step length, step width, and thigh-lift), producing 178 unique IMUGait
patterns. Each pattern attacked a wide variety of user authentication models.
The deeper analysis of error rates (before and after the attack) challenges the
belief that authentication systems based on IMUGait patterns are the most
difficult to spoof; further research is needed on adversarial models and
associated countermeasures. | Rajesh Kumar, Can Isik, Chilukuri K. Mohan | 2023-09-21T04:00:21Z | http://arxiv.org/abs/2309.11766v2 | # Dictionary Attack on IMU-based Gait Authentication
###### Abstract.
We present a novel adversarial model for authentication systems that use gait patterns recorded by the inertial measurement unit (IMU) built into smartphones. The attack idea is inspired by and named after the concept of a dictionary attack on knowledge (PIN or password) based authentication systems. In particular, this work investigates whether it is possible to build a dictionary of _IMUGait_ patterns and use it to launch an attack or find an imitator who can actively reproduce _IMUGait_ patterns that match the target's _IMUGait_ pattern. Nine physically and demographically diverse individuals walked at various levels of four predefined controllable and adaptable gait factors (speed, step length, step width, and thigh-lift), producing 178 unique _IMUGait_ patterns. Each pattern attacked a wide variety of user authentication models. The deeper analysis of error rates (before and after the attack) challenges the belief that authentication systems based on IMUGait patterns are the most difficult to spoof; further research is needed on adversarial models and associated countermeasures.
Authentication, Dictionary Attack, Presentation attack, Gait recognition, IMU sensor, Wearable
device design, an attack would be pointless [13, 55]. On the contrary, circumvention by imitation does not require a write-permission to the device and is immune to template protection techniques and encrypted communication channels [49, 50, 53]. Circumvention by imitation might require either stolen biometric samples from the targeted user [54] or a huge database of biometric samples [53]. It is not impossible to obtain biometric samples from a targeted user, nor to download a publicly available database [53, 55]. Most imitation-based circumvention studies reported that IMU-based gait is one of the most difficult traits to imitate [12, 14, 16, 18, 19, 36, 37, 38, 41, 58]. The reason cited by these studies revolved around the inability to teach imitators (even trained actors) to copy someone else's walking pattern due to the imitator's own physiological and psychological boundaries. Purposeful and repeated circumvention of _IMUGait_, as required for defeating continuous recognition of users, is even more difficult [8].
However, studies such as [27, 30, 39, 58] have suggested otherwise, i.e., it is possible to circumvent IMU-based gait recognition if adversaries have access to the resources, they put in the required effort, and if there is strong motivation to do so. Considering the National Institute of Standards and Technology guideline on _effort_ as a factor in circumvention attempts [46], the circumvention of _IMUGait_ can be divided into three categories viz. _zero-effort_, _minimal-effort_, and _high-effort_. The basis for categorization is the number of attempts or amount of effort imitators make and the level of training and assistance they receive during the imitation process. The terms _zero-effort_ (or friendly scenario, or random attempt) and _minimal-effort_ (or hostile scenario) have been used in past studies [14, 16, 18, 19]. The _zero-effort_ imitation refers to the scenarios in which the imitator makes no deliberate attempt to imitate the targeted user. In contrast, the _minimal-effort_ imitation attempt is about choosing the imitators carefully (e.g., trained actors or people of similar physical characteristics or gender as the target) and deliberately attempting to copy the targeted user's IMU-based gait patterns. The amount of training and the number of imitation attempts in the minimal effort could be limited to walking side by side, watching videos or plots of accelerometer readings, and making not more than a couple of imitation attempts. On the other hand, _high-effort_ mimicry may refer to the cases in which the imitators are trained for days to weeks via statistical feedback in addition to streaming videos (or verbal feedback) [30, 36, 37, 38]. The _high-effort_ mimicry might also refer to designing a robot or assisting human imitators with machines like a treadmill in addition to video-based (or verbal) feedback [27, 30].
The high-effort attacks [27, 30] mainly utilize a feedback loop-based mechanism that requires the attackers to train an individual to mimic another individual. As the name suggests, launching a high-effort attack could be tedious in practice compared to minimal and zero-effort attacks. Nonetheless, designing high-effort attacks has provided insight that this paper exploits to develop a comparatively more practical attack than the feedback loop-based training process. The insight was that an individual's walking patterns, produced at different levels of the controllable and adaptable factors that dictate IMU-based gait, form a gait spectrum. The "gait spectrum" consists of various reproducible gait patterns and can be seen as a dictionary of IMU-based gait passwords. We hypothesized that the dictionary of IMU-based gait patterns, if it includes sufficiently many factors coupled with a physically and demographically diverse group of individuals, could help (1) systematically find imitators who will most likely be able to imitate, making the feedback loop-based imitation process much easier, and (2) reproduce the samples that are needed to fool even a continuous verification system, at the attacker's will.
To this end, we summarize the key contributions of this paper as follows:
* We created a dictionary of 178 unique _IMUGait_ patterns, each containing more than 100 steps of walking data, by recruiting nine physically and demographically diverse individuals who walked at varying levels (degrees) of four gait factors.
* We created 75 baseline models for each of the 55 genuine users utilizing five classifiers and all possible combinations of four sensors available in the IMUs of the smartphone.
* We evaluated the impact of the proposed dictionary attack on these baselines at the classifier and user levels.
The rest of the paper is organized as follows. Section 2 describes the closely related works and distinguishes them from the presented work, Section 3 presents the proposed attack, Section 4 lays out the experimental details, Section 5 discusses the results and limitations, and Section 6 concludes the paper with possible future directions for extension 1.
Footnote 1: Codebase: [https://github.com/rajschjinu2006/DictionaryAttackOnIMUGait](https://github.com/rajschjinu2006/DictionaryAttackOnIMUGait)
## 2. Related Work
Prior studies on _zero-_ and _minimal-_ effort attacks are [14, 16, 18, 19, 41] while those that fall under the _high-effort_ category are [27, 30, 36, 58]. Gafurov et al. [14, 16, 18, 19] conducted a series of experiments to evaluate the security of _IMUGait_-based authentication systems against imitation. The series of studies suggested that imitation is not a substantial threat; however, individuals of the same gender or the closest person in the database could be a potential threat. For a wider coverage of zero- and minimal-effort attacks, we refer to the related work section of Kumar et al. [27], which covers [14, 16, 18, 19, 36, 41] in sufficient detail. Here, we summarize previous high-effort attack attempts [27, 30, 36, 37, 38, 41, 58].
The first experiment under the _high-effort_ category was conducted by Stang et al. [58] in 2007. Stang et al. [58] provided plots of accelerometer readings in \(x\), \(y\), and \(z\) dimensions on a big screen and a match score (Pearson's correlation coefficient computed between the resultant acceleration of the imitator and the target) as feedback. The study analyzed five gait templates from one user, collected under different speed and step length settings. Thirteen imitators attempted to match each template fifteen times without prior knowledge of the target's walking patterns. Some imitators surpassed a 50% match, suggesting that gait patterns could be imitated with rigorous training. However, Mjaaland et al. [36, 37, 38] criticized the study's conclusion due to its limited sample size (only one individual) and the reliance on Pearson's correlation coefficient to measure success.
Mjaaland et al. [36, 37, 38] conducted a study to determine whether extensive training could improve an imitator's ability to mimic a target's gait. They used regression analysis to analyze the imitators' learning curves and divided the imitation process into three scenarios: friendly, short-term hostile, and long-term hostile. The friendly scenario served as a baseline and involved collecting regular walking patterns from 50 participants, resulting in an Equal Error Rate (EER) of 6.2%; the EER is the point on the Receiver Operating Characteristic curve where the false acceptance rate and the false rejection rate are equal. In the short-term hostile scenario, one target and six imitators were chosen based on gait stability, Dynamic Time Warping (DTW) distance, and eagerness to participate. Each imitator made five attempts, with feedback provided between sessions. Despite the training, no imitator could circumvent the authentication system, as they could not breach their physiological boundaries and match all the target's traits. Over-training led to more unnatural and mechanical walking patterns. In the long-term hostile scenario, one imitator was selected for six weeks of training. While the imitator showed multiple plateaus, the uncertainty and insufficient data prevented the authors from drawing any strong conclusions. The study highlighted that statistical feedback was more helpful than visual feedback and that simultaneously concentrating on different gait factors was challenging. The authors concluded that gait imitation is a difficult task.
Muaaz et al. (Muaaz et al., 2017) conducted a study in 2017 to investigate the resilience of _IMUGait_ authentication systems to impersonation attempts. They examined three circumvention scenarios: zero-effort, reenact, and coincide. The zero-effort scenario collected data from 35 participants, with user-specific models trained, considering the rest as impostors. Under the reenact and coincide scenarios, nine of the 35 participants participated in the impersonation process, with five acting as imitators and four as victims. The imitators were trained mime artists skilled in mimicking body motions and language. In the reenact phase, imitators observed and rehearsed the target's gait movements for 10 minutes. In the coincide phase, they received live feedback comparing their gait to the target's. Using Dynamic Time Warping (DTW) distances to measure success, no attacker achieved the threshold that genuine users easily attained. The best attempt matched only 25%, well below the 50% threshold. The system removed 27% of imitation attempts as outliers, concluding that the more imitators tried, the more outlying gait cycles they produced. The results suggested that _IMUGait_ authentication is resilient to impersonation-based circumvention attempts.
Kumar et al. (Kumar et al., 2017; Kumar et al., 2018) focused on producing the sensor readings rather than reproducing a visually similar walk to the target. They observed critical gait factors that can be controlled and adapted and strongly correlate with the features extracted from IMUs readings. They used a treadmill to breach the physiological boundaries of the imitators and developed a feedback-based attack model, a proven mechanism to alter human behavior, to train the imitators. They reported that the attack increased the average false accept rates from 4 to 26% on a dataset of 18 users with just one imitator.
Building on the previous studies and motivated by dictionary attacks on PIN and password-based authentication systems, our work emphasized creating a dictionary of IMU-based gait patterns. The patterns in the dictionary were produced by nine carefully recruited individuals who walked at different speeds, step lengths, step widths, and thigh lifts, producing 178 unique gait patterns, each with a minimum of 100 steps. The attack with the closest sample in the dictionary increased the average false accept rates to 32-40%. Such an increase from 6-14% to 32-40% raises a serious question about the security offered by IMU-based gait authentication.
## 3. The Proposed Attack
We present a novel threat model to evaluate the inherent vulnerabilities of IMU-based gait authentication systems. There are two actors in the attack process: the _target_ users and _attackers_. The _target_ users are legitimate users whose biometric gait data is used to train machine learning classifiers for authentication. _Attackers_, on the other hand, are malicious entities attempting to gain unauthorized access by mimicking the gait patterns of legitimate users. We assume the attacker has a dictionary of pre-recorded gait patterns. The dictionary's keys consist of the details about the imitator and the settings at which the corresponding values were produced. We assume that given a target gait pattern, the attackers would be able to find the closest gait pattern in the dictionary based on a predefined metric and ask the person associated with that gait pattern to reproduce the pattern in real-time by walking at the saved settings, to deceive the system. Our findings (refer to Section 5) show that more than one _attacker_ could trigger false acceptance with real-time physical behavior. We assume the following capabilities for the attacker:
_Knowledge of System Architecture_ The attacker need not know the machine learning algorithms and parameters used in the authentication system.
_Data Collection_ The attacker can acquire a range of gait patterns either by breaching databases or by other illicit means, such as luring users to install an app on their phone for a discount or something similar.
_Device Access_ The attacker will have physical access to the device for the attack period.
The proposed end-to-end attack identifies theoretical vulnerabilities and empirically validates them by simulating real-world attack scenarios. The empirical findings indicate that attackers with these capabilities pose a significant risk to the security of IMU-based gait authentication systems, necessitating the development of countermeasures.
## 4. Design of Experiments
### Genuine dataset
The Genuine dataset consists of data collected from individuals who walked naturally, back and forth, in a corridor 328 feet long and 6.5 feet wide. The demographic and physical attributes of the 48 (of 55) participating individuals who provided their details are shown in Figure 1. The data was collected through an Android app installed on an Android phone. The phone was always placed in the right pocket of the participant's trousers. The participants freely (without any instruction) walked two to three rounds in the corridor. The participants then repeated the same exercise a few minutes, hours, or days later, depending on their availability. The idea behind collecting data in separate sessions was to keep the training and testing data separate.
The data collection App was designed to collect data via four specific sensors, namely, _linear_acceleration_, _gyroscope_, _magnetic_field_, and _rotation_vector_ of the Android platform. The _linear_acceleration_ was used to record the phone's acceleration in three physical dimensions (x, y, and z), excluding the force of gravity. The _gyroscope_ captured the angular rotation of the device in radian/second around three physical axes (x, y, and z) of the device. The _magnetic_field_ was
used to record the ambient geomagnetic field for all three physical axes (x, y, z) in micro-Tesla (\(\mu T\)). _Rotation_vector_ recorded the device's orientation as a combination of an angle and an axis, in which the device had rotated through an angle \(\theta\) around an axis \((x,y,z)\). The three elements of the rotation vector are \((x\times sin(\theta/2),y\times sin(\theta/2),z\times sin(\theta/2))\), such that the magnitude of the rotation vector is equal to \(sin(\theta/2)\). The direction of the rotation vector is equal to the direction of the axis of rotation. In this case, the reference coordinates include East, North Pole, and Sky directions.
Overall, the Genuine dataset consists of gait patterns collected from fifty-five individuals. Each individual provided around 320 gait cycles in two separate sessions. As a result, the Genuine dataset consisted of 17, 600 (=\(55\times 320\)) gait cycles in total. Since the authentication models were implemented using multi-class classification algorithms, they required samples (or feature vectors) from genuine and impostor classes. Following previous studies (Han et al., 2014; Kumar et al., 2015; Kumar et al., 2016; Kumar et al., 2017; Kumar et al., 2018; Kumar et al., 2019; Kumar et al., 2020), we used samples from users other than the genuine as impostor samples for evaluating the performance of the baseline authentication models (i.e., under _zero-effort_ attacks).
### Dictionary dataset
For creating the _Dictionary dataset_, we recruited nine imitators following IRB approval from the university. The imitators were carefully chosen, keeping their physical characteristics (height, weight, age, waist, and gender) in mind. The aim was to create a spectrum of IMU-based gait patterns as wide as possible. Any pattern on the spectrum can be reproduced on demand. The requirement of the on-demand reproducibility of these patterns motivated us to store the data in the form of a dictionary in which the keys represented the tuple of the imitator id and a set of four values representing the levels of the four gait factors (speed, step length, step-width, and thigh lift) that are easy to control, adapt, and recreate with access to a treadmill. The dictionary values, on the other hand, represented the gait patterns produced for the corresponding keys. The recruited imitators walked at possible (to the imitators) variations of four gait factors (speed, step length, step width, and thigh lift). Six imitators walked at 21 variations, two at 16 variations, and one at 20 variations of the four gait factors on a treadmill totaling to 178 (= \(21\times 6+1\times 20+2\times 16\)) unique gait patterns. The Speed (SP) varied from 1.4 miles per hour (63 centimeters/second) to 3.0 miles per hour (134 centimeters/second) at the interval of 0.2 miles per hour (9 centimeters/second), constituting a total of 9 speed settings. The step length (SL) varied from short to normal, long, and longer. The step width (SW) varied from close, normal, wide, and wider. Similarly, thigh-lift (TL) varied from back, normal, front, and up. Each entry in the dictionary consisted of at least 100 steps summing to at least \(8,900\) (= \(178\times 50\)) gait cycles in the Dictionary-effort dataset. The experimental setup is shown in Figure 2.
### EDA on Dictionary dataset
Exploratory data analysis (EDA) was conducted on the Dictionary dataset to find out (1) how varying the levels of gait factors would impact the dominant features defined by Kumar et al. (2016) for different imitators, (2) whether the gait patterns collected under the same settings were significantly similar, and (3) whether the gait patterns collected at different settings differed from each other differently. For analyzing (1), we used the Pearson correlation coefficient as the feature variables were continuous, and factors were either continuous (e.g., speed) or ordinal (step length, step width, and thigh lift). Figure 3 demonstrates the intrinsic relationship unearthed by the correlation analysis between the gait factors and prominent features. Images in Figure 3 suggest why IMU-based gait biometric is so unique among individuals, as we can see that a change of the same magnitude in any gait factor can affect the feature values differently for different imitators--a motivation to recruit diverse imitators if one wants to increase the chances of success of the dictionary attack.
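For illustration, a minimal Python sketch (not taken from the released codebase) of the per-imitator correlation analysis underlying Figure 3 could look as follows; the factor levels and feature values shown are hypothetical placeholders, not measurements from our dataset.

```python
# Hedged sketch: correlating one gait factor (speed) with one extracted feature
# for a single imitator, as in the EDA of Figure 3.
from scipy.stats import pearsonr

# Hypothetical values: the nine speed settings (miles per hour) and the mean of
# one feature (e.g., number of peaks, accelerometer x-axis) at each setting.
speed_levels = [1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6, 2.8, 3.0]
feature_means = [4.1, 4.3, 4.8, 5.0, 5.6, 5.9, 6.3, 6.8, 7.1]

r, p_value = pearsonr(speed_levels, feature_means)
print(f"correlation={r:.2f}, significant at alpha=0.05: {p_value < 0.05}")
```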
To investigate (2) and (3), we used histogram intersection, defined as follows,
\(\mathbb{I}(P,Q)=\frac{\sum_{i=1}^{n}\min(P_{i},Q_{i})}{\sum_{i=1}^{n}P_{i}}\), where \(P\) and \(Q\) are histograms, each containing \(n\) bins.
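A minimal Python sketch of this histogram-intersection computation (assuming two windows of one sensor axis given as 1-D numpy arrays, with 80 equal-width bins over a common range as in Figure 4) is:

```python
import numpy as np

def histogram_intersection(frame_p, frame_q, bins=80):
    # Build both histograms over a common range so that the bins align.
    lo = min(frame_p.min(), frame_q.min())
    hi = max(frame_p.max(), frame_q.max())
    p, _ = np.histogram(frame_p, bins=bins, range=(lo, hi))
    q, _ = np.histogram(frame_q, bins=bins, range=(lo, hi))
    # I(P, Q) = sum_i min(P_i, Q_i) / sum_i P_i
    return np.minimum(p, q).sum() / p.sum()
```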
Figure 1. Physical characteristics of genuine users and selected imitators. We could gather the physical characteristics of only \(48\) genuine users. Thus, there are only \(48\) blue dots, while nine orange dots indicating impostors are plotted for each characteristic.
Figure 2. Dictionary data collection setup. Chalk and wire markers show different step lengths and width settings. The speed controller is on the dashboard as indicated, and the thigh lift markers are on the limbs of the treadmill (invisible in this view of the picture).
The overlap of probability distribution (histograms) of data produced at the same level was much higher (\(>0.85\)) compared to the data produced at different levels of gait factors for the same imitator (see Figure 4). We observed similar phenomena across the axis, sensors, and imitators with random inspection. This indicates that the _IMUGait_ patterns produced at different settings are different and capture unique regions on the human gait spectrum.
### Baseline _IMUGait_ authentication pipeline
#### 4.4.1. Data segmentation
An authentication system is desired to have high accuracy and low decision latency. One of the ways to ensure quicker decisions is segmenting the data into smaller frames while maintaining distinctiveness. Two widely studied approaches are cycle (Han et al., 2014; Wang et al., 2015; Wang et al., 2016; Wang et al., 2017; Wang et al., 2018) extraction-based segmentation and sliding window-based fixed-length frame (Wang et al., 2015; Wang et al., 2016; Wang et al., 2017) extraction. Studies that use cycle extraction-based approaches use a point-wise comparison of the train and test samples using classical distance measures such as Euclidean distance and _DTW_. On the other hand, studies that use fixed-length-based frame (both overlapping and non-overlapping) extraction schemes use a variety of machine learning classifiers. Thang et al. (2017) specifically investigated both approaches and concluded that the frame plus machine learning-based approach achieved significantly better results (92.7%) than the cycle-based approaches (79.1%) on the same dataset. Besides, Al-Naffakh et al. (2017) compared 29 studies and concluded that the frame-based approach almost always beats the cycle-based approaches. Following the recommendation of (Tang et al., 2017), we applied the sliding window-based frame extraction approach for segmentation in our experiments. The window length and sliding interval dictate the delay in the first and subsequent authentication decisions. The smaller the window size and sliding interval, the quicker the decisions; thus, we used a window of 8 seconds with a sliding interval of 4 seconds across the experiments. This process resulted in 43 feature vectors for genuine data and 18 for each of the 178 patterns in the dictionary, on average.
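A minimal sketch of this segmentation step (assuming a 1-D numpy array of one sensor axis and a known sampling rate; not necessarily the exact implementation used) is:

```python
import numpy as np

def extract_frames(signal, sampling_rate, window_sec=8, slide_sec=4):
    """Cut a signal into overlapping fixed-length frames (8 s window, 4 s slide)."""
    window = int(window_sec * sampling_rate)
    slide = int(slide_sec * sampling_rate)
    return np.array([signal[start:start + window]
                     for start in range(0, len(signal) - window + 1, slide)])
```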
#### 4.4.2. Preprocessing
Although the raw data plots for each user looked smooth, as these sensors are well-calibrated and corrected nowadays, we observed spikes in some places. Therefore, we applied the moving average technique to smooth the sensor readings further to avoid noisy feature values. The process of moving average-based smoothing of a signal is described below:
Let \(X=[x(t_{1}),x(t_{2}),x(t_{3}),\ldots,x(t_{n})]\) be a time-series signal recorded by a sensor at a specified sampling rate. This signal is assumed to contain noise. We aim to smooth this signal to obtain a less noisy version, denoted as \(X^{\prime}\).
The smoothed signal \(X^{\prime}\) is defined as \([x^{\prime}(t_{1}),x^{\prime}(t_{2}),\ldots,x^{\prime}(t_{n-s+1})]\), where \(n\) is the length of the original signal \(X\), and \(s\) is the smoothing window size.
Figure 3. Upper-left: speed, Upper-right: Step-length, Lower-left: Step-width, Lower-right: Thigh-lift. For nine imitators, EDA revealed an interesting relationship between the four gait factors and dominant features (defined by (Wang et al., 2015)). We can observe that adjusting the same gait factor for different imitators affects the feature values differently. This provides a pictorial insight into why _IMUGait_ patterns are unique for different users. For example, we can look at the upper left figure; the impact on the feature named _nop_x_ is the opposite for a change in speed for imitators I7 (correlation coefficient is +0.7) and I8 (correlation coefficient is -0.75). Similarly, in the bottom-right figure, the impact on the feature names _nop_y_ significantly differs for imitators I4 and I7. More than 77% of the correlations were statistically significant at \(\alpha=0.05\), indicating a strong relationship between gait factors and features.
The smoothed value \(x^{\prime}(t_{i})\) at each time \(t_{i}\) is computed as follows:
\[x^{\prime}(t_{i})=\frac{1}{s}\sum_{j=0}^{s-1}x(t_{i+j})\]
Here, \(s\) controls the number of data points taken together to update the \(i_{\text{th}}\) data point in \(X^{\prime}\).
The value of \(s\) is determined based on the sampling rate of the original signal \(X\). After preliminary analysis, we have chosen:
\[s=\left\lceil 0.05\times\text{sampling\_rate}\right\rceil\]
For example, with a sampling rate of 46, \(s\) would be calculated as \(\left\lceil 0.05\times 46\right\rceil=3\).
The smoothing equation is applied for \(i=1,2,\ldots,n-s+1\). The remaining data points were used for the boundary conditions where fewer than \(s\) data points were available.
An unreasonably high value of \(s\) can result in excessive smoothing, which may eliminate important high-frequency components in the signal. On the other hand, the lowest value of \(s\) (i.e., \(s=1\)) will result in no smoothing, leaving the signal unchanged.
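A minimal sketch of the smoothing step described above (assuming a 1-D numpy array; the uniform-kernel convolution with mode "valid" reproduces the moving average for \(i=1,\ldots,n-s+1\)) is:

```python
import math
import numpy as np

def smooth(signal, sampling_rate):
    # s = ceil(0.05 * sampling_rate), e.g., s = 3 for a 46 Hz sampling rate.
    s = math.ceil(0.05 * sampling_rate)
    if s <= 1:
        return np.asarray(signal, dtype=float)  # s = 1 leaves the signal unchanged
    # Each output point is the mean of s consecutive input points.
    return np.convolve(signal, np.ones(s) / s, mode="valid")
```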
#### 4.4.3. Feature extraction and analysis
We extracted time and frequency domain features from each segment (frame). The time domain feature set included arithmetic mean, standard deviation, mean absolute change, mean absolute deviation, skewness, kurtosis, mean energy, the number of mean crossings, the number of peaks, first, second, and third quantiles, length of the longest strike below and above the mean, and bin counts in 16 equally thick bins. The bin counts were inspired by (Zhou et al., 2017; Wang et al., 2018), and it provides different (structural) information compared to aggregate features. In comparison, the frequency-domain features included the first, second, and third quantiles and standard deviations of the Fourier transform's amplitudes. These features have been extensively studied over the years for authentication, are relatively established in the field, and have achieved significantly low error rates (Zhou et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). This is evident via the performance of our baselines as well (see Figure 5).
Each raw data segment (frame) was translated into a vector of 34 feature values, resulting in 136 features for each sensor as we used the magnitude (computed as \(\sqrt{x^{2}+y^{2}+z^{2}}\)) beside reading on each of the axes (\(x\), \(y\), and \(z\)). Mutual Information (MI) between individual features and class labels (genuine and impostor) was computed for selecting the top 30 features (the ones with the highest MI values). The chosen features varied across users but not significantly, as we aimed to train user-specific authentication models in which, for training each authentication model, we used genuine users' data as genuine samples while the rest of the users' data as impostor samples.
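To make this step concrete, a sketch of a few of the listed features for one axis of one frame, together with the mutual-information-based selection of the top 30 features, could look as follows (illustrative only; the full set comprises 34 features per axis, and the variable names are assumptions):

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.feature_selection import SelectKBest, mutual_info_classif

def frame_features(frame):
    """A subset of the time- and frequency-domain features for one axis of a frame."""
    spectrum = np.abs(np.fft.rfft(frame))
    return np.array([
        frame.mean(), frame.std(),
        np.mean(np.abs(np.diff(frame))),           # mean absolute change
        skew(frame), kurtosis(frame),
        np.quantile(frame, 0.25), np.quantile(frame, 0.50), np.quantile(frame, 0.75),
        *np.histogram(frame, bins=16)[0],          # bin counts in 16 equally thick bins
        np.quantile(spectrum, 0.25), np.quantile(spectrum, 0.50),
        np.quantile(spectrum, 0.75), spectrum.std(),  # frequency-domain features
    ])

# Assuming X_train (frames x features) and y_train (gen/imp labels) are given:
# selector = SelectKBest(mutual_info_classif, k=30).fit(X_train, y_train)
# X_train_top = selector.transform(X_train)
```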
#### 4.4.4. Sensor fusion and classifiers
Previous studies such as (Zhou et al., 2017) have suggested combining different sensors helps reduce error rates and adversarial attacks' impact. We wanted to test what combination of sensors achieves the best error rates while defending
Figure 4. These heatmaps demonstrate the overlap of the data in different frames created from the raw sensor data collected in the same and different settings. The same settings overlap was computed using windows of data collected at different times. In general, we observed higher histogram overlap for the data collected in the same settings of individual gait factors compared to the data collected in different settings of the gait factors in most cases. For example, the upper-left heatmap suggests that the window of data collected at short step length has a higher overlap (0.863) with the data collected at the same setting at a different time compared to the data collected at normal (0.711), long (0.762), and longer (0.665) step lengths. Similarly, the lower-left heatmap suggests that the window of data collected at thigh-lift back highly overlaps (0.889) with the window of data collected at the same setting but at different time intervals compared to the overlap with the windows of data collected at thigh-lift normal (0.630), front (0.648), and up (0.681). The plotted numbers are the average histogram intersections computed over at least nine windows of \(8\) seconds of data for the accelerometer’s x-axis. Each of the histogram computations used \(80\) bins of equal width. We observed similar phenomena for different axes of the same and different sensors.
itself from the proposed attack the most. Therefore, we studied all possible \((\binom{4}{1}+\binom{4}{2}+\binom{4}{3}+\binom{4}{4})=15\) combinations of four sensors, and reported the results. The combination was conducted at the feature level by concatenating the feature vectors before training and testing the model. The feature vectors from each sensor consisted only of the selected features from individual sensors. Various classification algorithms have been tested on the _IMUGait_ pattern for authentication. Following previous studies, we experimented with five widely applied and successful classifiers (on _IMUGait_ datasets): k-nearest Neighbors (kNN), Support Vector Machine (SVM), Logistic Regression (LReg), Multilayer Perceptrons (MLP), and Random Forest (RanFor), each with distinct learning paradigms (Zhu et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019).
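A sketch of the enumeration of the 15 sensor combinations and the five classifiers could look as follows; the scikit-learn constructors are used with default hyperparameters, which need not coincide with the settings used in our experiments:

```python
from itertools import combinations
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

SENSORS = ["accelerometer", "gyroscope", "magnetometer", "rotation_vector"]
CLASSIFIERS = {
    "kNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "LReg": LogisticRegression(max_iter=1000),
    "MLP": MLPClassifier(max_iter=1000),
    "RanFor": RandomForestClassifier(),
}

def sensor_combinations():
    # 4C1 + 4C2 + 4C3 + 4C4 = 15 combinations
    for r in range(1, len(SENSORS) + 1):
        yield from combinations(SENSORS, r)

def fuse(features_by_sensor, combo):
    # Feature-level fusion: concatenate the selected per-sensor feature vectors.
    return np.hstack([features_by_sensor[s] for s in combo])
```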
#### 4.4.5. Training, Testing, and Evaluation
The train and test setup of the classifiers is detailed as follows. We trained the authentication model for each user separately. The Genuine dataset contained data collected in two separate sessions for every user. The first session was used for training, and the second was used for testing the authentication models.
Formally, let \(\mathbb{U}=\{u_{1},u_{2},...,u_{N}\}\) be a set of \(N\) genuine users in the database. The authentication model for user \(u_{i}\) can be represented as a function \(f(X_{i})\mapsto\{gen,imp\}\), with \(X_{i}\) being the feature matrix consisting of both genuine (created from \(u_{i}\)'s data) and impostor (created from \(\mathbb{U}-u_{i}\)) feature vectors and \(\{gen,imp\}\) representing the two possible classes.
Each authentication model was tested for genuine fail rate using the genuine data from the second session and assessed in terms of False Reject Rate (FRR). Likewise, the impostor pass rate was tested using the second session data from \(\mathbb{U}-u_{i}\) and assessed in terms of False Accept Rate (FAR). Additionally, to compare different architectures, we used Half Total Error Rate (HTER), an average of FAR and FRR, recommended by Bengio et al. (2015) to report the performance on the test dataset.
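A sketch of these evaluation metrics (assuming a trained per-user model that predicts the string labels gen/imp on the second-session feature matrices) is:

```python
import numpy as np

def far_frr_hter(model, X_genuine_test, X_impostor_test):
    frr = np.mean(model.predict(X_genuine_test) == "imp")   # genuine samples falsely rejected
    far = np.mean(model.predict(X_impostor_test) == "gen")  # impostor samples falsely accepted
    hter = (far + frr) / 2.0                                 # Half Total Error Rate
    return far, frr, hter
```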
### Class imbalance
While training the binary classifier-based authentication models \(f(X)\mapsto\{gen,imp\}\), we used five randomly selected feature vectors from each possible impostor (i.e., \(|\mathbb{U}-u_{i}|=54\)). As a result, we had 270 (= 54 impostors \(\times\)5) impostor feature vectors against 22 (on average, per user) genuine feature vectors for training each user authentication model. To overcome the class imbalance, we over-sampled the genuine feature vectors using the Synthetic Minority Oversampling Technique (SMOTE) (Bengio et al., 2015) to match the number of impostor feature vectors. Consequently, each user authentication model was trained on a feature matrix of 540 feature vectors.
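A sketch of this balancing step, assuming the imbalanced-learn package, is:

```python
from imblearn.over_sampling import SMOTE

def balance(X_train, y_train):
    # Oversample the minority (genuine) class to match the 270 impostor vectors,
    # yielding roughly 540 feature vectors per user model.
    return SMOTE().fit_resample(X_train, y_train)
```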
### Dictionary attack process
To circumvent the trained authentication model for \(u_{i}\), i.e., \(f(X_{i})\mapsto\{gen,imp\}\), one needs to produce \(X_{adv}\), i.e., a set of gait patterns such that \(f(X_{adv})\mapsto\{gen\}\). Previous studies (Zhu et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019) have shown that it is not easy to generate \(X_{adv}\) except for (Zhu et al., 2017; Wang et al., 2019; Wang et al., 2019). The methods proposed in (Zhu et al., 2017; Wang et al., 2019; Wang et al., 2019) are tedious. This work demonstrates that \(X_{adv}\) can be produced comparatively easily if we build a dictionary of _IMUGait_ patterns. If the entries in the dictionary are more diverse, then it is easier to find \(X_{adv}\) with higher chances of success and reproducibility.
To test the effectiveness of the proposed dictionary, we took an exhaustive approach, as we tried every entry (_IMUGait_ pattern) available in the dictionary to attack \(f(X_{i})\mapsto\{gen,imp\}\) and assessed its robustness. In a more practical scenario, the attacker would have a way of estimating the likelihood of the success of each of the entries in the dictionary and simulate the setup that produced the entry with the highest likelihood of bypassing the system.
We recorded the False Accept Rate (FAR) for the dictionary entry that caused the maximum damage. A deeper investigation revealed that multiple dictionary entries caused significant damage for some sets of users (see Figures 8 and 9). In contrast, some sets of users remained unaffected as none of the dictionary entries could expand false accept rates for those users. We anticipated such a possibility because our dictionary is limited to nine users and has a total of 178 settings only.
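A sketch of this exhaustive attack loop (assuming the dictionary maps (imitator, settings) keys to feature matrices built from the corresponding gait pattern, and a per-user model as above) is:

```python
import numpy as np

def dictionary_attack(model, dictionary):
    """Try every dictionary entry against one user's model; keep the most damaging one."""
    best_key, best_far = None, 0.0
    for key, X_entry in dictionary.items():
        far = np.mean(model.predict(X_entry) == "gen")  # false accept rate for this entry
        if far > best_far:
            best_key, best_far = key, far
    return best_key, best_far
```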
## 5. Results and Discussion
### Sensor and classifier level error analysis
The _FAR_ heatmaps presented in Figure 5 suggest that the Dictionary attack increased the average _FAR_ substantially (on average from 6% to 59%). The average _FAR_ of accelerometer-based models (the best performing individual sensor under the Zero-effort attack scenario) increased from 5% to 56%. A similar trend was observed for the fusion-based models as the average _FAR_ of (a+g+m), (a+g+r), and (a+g+m+r)-based models reached 56%, 53%, and 52% from 3%. Figure 5 suggests an HTER increase from 10% to 37%. An authentication system with 37% HTER is too weak to be useful for any real-world application scenario.
Different degrees of attack impact were observed for different implementations of the authentication system. For example, the _SVM_-based models were the least affected ones as the _FAR_ increased from 5% to 53%, followed by _MLP_ from 3% to 56%, _LReg_ from 7% to 62%, _RanFor_ from 9% to 62%, and _kNN_ from 4% to 64%. This was unsurprising as SVM specifically focuses on maximizing the distance between the support vectors. Both FAR and HTER heatmaps suggest that SVM-based implementations showed the maximum resilience to the Dictionary attack, followed by _kNN_, _LReg_, _MLP_ and _RanFor_.
From the heatmaps presented in Figure 5, we can safely conclude that the Dictionary-based attack was alarmingly successful. The error heatmaps presented in Figure 5 showed the overall performance of the Dictionary-based circumvention; however, they fail to reveal the attack's impact on individual user authentication models. Therefore, we present the user-level error analysis in the following sections. We restrict our experimentation to the two most resilient implementations of the attack environment, one from the individual sensor category and the other from the fused sensor category.
### User level analysis
This section analyses the impact of the dictionary attack on the individual user authentication model. The FAR heatmaps in Figure 6 suggest that 11 users (_User2_, _User3_, _User9_, _User15_, _User16_, _User19_, _User20_, _User21_, _User22_, _User23_, _and User29_) showed excellent resilience to the Dictionary attack. In comparison, the attack severely impacted the rest 44 users. A deeper analysis suggests that 80% of
the total accelerometer-based user models accepted more than 50% of impostors under the Dictionary attack.
On the other hand, the FAR heatmaps in Figure 7 suggest that the fusion-based models showed more resilience to the Dictionary attack as 18 (User1, User2, User3, User4, User6, User7, User8, User9, User12, User13, User14, User15, User16, User18, User19, User20, User28, and User29) of the 55 users remained unaffected (the FAR remained below the baseline). The rest (67%) of the fusion-based authentication models were severely (\(\geq\) 50% FAR) impacted by the Dictionary attack.
From this, we conclude that the Dictionary attack is highly effective against individual sensors and fusion-based implementations for many users. At the same time, we observed that the attack did not work on some users. The explanation for this observation lies in the biometric menagerie defined by (Han et al., 2017) and (Koh et al., 2017). The most impacted users belong to the _Lambs_ class, and the least impacted users to the _Doves_ class (Koh et al., 2017). Although time-consuming, the systematic procedure presented in Section 4.2 for dictionary creation can be easily adapted to add more imitators to increase the attack's success.
Further investigation into the demographics of genuine and imitators revealed that the demographic overlap for (1) users with the lowest error rates and imitators and (2) users with the highest error rates and imitators were quite similar. That suggests that the imitators overlapping with the target do not necessarily have any advantage over those who do not overlap with the target users.
### Probability of finding a matching gait password from the dictionary?
The previous result plots emphasized the analysis of the error rates obtained by the entry that achieved the highest increase in the FAR. We were curious to know how other entries in the dictionary fare against the user authentication models. What was their success rate? Therefore, we report _FARs_ achieved by each _IMUGait_ entry of the dictionary against the most resilient implementations. Following previous choices, we present and analyze the error rates of _SVM_-based models. Among _SVM_-based models, we chose the accelerometer under the individual sensor category and (a+g+m+r) under the fused-sensor category. Figure 8 presents the _FARs_ under Zero- and Dictionary-effort circumvention scenarios
Figure 5. The mean error rates achieved by different classifiers for different sensor combinations under the zero-effort and dictionary-effort circumvention scenario. The numbers are rounded up to the nearest integer percentage. The first heatmap reports the FAR under a zero-effort attack, followed by the heatmap presenting the FAR under a dictionary-attack scenario, facilitating a glance at the damage caused. The goal of the dictionary attack is to bypass the authentication system, which means it impacts only the FAR while FRR remains as it is under the zero-effort attack. For comparison of the overall impact on the authentication systems, we report Half Total Error Rate (HTER), an average of FAR and FRR and recommended by Bengio et al. (2017) for both zero-effort and dictionary-effort scenarios. Notably, the presented error rates correspond to the dictionary’s most successful entries (entries that caused the maximum increase in the FAR). _The values are sorted on average of HTER._
for accelerometer-based authentication models. Similarly, Figure 9 depicts the _FARs_ obtained under Zero- and Dictionary-effort circumvention scenarios for (a+g+m+r)-based authentication model.
We can see that more than one _IMUGait_ pattern in the dictionary has succeeded in increasing the _FAR_ beyond \(zero\_fars\) as marked with the green cross (x) symbol (see User4, User8, User9, User13, User27, User28, User11, User15, User39, User48, and User7). Moreover, the _FARs_ obtained by some _IMUGait_ patterns from the dictionary are alarmingly high. On the other hand, for some users (User2, User3, User16, User19, User20, User21, User22, User23, User24, User25, and User29), none of the _IMUGait_ patterns from the dictionary were able to obtain higher _FARs_ than \(zero\_far\). In other words, 11 of the
Figure 6. The impact of Dictionary-based circumvention attempts on accelerometer-based models implemented using SVM, the most resilient classifier. The FARs and HTERs obtained under Zero- and Dictionary-effort circumvention attempts are presented one after the other for a quick comparison. These results are based on the most successful (the one that achieved the maximum increase in the FAR from the baseline) gait pattern from the dictionary. _Matrix is sorted on average of dictionary FAR_.
Figure 7. The impact of Dictionary-based circumvention attempts on fusion (a+g+m+r)-based models implemented using SVM, the most resilient classifier. The FARs obtained under Zero- and Dictionary-effort circumvention attempts are presented one after the other for a quick comparison. These results are based on the most successful (the one that achieved the maximum increase in the FAR from the baseline) gait pattern from the dictionary. _Matrix is sorted on average of dictionary FAR_.
55 users remained unaffected in the dictionary-effort circumvention environment. The reason behind this is the small number of entries in the dictionary. We posit that much more diverse entries in the dictionary would make the attack more effective and increase the likelihood of finding a highly damaging entry in the dictionary for every user.
### Impact beyond _IMUGait_
Although this work focuses on _IMUGait_, the proposed methods can be extended to other behavioral biometrics. For example, one could (1) define several factors that dictate the characteristics of human swipe gestures, (2) determine several levels of these factors, (3) find a relationship between the factors and the feature extracted from raw sensor readings, and (4) use the relationship to train a human or robot to produce swipe gestures that will be close to that of the target individual. The proposed technique can also train robots to mimic well-defined human behavior.
### Possible countermeasures
Possible countermeasures for the presented attack method could include the fusion of sensor readings collected from different devices, e.g., smartphone, smartwatch, and smart ring (Sund |
2309.16009 | Floer potentials, cluster algebras and quiver representations | We use cluster algebras to interpret Floer potentials of monotone Lagrangian
tori in toric del Pezzo surfaces as cluster characters of quiver
representations. | Peter Albers, Maria Bertozzi, Markus Reineke | 2023-09-27T20:37:03Z | http://arxiv.org/abs/2309.16009v1 | # Floer potentials, cluster algebras and quiver representations
###### Abstract.
We use cluster algebras to interpret Floer potentials of monotone Lagrangian tori in toric del Pezzo surfaces as cluster characters of quiver representations (see Theorem 4.3 for the precise statement).
## 1. Introduction
In the seminal work [11], R. Vianna constructed infinitely many pairwise non Hamiltonian isotopic monotone Lagrangian tori in the complex projective plane. These tori are naturally indexed by Markov triples (more precisely, by the triple of squares), and were constructed inductively in [12] by a geometric mutation procedure along the Markov tree. To distinguish these tori, Vianna uses information on counts of Maslov index \(2\) holomorphic disks contained in them.
This construction was further conceptualized in [7], where it is put into a context of cluster-like mutation operations originating in [3]. Vianna's mutation operation is extended to mutations of so-called Lagrangian seeds in del Pezzo surfaces, the main result being a wall-crossing formula, stating that the Floer potentials of mutated Lagrangian tori are related by an explicit algebraic mutation rule for Laurent polynomials (see Sections 2.1, 2.2 for a summary).
A formally similar mutation invariance is known in the representation-theoretic approach to cluster algebras [4], to be recalled in Sections 2.3, 2.4. Namely, the \(F\)-polynomials of representations of quivers with potentials (or more precisely the cluster characters), encoding Euler characteristics of Grassmannians of subrepresentations, are related by the mutation rule in cluster algebras when the representations are mutated.
This formal similarity, and experiments for the first three Vianna tori, led the authors to the prediction that, to any of the monotone Lagrangian tori in toric del Pezzo constructed in [7], one can associate a representation of a quiver with potential, whose cluster character corresponds naturally to the Floer potential of the torus.
Indeed, this turns out to be feasible. Inspired by the setup of [6], we construct (see Section 3) a comparison map from two-variable Laurent polynomials to cluster algebras, which is compatible with the two algebraic mutation rules mentioned above. We state and prove an appropriate version of mutation invariance of cluster characters in Section 2.4.
After realizing the "initial" Landau-Ginzburg seeds of [7] as cluster characters in Section 4 (under a mild additional hypothesis for the del Pezzos of degree \(6\) and \(7\), see the proof of Lemma 4.1), the compatibilities of the various mutations then
Note that the operation of mutation of seeds is not involutive, since
\[\mu_{-v}\mu_{v}w=w+\{w,v\}v\]
defines a transvection, and similarly
\[\mu_{-v}(\mu_{v}(z^{w}))=\mu_{-v}(z^{w}(1+z^{v^{\perp}})^{-(v,w)})=\]
\[=z^{w}(1+z^{-v^{\perp}})^{(v,w)}(1+z^{v^{\perp}})^{-(v,w)}=z^{w-(v^{\perp},w)v^ {\perp}}=z^{w+\{v,w\}v^{\perp}}.\]
**Theorem 2.2**.: _[_3_]_ _Any mutation of an LG seed is again an LG seed._
In other words, an LG seed stays an LG seed under arbitrary sequences of mutations.
To see the concrete examples of LG seeds we will be interested in, we reproduce the list of LG seeds corresponding to toric del Pezzo surfaces from [7].
**Theorem 2.3**.: _Any of the following combinations of toric del Pezzo surfaces, potentials and directions constitutes an LG seed:_
| del Pezzo surface | potential | directions |
| --- | --- | --- |
| \(\mathbb{C}P^{2}\) | \(z_{1}+z_{2}+z_{1}^{-1}z_{2}^{-1}\) | \((1,1),(-2,1),(1,-2)\) |
| \(\mathbb{C}P^{1}\times\mathbb{C}P^{1}\) | \(z_{1}+z_{2}+z_{1}^{-1}+z_{2}^{-1}\) | \((1,1),(1,-1),(-1,1),(-1,-1)\) |
| \(\mathrm{Bl}_{1}\mathbb{C}P^{2}\) | \(z_{1}+z_{2}+z_{1}^{-1}z_{2}^{-1}+z_{1}z_{2}\) | \((-2,1),(1,-2),(1,0),(0,1)\) |
| \(\mathrm{Bl}_{2}\mathbb{C}P^{2}\) | \(z_{1}+z_{2}+z_{1}^{-1}+z_{2}^{-1}+z_{1}^{-1}z_{2}^{-1}\) | \((1,-1),(-1,1),(-1,0),(0,-1),(1,1)\) |
| \(\mathrm{Bl}_{3}\mathbb{C}P^{2}\) | \(z_{1}+z_{2}+z_{1}^{-1}+z_{2}^{-1}+z_{1}z_{2}+z_{1}^{-1}z_{2}^{-1}\) | \(\pm(1,-1),\pm(1,0),\pm(0,1)\) |
In the following, we take the above LG seeds as initial ones, and consider all of their mutations:
**Definition 2.4**.: _Let \(X\) be a toric del Pezzo surface, and let_
\[\mathbf{s}(X)=(W(X),v_{1},\ldots,v_{n}),\]
_be the corresponding LG seed in the above table. For \(\mathbf{i}=(i_{1},\ldots,i_{N})\) a sequence of indices in \(\{1,\ldots,n\}\) without repetitions, define_
\[s_{\mathbf{i}}(X)=\mu_{i_{N}}\cdots\mu_{i_{1}}\mathbf{s}(X)\]
_as the corresponding mutation of \(\mathbf{s}(X)\), and denote by_
\[W_{\mathbf{i}}(X)=\mu_{i_{N}}\cdots\mu_{i_{1}}W(X)\]
_its potential._
### Floer potentials
To define Floer potentials and their mutation, we continue to follow [7]. We continue to denote by \(X\) a toric del Pezzo surface viewed as Kaehler manifold. For a monotone Lagrangian submanifold \(L\subset X\), we define its Floer potential (equivalently, disk or Landau-Ginzburg potential)
\[W_{L}:\mathrm{Hom}(H_{1}(L,\mathbb{Z}),\mathbb{C}^{*})\to\mathbb{C}\]
as the count of Maslov index \(2\) holomorphic disks in \(X\) with boundary on \(L\), weighted by the holonomy of a local system on \(L\):
\[W_{L}(\rho)=\sum_{\delta}n_{\delta}\rho(\partial\delta),\]
where the sum ranges over classes \(\delta\in H_{2}(X,L)\) of Maslov index \(2\), the number \(n_{\delta}\) is the count of holomorphic disks in class \(\delta\), and \(\rho(\partial\delta)\) is the holonomy along the boundary loop of the disk of the local system determined by \(\rho\). More precisely, \(n_{\delta}\) is the degree of the map "evaluation at the marked point"
\[\mathcal{M}(L,\delta)\to L\]
on the moduli space of holomorphic disks with boundary on \(L\) of Maslov index \(2\) with a single marked point on the boundary of the disk representing the class \(\delta\) (with respect to a generic almost complex structure). Identifying
\[H_{1}(L,\mathbb{Z})/\text{torsion}\simeq\mathbb{Z}^{2}\]
by choosing a basis, \(\partial\delta\) becomes an element \((l_{1},l_{2})\) of \(\mathbb{Z}^{2}\). Monotonicity of \(L\) and Gromov compactness then imply that \(W_{L}\), viewed as a function \((\mathbb{C}^{*})^{2}\to\mathbb{C}\) (using the previous identifications), is a finite sum of terms \(n_{\delta}\mathbf{z}^{\partial\delta}\) (where \(\mathbf{z}^{\partial\delta}=z_{1}^{l_{1}}z_{2}^{l_{2}}\)) over the classes \(\delta\) of Maslov index \(2\), and can thus be viewed as a Laurent polynomial in \(\mathbb{C}[z_{1}^{\pm},z_{2}^{\pm}]\),
\[W_{L}=\sum_{\delta}n_{\delta}\mathbf{z}^{\partial\delta}.\]
For a monotone Lagrangian torus \(L\subset X\) and a Lagrangian disk \(D\subset X\) with boundary on \(L\) fulfilling suitable transversality conditions, there is an operation of mutation of \(L\) in direction \(D\), producing a mutated Lagrangian torus \(\mu_{D}L\) and a mutated Lagrangian disk \(\mu_{D}D\). This operation is involutive up to a Hamiltonian isotopy.
A Lagrangian seed \((L,\{D_{i}\}_{i})\) in \(X\) consists of a monotone Lagrangian torus \(L\) and embedded Lagrangian holomorphic disks fulfilling suitable transversality conditions. The above mutation procedure can be extended to provide mutations
\[\mu_{i}(L,\{D_{j}\}_{j})=(\mu_{D_{i}}L,\{\mu_{D_{i}}D_{j}\}_{j})\]
of Lagrangian seeds in arbitrary directions.
To a Lagrangian seed \((L,\{D_{i}\}_{i})\), an LG seed
\[(W_{L},([\partial D_{i}]^{\perp})_{i})\]
can be assigned via the isomorphism \(H_{1}(L,\mathbb{Z})/\text{torsion}\simeq\mathbb{Z}^{2}\).
**Theorem 2.5**.: _The assignment of an LG seed to a Lagrangian seed is compatible with the mutation operations. Moreover, there exists an initial Lagrangian seed \((L(X),\{D_{i}(X)\}_{i})\) realizing the LG seeds \(\mathbf{s}(X)\) of Theorem 2.3._
We can then define iterated mutated Lagrangian seeds
\[(L_{\mathbf{i}}(X),\{D_{j}^{\mathbf{i}}(X)\}_{j})=\mu_{i_{N}}\cdots\mu_{i_{1} }(L(X),\{D_{i}(X)\}_{i})\]
for any sequence \(\mathbf{i}=(i_{1},\ldots,i_{N})\) without repetitions. By the previous theorem, we thus have
**Corollary 2.6**.: _For all sequences \(\mathbf{i}\) as above, we have an equality of LG seeds_
\[(W_{L_{\mathbf{i}}(X)},\{[\partial D_{j}^{\mathbf{i}}(X)]\}_{j})=\mathbf{s}_{ \mathbf{i}}(X).\]
### Cluster algebras
We now turn to the purely algebraic part and recall some cluster algebra terminology following [5]. Let \(B=(b_{i,j})\) be a skew-symmetric \(n\times n\)-matrix. We define the mutation \(\mu_{i}B=B^{\prime}=(b^{\prime}_{j,k})\) in direction \(i=1,\ldots,n\) of \(B\) by
\[b^{\prime}_{j,k}=-b_{j,k}\text{ if }j=i\text{ or }k=i,\]
\[b^{\prime}_{j,k}=b_{j,k}+[b_{j,i}]_{+}[b_{i,k}]_{+}-[-b_{j,i}]_{+}[-b_{i,k}]_{ +}\text{ if }j\neq i\neq k.\]
We define the mutation of \(x\)-variables as the automorphism of the rational function field \(\mathbb{C}(x_{1},\ldots,x_{n})\) given by \(\mu_{i}(x_{j})=x^{\prime}_{j}\) for
\[x^{\prime}_{i}={x_{i}}^{-1}\prod_{j}{x_{j}}^{[b_{i,j}]_{+}}(1+\prod_{j}{x_{j}} ^{-b_{i,j}})\text{ and }x^{\prime}_{j}=x_{j},\,j\neq i,\]
a variant of the more common
\[x^{\prime}_{i}=x^{-1}_{i}(\prod_{j}x^{[b_{i,j}]_{+}}_{j}+\prod_{j}x^{[b_{j,i} ]_{+}}_{j}).\]
We define the \(y\)-variables by
\[y_{i}=\prod_{j}x^{b_{j,i}}_{j},\]
and similarly for \(y^{\prime}_{i}\), so that the previous mutation rule reads
\[x^{\prime}_{i}=x^{-1}_{i}\prod_{j}{x_{j}}^{[b_{i,j}]_{+}}(1+y_{i}).\]
The mutation of \(y\)-variables is then given by
\[y^{\prime}_{i}={y_{i}}^{-1}\text{ and }y^{\prime}_{j}=y_{j}{y_{i}}^{[b_{i,j}]_{+}} (1+y_{i})^{-b_{i,j}},\,j\neq i.\]
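The matrix mutation rule above can be made explicit in a few lines of code; the following is only an illustrative Python sketch (indices are 0-based and \(\mathrm{pos}\) implements \([x]_{+}\)):

```python
def mutate_B(B, i):
    """Mutation of a skew-symmetric integer matrix B (list of lists) at index i (0-based)."""
    n = len(B)
    pos = lambda x: max(x, 0)  # [x]_+
    Bp = [[0] * n for _ in range(n)]
    for j in range(n):
        for k in range(n):
            if j == i or k == i:
                Bp[j][k] = -B[j][k]
            else:
                Bp[j][k] = (B[j][k]
                            + pos(B[j][i]) * pos(B[i][k])
                            - pos(-B[j][i]) * pos(-B[i][k]))
    return Bp

# Matrix mutation is involutive: mutate_B(mutate_B(B, i), i) == B.
```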
### Representations of quivers with potentials
Turning to the quiver theoretic approach to cluster algebras and mutation, we introduce representations of quivers with potential following [4]. Given a matrix \(B\) as in the previous subsection, we associate to it a quiver \(Q=Q(B)\) with vertices \(i=1,\ldots,n\), such that
\[\text{ for all }i,j\in Q_{0},\text{ there are }[-b_{i,j}]_{+}\text{ arrows from }i\text{ to }j.\]
Note that \(Q(B)\) has, by definition, no loops or two-cycles. We have
\[Q(\mu_{i}B)=\mu_{i}Q(B),\]
where the mutation \(\mu_{i}Q\) of a quiver at a vertex \(i\) is defined by reversing all arrows incident with \(i\), adding an arrow
\[[\beta\alpha]:j\to k\]
for every pair of arrows
\[j\overset{\alpha}{\to}i\overset{\beta}{\to}k,\]
and deleting all resulting two-cycles.
We denote by \(\widehat{\mathbb{C}Q}\) the completed (with respect to path length) path algebra of \(Q\), and by \(\mathfrak{m}\) its maximal ideal. The factor \(\widehat{\mathbb{C}Q}/[\widehat{\mathbb{C}Q},\widehat{\mathbb{C}Q}]\) by (additive) commutators is spanned by cyclic equivalence classes of formal sums of oriented cycles. An element \(S\) in this space is called a potential for \(Q\), for which cyclic derivatives \(\partial_{\alpha}S\) along the arrows \(\alpha\) in \(Q\) are defined, as well as double cyclic derivatives \(\partial_{\beta\alpha}S\). We will never need their explicit definition in the following and just refer to [4, (4.3),(4.4)].
For a sufficiently general (non-degenerate in the terminology of [4]) potential \(S\) on
\(Q\), one can naturally define a mutation \(\mu_{i}S\) as a potential on \(\mu_{i}Q\).
We consider representations \(V\) of \(Q\) which satisfy the Jacobian relations that all \(\partial_{\alpha}S\) are represented by zero in \(V\), thus
\[V_{\partial_{\alpha}S}=0,\]
which we call representations of the quiver with potential \((Q,S)\). Moreover, we require all such representations to be nilpotent, that is, every oriented cycle \(\omega\) is represented by a nilpotent endomorphism \(V_{\omega}\).
Slightly more generally, we consider decorated representations \((V,V^{-})\), consisting of a representation \(V\) of \((Q,S)\), together with vector spaces \(V_{i}^{-}\) for \(i=1,\ldots,n\). As a special case, we consider the negative simple decorated representation \(S_{i}^{-}\): it consists of the zero quiver representation, and the additional vector spaces are given by \((S_{i}^{-})_{j}^{-}=0\) for \(j\neq i\) and \((S_{i}^{-})_{i}^{-}=\mathbb{C}\).
For every (decorated) representation \(V\) of \((Q,S)\) (the potential \(S\) still assumed to be non-degenerate) and all \(i\in Q_{0}\), a mutation \(\mu_{i}V\) is defined in [4], which is a (decorated) representation of \((\mu_{i}Q,\mu_{i}S)\). The operation of mutation is involutive up to so-called right equivalence, that is, \(\mu_{i}\mu_{i}V\) is isomorphic to \(\varphi^{*}V\) for some automorphism \(\varphi\) of \(\widehat{\mathbb{C}Q}\) fixing \(S\).
### \(F\)-polynomials and cluster characters
For a (decorated) representation \(V\) of a quiver \(Q\) and a dimension vector \(\mathbf{e}\) with entries \(e_{i}\leq\dim V_{i}\) for all \(i\) we define the quiver Grassmannian \(\operatorname{Gr}_{\mathbf{e}}(V)\) as the complex projective variety of \(\mathbf{e}\)-dimensional subrepresentations of \(V\), viewed for example as a Zariski-closed subset of the product \(\prod_{i}\operatorname{Gr}_{e_{i}}(V_{i})\) of ordinary Grassmannians. We define the \(F\)-polynomial of \(V\) encoding the topological Euler characteristic \(\chi(\operatorname{Gr}_{\mathbf{e}}(V))\) of all quiver Grassmannians:
\[F_{V}(u)=\sum_{\mathbf{e}}\chi(\operatorname{Gr}_{\mathbf{e}}(V))\prod_{i}u_{ i}^{e_{i}}\in\mathbb{C}[u_{1},\ldots,u_{n}].\]
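As a small illustrative example (not taken from the main construction), consider the quiver with two vertices and a single arrow \(1\to 2\), with zero potential, and let \(V\) be the indecomposable representation with \(V_{1}=V_{2}=\mathbb{C}\) and the arrow acting as the identity. A subrepresentation \((U_{1},U_{2})\) must satisfy \(U_{1}\subseteq U_{2}\) under the identity map, so the quiver Grassmannians are single points for \(\mathbf{e}\in\{(0,0),(0,1),(1,1)\}\) and empty for \(\mathbf{e}=(1,0)\), giving

\[F_{V}(u)=1+u_{2}+u_{1}u_{2}.\]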
We define the \(\mathbf{g}\)- and \(\mathbf{h}\)-vectors of \(V\):
Fixing a vertex \(i\) of \(Q\), any decorated representation \(V\) gives rise to a triangle of linear maps

\[V_{i}^{\mathrm{in}}\stackrel{{\alpha_{i}}}{{\longrightarrow}}V_{i}\stackrel{{\beta_{i}}}{{\longrightarrow}}V_{i}^{\mathrm{out}}\stackrel{{\gamma_{i}}}{{\longrightarrow}}V_{i}^{\mathrm{in}},\]

where
\[V_{i}^{\mathrm{in}}=\bigoplus_{\alpha:j\to i}V_{j},\;V_{i}^{\mathrm{out}}= \bigoplus_{\beta:i\to j}V_{j},\]
\[\alpha_{i}=(V_{\alpha})_{\alpha:j\to i},\;\beta_{i}=(V_{\beta})_{\beta:i\to j},\]
and the \((\alpha,\beta)\)-component of \(\gamma_{i}\) is given by
\[(\gamma_{i})_{(\alpha,\beta)}=V_{\partial_{\beta\alpha}S}.\]
We then define
\[h_{i}(V)=-\dim\operatorname{Ker}(\beta_{i})\]
and
\[g_{i}(V)=\dim\operatorname{Ker}(\gamma_{i})-\dim V_{i}+\dim V_{i}^{-}.\]
We have the following relation between \(\mathbf{g}\)- and \(\mathbf{h}\)-vectors:
\[g_{i}(V)=h_{i}(V)-h_{i}(\mu_{i}V).\]
The \(\mathbf{g}\)-vectors fulfill the mutation rule
\[g_{i}(\mu_{i}V)=-g_{i}(V),\;g_{j}(\mu_{i}V)=g_{j}(V)+[b_{j,i}]_{+}g_{i}(V)-b_{j, i}h_{i}(V),\,i\neq j.\]
We have the following mutation rule for \(F\)-polynomials:
**Lemma 2.7**.: _[_4_, Lemma 5.2]_ _We have_
\[(1+y_{i})^{h_{i}(V)}F_{V}(y)=(1+y_{i}^{\prime})^{h_{i}(\mu_{i}(V))}F_{\mu_{i}V }(y^{\prime}).\]
The following is the central definition for our purposes, originating in [2], and written in this form in [9, Remark 5.2]:
**Definition 2.8**.: _We define the cluster character of \(V\) by_
\[\operatorname{CC}_{V}(x)=x^{\mathbf{g}(V)}F_{V}(y)\in\mathbb{C}[x_{1}^{\pm}, \dots,x_{n}^{\pm}].\]
For two decorated representations \(V\) and \(W\) of \(Q\), we have
\[F_{V\oplus W}(x)=F_{V}(x)\cdot F_{W}(x)\]
and
\[\mathbf{g}(V\oplus W)=\mathbf{g}(V)+\mathbf{g}(W),\]
so that
\[\operatorname{CC}_{V\oplus W}(x)=\operatorname{CC}_{V}(x)\cdot\operatorname{ CC}_{W}(x).\]
It is thus natural to extend cluster characters to virtual representations, that is, elements of the split Grothendieck group \(K_{0}^{\oplus}\) of the category of decorated representations. Namely, every element of this group can be written as a difference \([V]-[W]\) of isomorphism classes of representations, and we define
\[\operatorname{CC}_{[V]-[W]}(x)=\frac{\operatorname{CC}_{V}(x)}{\operatorname{ CC}_{W}(x)}\in\mathbb{C}(x_{1},\dots,x_{n}).\]
Note also that the mutation operation \(\mu_{i}\) naturally extends to virtual representations.
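As a basic illustration (again immediate from the definitions): for the negative simple representation \(S_{i}^{-}\) one has \(F_{S_{i}^{-}}=1\) and \(g_{j}(S_{i}^{-})=\delta_{i,j}\), since all kernels and all \(V_{j}\) vanish while \(\dim(S_{i}^{-})_{i}^{-}=1\). Hence
\[\operatorname{CC}_{S_{i}^{-}}(x)=x_{i},\qquad\operatorname{CC}_{[V]-[(S_{i}^{-})^{c}]}(x)=x_{i}^{-c}\operatorname{CC}_{V}(x),\]
which is exactly the shape of virtual representation used for \(P(X)\) in Section 4.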
**Lemma 2.9**.: _The cluster character of virtual representations is invariant under mutation, that is,_
\[\operatorname{CC}_{V}(x)=\operatorname{CC}_{\mu_{i}V}(x^{\prime}).\]
This lemma is essentially [8, Corollary 4.14]. Due to different conventions in notation, we give a complete proof here. We can assume \(V\) to be a decorated representation. Using the mutation rule for \(F\)-polynomials, the relation between \(\mathbf{g}\)- and \(\mathbf{h}\)-vectors and the definition of \(y\)-variables, we have
\[x^{\mathbf{g}(V)}F_{V}(y)=x^{\mathbf{g}(V)}(1+y_{i})^{-h_{i}(V)}(1+y_{i})^{h_{ i}(V)}F_{V}(y)=\]
\[=x^{\mathbf{g}(V)}(1+y_{i})^{-h_{i}(V)}(1+y_{i}^{-1})^{h_{i}(\mu_{i}V)}F_{\mu_ {i}V}(y^{\prime})=\]
\[=x^{\mathbf{g}(V)}(1+y_{i})^{-h_{i}(V)+h_{i}(\mu_{i}V)}y_{i}^{-h_{i}(\mu_{i}V) }F_{\mu_{i}V}(y^{\prime})=\]
\[=x^{\mathbf{g}(V)}(1+y_{i})^{-g_{i}(V)}\prod_{j\neq i}x_{j}^{-b_{j,i}h_{i}(\mu _{i}V)}F_{\mu_{i}V}(y^{\prime})=\]
\[=x_{i}^{g_{i}(V)}\prod_{j\neq i}x_{j}^{g_{j}(V)-b_{j,i}h_{i}(\mu_{i}V)}(1+y_{i })^{-g_{i}(V)}F_{\mu_{i}V}(y^{\prime}).\]
We rewrite the exponent of \(x_{j}\) using the mutation rule for \(\mathbf{g}\)-vectors as
\[g_{j}(V)-b_{j,i}h_{i}(\mu_{i}V)=g_{j}(V)-b_{j,i}h_{i}(V)+b_{j,i}g_{i}(V)=\]
\[=g_{j}(\mu_{i}V)-[b_{j,i}]_{+}g_{i}(V)+b_{j,i}g_{i}(V)=\]
\[=g_{j}(\mu_{i}V)-[b_{i,j}]_{+}g_{i}(V)\]
and thus \(x^{\mathbf{g}(V)}F_{V}(y)=\)
\[=x_{i}^{g_{i}(V)}\prod_{j\neq i}x_{j}^{g_{j}(\mu_{i}V)-[b_{i,j}]_{+}g_{i}(V)}( 1+y_{i})^{-g_{i}(V)}F_{\mu_{i}V}(y^{\prime})=\]
\[=(x_{i}^{-1}\prod_{j\neq i}x_{j}^{[b_{i,j}]_{+}}(1+y_{i}))^{g_{i}(\mu_{i}V)} \prod_{j\neq i}x_{j}^{g_{j}(\mu_{i}V)}F_{\mu_{i}V}(y^{\prime})=\]
\[=(x^{\prime})^{\mathbf{g}(\mu_{i}V)}F_{\mu_{i}V}(y^{\prime}),\]
proving the lemma.
## 3. LG seeds and cluster algebras
To an LG seed \(\mathbf{s}=(W,v_{1},\ldots,v_{n})\) we associate the skew-symmetric matrix \(B=B(\mathbf{s})=(b_{i,j})\) given by
\[b_{i,j}=\{v_{i},v_{j}\}.\]
Note that it has rank at most two. This definition is compatible with mutation, that is,
\[B(\mu_{i}(\mathbf{s}))=\mu_{i}(B(\mathbf{s})).\]
Namely, we have, in the notation of Definition 2.1:
\[\{v_{i}^{\prime},v_{k}^{\prime}\}=\{-v_{i},v_{k}+[\{v_{k},v_{i}\}]_{+}v_{i}\}= -\{v_{i},v_{k}\},\]
for \(i\neq k\), and similarly for \(\{v_{j}^{\prime},v_{i}^{\prime}\}\) and \(j\neq i\), whereas, for \(j\neq i\neq k\), we have
\[\{v_{j}^{\prime},v_{k}^{\prime}\}=\{v_{j}+[\{v_{j},v_{i}\}]_{+}v_{i},v_{k}+[\{ v_{k},v_{i}\}]_{+}v_{i}\}=\]
\[=\{v_{j},v_{k}\}+[\{v_{k},v_{i}\}]_{+}\{v_{j},v_{i}\}+[\{v_{j},v_{i}\}]_{+}\{ v_{i},v_{k}\}=\]
\[=b_{j,k}+[b_{j,i}]_{+}b_{i,k}+b_{j,i}[b_{k,i}]_{+}=b_{j,k}+[b_{j,i}]_{+}[b_{i, k}]_{+}-[-b_{j,i}]_{+}[-b_{i,k}]_{+}.\]
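The rule just derived, \(b^{\prime}_{j,k}=b_{j,k}+[b_{j,i}]_{+}[b_{i,k}]_{+}-[-b_{j,i}]_{+}[-b_{i,k}]_{+}\) for \(j\neq i\neq k\), together with the sign change in the \(i\)-th row and column, is the usual matrix mutation and can be checked mechanically. The following short script is only an illustration (the function names are ours, not from the text); it verifies on the first matrix of Section 4 that \(\mu_{i}\) preserves skew-symmetry and is an involution.

```python
import numpy as np

def pos(x):
    """[x]_+ = max(x, 0)."""
    return max(x, 0)

def mutate(B, i):
    """Matrix mutation mu_i of a skew-symmetric integer matrix B (vertex i is 0-indexed)."""
    B = np.asarray(B, dtype=int)
    n = B.shape[0]
    M = np.empty_like(B)
    for j in range(n):
        for k in range(n):
            if j == i or k == i:
                M[j, k] = -B[j, k]                      # sign change in row/column i
            else:                                       # the rule displayed above
                M[j, k] = B[j, k] + pos(B[j, i]) * pos(B[i, k]) - pos(-B[j, i]) * pos(-B[i, k])
    return M

B = np.array([[0, 3, -3], [-3, 0, 3], [3, -3, 0]])      # Markov matrix, cf. Section 4
for i in range(3):
    assert (mutate(B, i) == -mutate(B, i).T).all()      # stays skew-symmetric
    assert (mutate(mutate(B, i), i) == B).all()         # mu_i is an involution
print(mutate(B, 0))                                     # [[0,-3,3],[3,0,-6],[-3,6,0]]
```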
Inspired by [6], we define
**Definition 3.1**.: _The comparison map_
\[\Phi=\Phi_{\mathbf{s}}:\mathbb{C}[z_{1}^{\pm},z_{2}^{\pm}]\to\mathbb{C}[x_{1 }^{\pm},\ldots,x_{n}^{\pm}]\]
_is defined by_
\[\Phi_{\mathbf{s}}(z^{v})=\prod_{i}x_{i}^{-(v,v_{i})}.\]
**Lemma 3.2**.: _The comparison map \(\Phi_{\mathbf{s}}\) is compatible with the mutations of Sections 2.1 and 2.3, respectively, that is, we have_
\[\mu_{i}\circ\Phi_{\mathbf{s}}=\Phi_{\mu_{i}\mathbf{s}}\circ\mu_{i}.\]
Namely, we have
\[\mu_{i}(\Phi_{\mathbf{s}}(z^{v}))=\mu_{i}(\prod_{j}x_{j}^{-(v,v_{j})})=\]
\[=({x_{i}}^{-1}\prod_{j}{x_{j}}^{[b_{i,j}]_{+}}(1+y_{i}))^{-(v,v_{i})}\prod_{j\neq i }{x_{j}}^{-(v,v_{j})}=\]
\[={x_{i}}^{(v,v_{i})}\prod_{j\neq i}{x_{j}}^{-(v,v_{j})-[b_{i,j}]_{+}(v,v_{i})}( 1+y_{i})^{-(v,v_{i})}.\]
On the other hand, we have
\[\Phi_{\mu_{i}\mathbf{s}}(\mu_{i}(z^{v}))=\Phi_{\mu_{i}\mathbf{s}} (z^{v}(1+z^{v_{i}^{\perp}})^{-(v_{i},v)})=\] \[=\prod_{j}{x_{j}}^{-(v,v_{j}^{\prime})}(1+\prod_{j}{x_{j}}^{-(v_{i }^{\perp},v_{j}^{\prime})})^{-(v_{i},v)}.\]
Now
\[\prod_{j}{x_{j}}^{-(v_{i}^{\perp},v_{j}^{\prime})}=\prod_{j}{x_{j}}^{\{v_{i}, v_{j}^{\prime}\}}=\prod_{j}{x_{j}}^{b_{i,j}}=y_{i}^{-1},\]
so that
\[(1+\prod_{j}{x_{j}}^{-(v_{i}^{\perp},v_{j}^{\prime})})^{-(v_{i},v)}=(1+{y_{i}} ^{-1})^{-(v_{i},v)}={y_{i}}^{(v_{i},v)}(1+y_{i})^{-(v_{i},v)},\]
and
\[\prod_{j}{x_{j}}^{-(v,v_{j}^{\prime})}={x_{i}}^{(v,v_{i})}\prod_{j\neq i}{x_{j }}^{-(v,v_{j})-[b_{j,i}]_{+}(v,v_{i})},\]
which combines to \(\Phi_{\mu_{i}\mathbf{s}}(\mu_{i}(z^{v}))=\)
\[={x_{i}}^{(v,v_{i})}\prod_{j\neq i}{x_{j}}^{-(v,v_{j})-[b_{j,i}]_{+}(v,v_{i})}{y_{i}}^{(v_{i},v)}(1+y_{i})^{-(v_{i},v)}=\]
\[={x_{i}}^{(v,v_{i})}\prod_{j\neq i}{x_{j}}^{-(v,v_{j})-[b_{i,j}]_{+}(v,v_{i})}(1+y_{i})^{-(v_{i},v)}=\]
\[=\mu_{i}(\Phi_{\mathbf{s}}(z^{v}))\]
as claimed.
## 4. Floer potentials as \(F\)-polynomials
For each of the LG seeds of Theorem 2.3, we will now describe the quiver corresponding to the \(B\)-matrix induced by the seed:
\[\left[\begin{array}{rrrr}0&3&-3\\ -3&0&3\\ 3&-3&0\end{array}\right],\ \left[\begin{array}{rrrr}0&-2&2&0\\ 2&0&0&-2\\ -2&0&0&2\\ 0&2&-2&0\end{array}\right],\ \left[\begin{array}{rrrr}0&3&-1&-2\\ -3&0&2&1\\ 1&-2&0&1\\ 2&-1&-1&0\end{array}\right],\]
\[\left[\begin{array}{rrrr}0&0&-1&-1&2\\ 0&0&1&1&-2\\ 1&-1&0&1&-1\\ 1&-1&-1&0&1\\ -2&2&1&-1&0\end{array}\right],\ \left[\begin{array}{rrrrrr}0&0&1&-1&1&-1\\ 0&0&-1&1&-1&1\\ -1&1&0&0&1&-1\\ 1&-1&0&0&-1&1\\ -1&1&-1&1&0&0\\ 1&-1&1&-1&0&0\end{array}\right].\]
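All five matrices are skew-symmetric and, being of the form \(B(\mathbf{s})\) for LG seeds, have rank at most two (Section 3). This is trivial to confirm; the following snippet (ours, purely as a sanity check, with the matrices listed in the order displayed above) does so numerically.

```python
import numpy as np

B_list = [
    [[0, 3, -3], [-3, 0, 3], [3, -3, 0]],
    [[0, -2, 2, 0], [2, 0, 0, -2], [-2, 0, 0, 2], [0, 2, -2, 0]],
    [[0, 3, -1, -2], [-3, 0, 2, 1], [1, -2, 0, 1], [2, -1, -1, 0]],
    [[0, 0, -1, -1, 2], [0, 0, 1, 1, -2], [1, -1, 0, 1, -1],
     [1, -1, -1, 0, 1], [-2, 2, 1, -1, 0]],
    [[0, 0, 1, -1, 1, -1], [0, 0, -1, 1, -1, 1], [-1, 1, 0, 0, 1, -1],
     [1, -1, 0, 0, -1, 1], [-1, 1, -1, 1, 0, 0], [1, -1, 1, -1, 0, 0]],
]
for k, B in enumerate(B_list, start=1):
    B = np.array(B)
    assert (B == -B.T).all()                      # skew-symmetric
    assert np.linalg.matrix_rank(B) <= 2          # rank at most two, as for any B(s)
    print(f"matrix {k}: size {B.shape[0]}, rank {np.linalg.matrix_rank(B)}")
```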
For each of these quivers \(Q(X)\), associated to a toric del Pezzo surface \(X\), we choose a non-degenerate potential \(S(X)\). For any sequence \(\mathbf{i}=(i_{1},\ldots,i_{N})\) of vertices in \(Q(X)\) without repetitions, we define
\[(Q_{\mathbf{i}}(X),S_{\mathbf{i}}(X))=\mu_{i_{N}}\cdots\mu_{i_{1}}(Q(X),S(X)).\]
Since the mutation operations on LG seeds, skew-symmetric matrices and quivers, respectively, are compatible, we have
\[Q(B(\mathbf{s}_{\mathbf{i}}(X)))=Q_{\mathbf{i}}(X).\]
For example, for the quiver associated to \(X=\mathbb{C}P^{2}\), the quivers \(Q_{\mathbf{i}}(X)\) are (all) Markov quivers, that is, triangles with \(3a\), \(3b\), \(3c\) parallel arrows, respectively, where \((a,b,c)\) is a Markov triple.
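For concreteness: the Markov triples referred to here are the positive integer solutions of the Markov equation \(a^{2}+b^{2}+c^{2}=3abc\) (a classical fact not repeated in the text), and they are all reached from \((1,1,1)\) by repeatedly replacing one entry \(a\mapsto 3bc-a\), the numerical shadow of mutation at the corresponding vertex. A small enumeration script (ours):

```python
def markov_triples(bound):
    """All Markov triples with entries <= bound, generated from (1, 1, 1) by Vieta jumps."""
    seen, stack = set(), [(1, 1, 1)]
    while stack:
        a, b, c = sorted(stack.pop())
        if (a, b, c) in seen or c > bound:
            continue
        assert a * a + b * b + c * c == 3 * a * b * c   # Markov equation
        seen.add((a, b, c))
        stack += [(3 * b * c - a, b, c), (a, 3 * a * c - b, c), (a, b, 3 * a * b - c)]
    return sorted(seen)

print(markov_triples(200))
# [(1, 1, 1), (1, 1, 2), (1, 2, 5), (1, 5, 13), (1, 13, 34), (1, 34, 89), (2, 5, 29), ...]
```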
We now claim:
**Lemma 4.1**.: _For every toric del Pezzo surface \(X\), there is a virtual representation \(P(X)\) of \((Q(X),S(X))\) of the form \(P(X)=[V]-[(S_{i}^{-})^{c}]\) for nonnegative \(c\), such that_
\[\operatorname{CC}_{P(X)}(x)=\Phi_{\mathbf{s}(X)}(W(X)).\]
_In other words, the LG potential \(W(X)\in\mathbb{C}[z_{1}^{\pm},z_{2}^{\pm}]\) associated to \(X\) in Theorem 2.3 corresponds to the cluster character \(\operatorname{CC}_{P(X)}(x)\) of \(P(X)\), which belongs to \(\mathbb{C}[x_{1}^{\pm},\ldots,x_{n}^{\pm}]\), under the map \(\Phi_{\mathbf{s}(X)}\)._
Note that, in the way they are depicted, each of the above quivers has a canonical _outer_ cycle containing every vertex once. We fix a vertex \(i_{0}\) of \(Q\) arbitrarily.
We then define a representation \(V\) as follows: we have \(V_{i_{0}}\)=0 and \(V_{i}=\mathbb{C}\) for all \(i\neq i_{0}\). Moreover, we have \(V_{i}^{-}=0\) for all \(i\). We have \(V_{\alpha}=0\) whenever \(\alpha\) does not belong to the outer cycle, and \(V_{\alpha}\neq 0\) whenever \(\alpha\) belongs to the outer cycle and is not incident with \(i_{0}\). Recalling the maps \(\gamma_{i}\) used in the definition of \(\mathbf{g}\)-vectors, we moreover define \(c=\dim\operatorname{Ker}(\gamma_{i_{0}})+1\).
We claim that (under mild hypotheses on the non-degenerate potential in the cases \(X=\operatorname{Bl}_{2}\mathbb{C}P^{2}\) and \(\operatorname{Bl}_{3}\mathbb{C}P^{2}\)) the above \(V\) defines a representation of the quiver with potential \((Q(X),S(X))\), and that \(P(X)=[V]-[(S_{1}^{-})^{c}]\) fulfills the assumption of the lemma. We fix, for example, \(i_{0}=1\).
In the case \(X=\mathbb{C}P^{2}\), the representation \(V\) clearly fulfills the Jacobian relations since there are no consecutive arrows supporting non-zero maps in \(V\). We have
\[y_{1}=x_{2}^{-3}x_{3}^{3},\;y_{2}=x_{1}^{3}x_{3}^{-3},\;y_{3}=x_{1}^{-3}x_{2}^ {3},\]
and thus
\[F_{P}(y)=F_{V}(y)=1+y_{2}+y_{2}y_{3}=1+x_{1}^{3}x_{3}^{-3}+x_{2}^{3}x_{3}^{-3}.\]
The \(\gamma\)-maps are of the form \(0=\gamma_{2}:\mathbb{C}^{3}\to 0\) and \(0=\gamma_{3}:0\to\mathbb{C}^{3}\), so that
\[\mathbf{g}_{V}=(\dim\operatorname{Ker}(\gamma_{1}),2,-1),\]
and thus
\[\mathbf{g}_{P}=(-1,2,-1),\]
yielding
\[\operatorname{CC}_{P}(x)=x_{1}^{-1}x_{2}^{2}x_{3}^{-1}(1+x_{1}^{3}x_{3}^{-3}+ x_{2}^{3}x_{3}^{-3})=x_{1}^{-1}x_{2}^{2}x_{3}^{-1}+x_{1}^{-1}x_{2}^{-1}x_{3}^{2 }+x_{1}^{2}x_{2}^{-1}x_{3}^{-1}.\]
This indeed coincides with \(\Phi(z_{1}+z_{2}+z_{1}^{-1}z_{2}^{-1})\) since, by definition,
\[\Phi(z_{1})=x_{1}^{-1}x_{2}^{2}x_{3}^{-1},\ \Phi(z_{2})=x_{1}^{-1}x_{2}^{-1}x_{3} ^{2}.\]
The verification in the cases \(X=\mathbb{C}P^{1}\times\mathbb{C}P^{1}\) and \(X=\mathrm{Bl}_{1}\mathbb{C}P^{2}\) is entirely similar.
For the remaining two cases, we first depict representations of the relevant quivers:
In the case \(X=\mathrm{Bl}_{2}\mathbb{C}P^{2}\), there are two new phenomena to be discussed. First of all, _we assume that the potential \(S\) does not involve the outer \(5\)-cycle with non-zero coefficient_ (otherwise \(V\) had to fulfill a Jacobian relation which might force the compositions \(M_{1}^{2}M_{1}^{3}M_{1}^{4}\) and \(M_{2}^{2}M_{1}^{3}M_{1}^{4}\) to be proportional). Moreover, in computing the \(\mathbf{g}\)-vector, we will see why it is necessary to represent the arrows outside the outer cycle by zero. Namely, this forces the \(\gamma\)-maps \(\gamma_{2}:\mathbb{C}^{2}\to\mathbb{C}^{2}\) and \(\gamma_{3}:\mathbb{C}^{2}\to\mathbb{C}\) to be zero, whereas \(\gamma_{4}:\mathbb{C}^{2}\to\mathbb{C}\) is non-zero since it involves a linear combination of \(M_{1}^{2}\) and \(M_{2}^{2}\) which we can, in general, assume to be non-zero. The resulting \(\mathbf{g}\)-vector is thus
\[\mathbf{g}_{V}=(\dim\operatorname{Ker}(\gamma_{1}),1,1,0,-1),\]
and thus
\[\mathbf{g}_{P}=(-1,1,1,0,-1).\]
The rest of the verification is again straight-forward, so that both \(\mathrm{CC}_{P}(x)\) and \(\Phi(W(x))\) coincide with
\[x_{1}^{-1}x_{2}x_{3}x_{5}^{-1}+x_{1}x_{2}^{-1}x_{4}x_{5}^{-1}+x_{1}x_{2}^{-1}x _{3}^{-1}x_{5}+x_{1}^{-1}x_{2}x_{4}^{-1}x_{5}+x_{3}^{-1}x_{4}^{-1}x_{5}^{2}.\]
The verification in the final case \(X=\mathrm{Bl}_{3}\mathbb{C}P^{2}\) is then again similar: _we assume that the potential \(S\) does not involve the \(5\)-cycle through all vertices except \(1\) with non-zero coefficient_ (which could otherwise force the composition \(M_{1}^{5}M_{1}^{2}M_{1}^{4}M_{1}^{6}\) to
be zero, resulting in the wrong \(F\)-polynomial of \(V\)). This finishes the proof of the lemma.
We can now mutate the virtual representation \(P(X)\) to all mutated quivers with potentials:
**Definition 4.2**.: _For every sequence of vertices \(\mathbf{i}=(i_{1},\ldots,i_{N})\) without repetition, define \(P_{\mathbf{i}}(X)\) as the representation of \((Q_{\mathbf{i}}(X),S_{\mathbf{i}}(X))\) given by_
\[P_{\mathbf{i}}(X)=\mu_{i_{N}}\cdots\mu_{i_{1}}P(X).\]
The main result of this paper now follows immediately:
**Theorem 4.3**.: _Every Floer potential of a monotone Lagrangian torus \(L_{\mathbf{i}}(X)\) appearing in a Lagrangian seed with associated LG seed \(s_{\mathbf{i}}(X)\) corresponds to the cluster character \(\mathrm{CC}_{P_{\mathbf{i}}(X)}(x)\) of the virtual representation \(P_{\mathbf{i}}(X)\) of \((Q_{\mathbf{i}}(X),S_{\mathbf{i}}(X))\) under the comparison map \(\Phi_{\mathbf{s}_{\mathbf{i}}(X)}\), that is,_
\[\Phi_{\mathbf{s}_{\mathbf{i}}(X)}(W_{L_{\mathbf{i}}(X)})=\mathrm{CC}_{P_{ \mathbf{i}}(X)}(x).\]
Indeed, the claimed equality holds for the empty sequence \(\mathbf{i}\), the right hand side is mutation invariant by Lemma 2.9 and the definition of the \(P_{\mathbf{i}}(X)\), and the left hand side is mutation invariant by Lemma 3.2, the definition of the \(W_{\mathbf{i}}(X)\), and the mutation compatibility of Floer potentials of Theorem 2.5.
**Remark:** The authors expect that, using [10], explicit (graded) potentials for the above five quivers \(Q(X)\) can be constructed, so that the representations \(P_{\mathbf{i}}(X)\), and conjecturally also their cluster characters, can be determined more explicitly as graded representations of the \(Q_{\mathbf{i}}(X)\).
|
2301.13753 | Dynamic Scheduled Sampling with Imitation Loss for Neural Text
Generation | State-of-the-art neural text generation models are typically trained to
maximize the likelihood of each token in the ground-truth sequence conditioned
on the previous target tokens. However, during inference, the model needs to
make a prediction conditioned on the tokens generated by itself. This
train-test discrepancy is referred to as exposure bias. Scheduled sampling is a
curriculum learning strategy that gradually exposes the model to its own
predictions during training to mitigate this bias. Most of the proposed
approaches design a scheduler based on training steps, which generally requires
careful tuning depending on the training setup. In this work, we introduce
Dynamic Scheduled Sampling with Imitation Loss (DySI), which maintains the
schedule based solely on the training time accuracy, while enhancing the
curriculum learning by introducing an imitation loss, which attempts to make
the behavior of the decoder indistinguishable from the behavior of a
teacher-forced decoder. DySI is universally applicable across training setups
with minimal tuning. Extensive experiments and analysis show that DySI not only
achieves notable improvements on standard machine translation benchmarks, but
also significantly improves the robustness of other text generation models. | Xiang Lin, Prathyusha Jwalapuram, Shafiq Joty | 2023-01-31T16:41:06Z | http://arxiv.org/abs/2301.13753v1 | # Dynamic Scheduled Sampling with Imitation Loss for Neural Text Generation
###### Abstract
State-of-the-art neural text generation models are typically trained to maximize the likelihood of each token in the ground-truth sequence conditioned on the previous target tokens. However, during inference, the model needs to make a prediction conditioned on the tokens generated by itself. This train-test discrepancy is referred to as _exposure bias_. Scheduled sampling is a curriculum learning strategy that gradually exposes the model to its own predictions during training to mitigate this bias. Most of the proposed approaches design a scheduler based on training steps, which generally requires careful tuning depending on the training setup. In this work, we introduce **D**ynamic **S**cheduled Sampling with **I**mitation **L**oss (DySI), which maintains the schedule based solely on the training time accuracy, while enhancing the curriculum learning by introducing an imitation loss, which attempts to make the behavior of the decoder indistinguishable from the behavior of a teacher-forced decoder. DySI is universally applicable across training setups with minimal tuning. Extensive experiments and analysis show that DySI not only achieves notable improvements on standard machine translation benchmarks, but also significantly improves the robustness of other text generation models.
## 1 Introduction
Advances in deep learning have led to great achievements in neural text generation tasks including machine translation (Vaswani et al., 2017; Wu et al., 2019), summarization (Zhang et al., 2019; Lewis et al., 2020) and language modeling (Radford et al., 2019; Brown et al., 2020). The dominant approach to date generates the output sequence with a decoder in an autoregressive manner (Bahdanau et al., 2014; Vaswani et al., 2017). To realize the autoregressive formulation, most of the text generation models are trained to maximize the likelihood of each token in the ground-truth sequence conditioned on the previous target tokens with Maximum Likelihood Estimation (MLE). In particular, _Teacher Forcing_(Williams and Zipser, 1989) has been the de facto strategy to help stabilize and speed up the training, where the decoder takes the ground-truth token from the previous time step as the conditioning input for generating the next token. At inference time, however, the decoder does not have access to the previous ground-truth tokens when it is predicting the next token. Thus, the decoder has to instead make a prediction conditioned on the tokens generated by itself so far, resulting in a train-test discrepancy, often referred to as _exposure bias_(Bengio et al., 2015). This discrepancy can lead to error accumulation over time steps as the model might encounter unexpected (though not necessarily wrong) tokens that it has never been exposed to during training.
The methods proposed to combat the exposure bias problem can be primarily categorized into two groups: _Non-MLE-based_ approaches (Goyal et al., 2016; Yu et al., 2017; Lin et al., 2017; Nie et al., 2019) and _MLE-based_ approaches (Bengio et al., 2015; Song et al., 2021; Liu et al., 2021b). Most non-MLE-based approaches take advantage of generative adversarial networks (Goodfellow et al., 2014) and/or reinforcement learning methods to avoid teacher forcing. However, the advantages of these approaches often come with the price of training instability and difficulty, and empirically they still struggle to outperform the MLE baseline (He et al., 2021). On the other hand, MLE-based approaches typically apply curriculum learning (Bengio et al., 2009) strategy to gently bridge the gap between training and inference. These methods often consist of a scheduler, _e.g.,_ based on training steps, which controls the extent to which the model should be exposed to its own predictions during training. Intuitively, the model should be exposed to more of its own outputs as the training proceeds.
MLE-based approaches are inherently more efficient and parallelizable as the models do not need to generate the full sequence in inference mode to compute the training loss. Also, MLE has been the mainstream method for training deep neural models. Our work in this paper thus con
cerns MLE-based training. Bengio et al. (2015) propose _scheduled sampling_ to alleviate exposure bias, where the decoder uses the ground-truth previous token as input with probability \(\epsilon\), and uses its own prediction with probability \((1-\epsilon)\). The probability \(\epsilon\) is controlled by a scheduler to decay based on the training steps. Such a curriculum learning strategy allows the model to use ground-truth previous tokens at the initial stage of the training and gradually exposes the model to more and more of its own predictions. Zhang et al. (2019) modify the scheduled sampling of Bengio et al. (2015) by allowing the model to sample from a set of oracle tokens (_e.g.,_ synonym of the target token) as the previous token to simulate the model's output at inference time.
Song et al. (2021) incorporate an error correction mechanism with two decoders. A query stream decoder having access to only positional information first predicts intermediate results, which is then corrected by a content stream decoder. The inference requires running through both decoders, which lowers the efficiency. Liu et al. (2021) propose to use a scheduler based on both training and decoding steps. Intuitively, the later decoding steps usually have higher error rates during inference due to error accumulation. Therefore, the model output should be sampled as input with a higher chance for the later decoding steps during training.
A schedule (linear or nonlinear) based on training steps usually requires careful design for the specific problem setup as different batch sizes may lead to different training speeds and different tasks may have different convergence rates. In this work, we introduce **D**ynamic **S**cheduled Sampling with **I**mitation loss (DySI). First, we propose a scheduler that solely depends on training time accuracy. By tracking training progress through accuracy, we avoid having to perform a costly heuristic search to find a suitable scheduler for each different problem setup. In addition, we use an _imitation loss_ to enforce the condition that the generative behavior should match teacher-forced behavior as closely as possible, a core idea in professor forcing (Goyal et al., 2016). Our imitation loss uses the decoder in teacher-forcing mode as the expert to regularize/guide the decoder's behavior when it takes self-generated tokens as input. To the best of our knowledge, this is novel for an MLE-based setup.
We first conduct experiments on machine translation (MT) to demonstrate how our approach performs in various aspects such as generalization and degeneration. Results show that training with DySI achieves notable improvements on standard MT benchmarks. More importantly, we introduce a novel framework for evaluating the robustness of a language model (LM) when exposed to erroneous or toxic context, using auto-completion as a test bed. Analysis shows DySI yields a significantly more robust LM across various kinds of perturbations and against toxic text generation, and overall it produces better quality text.
## 2 Background
**Text generation.** Typical neural text generation models use an autoregressive factorization of the joint probability over the target sequence. An autoregressive decoder trained with maximum likelihood estimation (MLE) learns to assign a probability to a target sequence \(\mathbf{y}=(y_{1},\cdots,y_{T})\) containing \(T\) tokens by factorizing the joint probability using the chain rule:
\[\mathcal{L}_{\text{MLE}}=-\sum_{t=1}^{T}\log P(y_{t}|\mathbf{y}_{<t},\mathbf{x}), \tag{1}\]
where \(\mathbf{x}\) is a source input for conditional text generation (_e.g.,_ machine translation) and \(\emptyset\) for unconditional generation (_e.g.,_ language modeling), and \(\mathbf{y}_{<t}=(y_{1},\ldots,y_{t-1})\) denotes tokens before the current step \(t\). To train autoregressive models, teacher forcing (Williams and Zipser, 1989) is commonly used for faster convergence and training stability. In this method, ground-truth tokens from the previous steps are used as input to predict the current token \(y_{t}\). However, it also causes the train-test discrepancy or _exposure bias_ as the target tokens are not available at inference time.
**Scheduled sampling.** Scheduled sampling (Bengio et al., 2015) is a curriculum learning approach to alleviate exposure bias. Instead of conditioning only on the ground-truth context, the model conditions on a sequence \(\hat{\mathbf{y}}_{<t}\) that mixes tokens from the ground-truth sequence \(\mathbf{y}_{<t}\) and model's previous predictions \(\tilde{\mathbf{y}}_{<t}\). Essentially, at each decoding step, a ground-truth token is used as input with a probability of \(\epsilon\) and the previous prediction is used with a probability of \((1-\epsilon)\) as:
\[\hat{y}_{t}=\begin{cases}y_{t}&\text{with probability}\ \epsilon\\ \tilde{y}_{t}&\text{with probability}\ (1-\epsilon)\end{cases} \tag{2}\]
The method maintains a decay schedule for the probability \(\epsilon\) based on training steps such that the model is exposed to more self-generated tokens at the later stage of the training. The model is trained with the standard MLE loss:
\[\mathcal{L}_{\text{SS}}=-\sum_{t=1}^{T}\log P(y_{t}|\hat{\mathbf{y}}_{<t},\mathbf{x}). \tag{3}\]
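In the Transformer setting used throughout this paper, Eq. (2) is realized in the usual two-pass approximation, where the previous predictions \(\tilde{y}_{t}\) come from a teacher-forced pass (this approximation is noted again in the experimental settings of Section 4.1). A minimal sketch of the mixing step (ours; tensor shapes and padding are simplified):

```python
import torch

def scheduled_sampling_mix(gold, teacher_forced_logits, epsilon):
    """Eq. (2): keep the gold token with probability epsilon, otherwise take the
    model's teacher-forced prediction. gold: (B, T) token ids, logits: (B, T, V)."""
    preds = teacher_forced_logits.argmax(dim=-1)              # \tilde{y}
    keep_gold = torch.rand(gold.shape, device=gold.device) < epsilon
    return torch.where(keep_gold, gold, preds)                # \hat{y}
```

The mixed sequence is then fed back as decoder input and trained with Eq. (3), with \(\epsilon\) decayed over training steps.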
## 3 Methodology
Figure 1 shows an illustration of DySI. Different from traditional MLE training, it consists of a **dynamic scheduler** and an **imitation** module. The decoder is first run in teacher-forcing mode (henceforth referred to as _teacher-forced decoder_) to obtain an expert distribution, and later
in operative mode (henceforth referred to as _operative decoder_) during training. The dynamic scheduler determines the sequence containing target and model-generated tokens that is provided as input to the operative decoder to perform training. The imitation loss constrains the operative decoder behavior to match the teacher-forced decoder's behavior.
### Dynamic Scheduled Sampling
Most of the proposed variants of scheduled sampling differ in the way they perform the _sampling_, _i.e.,_ they use different sampling strategies to decide which decoding positions should take the model-generated previous token as input. For example, Bengio et al. (2015); Zhang et al. (2019) uniformly sample the decoding positions from a sequence with a probability (1 - \(\epsilon\)), where \(\epsilon\) decays with training steps. Liu et al. (2021) propose to select the positions where the model has high prediction confidence, _e.g.,_\(p(y_{t})>0.9\), while Liu et al. (2021) propose to sample the positions based on both training and decoding steps with a joint probability distribution function. Instead of proposing a new sampling method, we propose a new **scheduler** that does not rely on training steps, but instead uses the model's performance directly to keep track of the training progress.
Training progress can highly depend on the task, dataset, and experiment setup. For instance, Vaswani et al. (2017) report good performance on WMT'14 En-Fr translation task with \(\approx\) 300K updates, while Ott et al. (2018) need only \(\approx\) 90K updates to get better results with the same model due to a larger batch size. A scheduler based on training steps will inevitably require heuristic-based tuning for different experimental conditions, which could be expensive. Moreover, such a scheduler makes the assumption that all the training instances in a batch have the same training progress.
In light of this, we propose to use _training time accuracy_ for the schedule, as training accuracy gives more direct feedback about the learning progress. As shown in Figure 1, given a target sequence \(\mathbf{y}=(y_{1},\ldots,y_{T})\), we first run the teacher-forced decoder to obtain a sequence of distributions over the vocabulary, \((P_{1},\ldots,P_{T})\). We then (greedily) sample the distributions to obtain the predictions \(\tilde{\mathbf{y}}=(\tilde{y}_{1},\ldots,\tilde{y}_{T})\) where \(\tilde{y}_{t}=\arg\max(P_{t})\ \forall t\in[1,\ldots,T]\). We can compute the training time accuracy for a sequence as \(\textit{Acc}(\mathbf{y},\tilde{\mathbf{y}})=(\sum_{t=1}^{T}\mathbb{1}(y_{t}=\tilde{y} _{t}))/T\). The scheduler then decides the number of positions (\(N\)) in the ground-truth sequence to be replaced with tokens generated by the teacher-forced decoder as:
\[N\sim\beta\cdot\mathcal{U}(0,\textit{Acc}(\mathbf{y},\tilde{\mathbf{y}})\cdot T) \tag{4}\]
where \(\mathcal{U}\) denotes a uniform distribution and \(\beta\in[0,1]\) is a hyper-parameter that provides further control on the sampling strength in addition to the inherent dynamic control according to training accuracy. Notice that \(N\) changes dynamically based on the training accuracy of each instance and is agnostic to training steps. As a sampling strategy, we choose \(N\) positions in the sequence randomly and uniformly. A random selection as opposed to selection based on high confidence avoids _confirmation bias_(Tarvainen and Valpola, 2017) where the model accumulates its own errors, and potentially exposes the model to more varied input-output samples, which in turn helps the _behavior cloning_, as discussed in the next section. We view our method, dynamic scheduled sampling and imitation loss, as a unified approach where the random selection in sampling also contributes to the imitation process. We further discuss the relation between our method and other work that does not require a scheduler based on training steps in Appendix F.
Ultimately, the output of the dynamic scheduler is a sequence \(\hat{\mathbf{y}}\) that mixes the tokens generated by the teacher-forced decoder (_i.e.,_ tokens from \(\tilde{\mathbf{y}}\)) and the tokens from the ground-truth sequence \(\mathbf{y}\). This sequence \(\hat{\mathbf{y}}\) is then used as the input for training the decoder in its operative mode. Note that with a Transformer architecture (Vaswani et al., 2017), all the decoding steps can be trained in parallel (within a layer) by masking out the future tokens.
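A sketch of the scheduler (illustrative only; the function name is ours, and padding, masking and vectorization over the batch are omitted):

```python
import torch

def dynamic_schedule(gold, teacher_forced_logits, beta=0.5):
    """Sec. 3.1: per sequence, compute the teacher-forced training accuracy,
    draw N ~ beta * U(0, Acc * T) (Eq. 4), and replace N uniformly chosen
    positions of the gold sequence by the teacher-forced predictions.
    gold: (B, T) token ids, teacher_forced_logits: (B, T, V)."""
    preds = teacher_forced_logits.argmax(dim=-1)               # \tilde{y}
    mixed = gold.clone()
    B, T = gold.shape
    for b in range(B):                                         # schedule is per instance
        acc = (preds[b] == gold[b]).float().mean().item()      # Acc(y, \tilde{y})
        n = int(beta * torch.rand(1).item() * acc * T)         # N ~ beta * U(0, Acc*T)
        if n > 0:
            pos = torch.randperm(T, device=gold.device)[:n]    # positions chosen uniformly
            mixed[b, pos] = preds[b, pos]
    return mixed                                               # \hat{y}
```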
### Imitation Loss
In order to mitigate exposure bias entirely, the decoder should behave indistinguishably whether it is fed with a ground-truth token or a self-generated token as input. Such a property would allow the model to generalize beyond the context it sees in training. This has been proven effective by Goyal et al. (2016), where they borrow the idea of GAN (Goodfellow et al., 2014) to use a discriminator to distinguish between the hidden states of the decoder in teacher-forcing mode and inference mode. However, putting the decoder in inference mode during training makes the training slow as it requires sampling the full sequence in an autoregressive manner (_i.e.,_ no parallelization). In addition, training GANs for texts can be challenging as it requires the generator and the discriminator to be on par with each other.
We instead propose to close the gap between the teacher-forced decoder behavior and the operative decoder behavior in the MLE-based setup. To match the operative decoder behavior to the teacher-forced decoder behavior, we bring the intuition of _imitation learning_. In particular, the operative decoder can be seen as a _learner_, which tries to imitate the behavior of an _expert_ at each decoding step. This is also known as behaviour cloning.
**Expert.** As shown in Figure 1, the expert, in our case, is the teacher-forced decoder, which provides demonstrations
to the learner in the form of sequences of distributions over actions (tokens in the vocabulary). At each decoding step \(t\), the expert takes the previous target token \(y_{t-1}\) as the observation, and maps its state \(s_{t}\) to the action distribution based on its policy \(\pi_{\text{tf}}(s_{t})\in\mathbb{R}^{[\mathbb{V}]}\), where \(\mathbb{V}\) is the vocabulary and the subscript tf stands for teacher-forcing. More formally,
\[\pi_{\text{tf}}(s_{t})=\operatorname{softmax}(s_{t})=P_{\theta_{\text{tf}}}(y_ {t}|\mathbf{y}_{<t},\mathbf{x}) \tag{5}\]
The expert-generated action distribution is regarded as the supervision signal to guide the learner.
**Learner.** The learner is the decoder running in operative mode in Figure 1. Unlike the expert, it will not always take previous ground-truth tokens as input; instead, it will also use the predicted tokens from the expert (see \(\operatorname{arg\,max}\) in Figure 1) according to the dynamic schedule (§3.1). Specifically, for an observed sequence \(\hat{\mathbf{y}}\) comprising ground-truth and model-generated tokens, the learner generates an action distribution at every step as:
\[\pi_{\text{op}}(s^{\prime}_{t})=P_{\theta_{\text{op}}}(y_{t}|\hat{\mathbf{y}}_{<t },\mathbf{x}) \tag{6}\]
where op denotes the operative decoder. Notice that the predicted tokens in \(\hat{\mathbf{y}}\) provide new demonstrations (unseen in the original training data) for the learner comprising states that it may experience during inference. Since the learner and expert share the same parameters, it also simulates the mistakes that the learner may make during inference. Overall, once trained, the operative decoder is expected to behave more robustly under different decoding conditions.
**Learning from expert demonstration.** To match the learner's policy with the expert's, we minimize the Kullback-Leibler divergence (Kullback and Leibler, 1951) (\(D_{\text{KL}}\)) between the two policies to guide the operative decoder behavior so that it better matches the teacher-forced behavior, considering the latter fixed:
\[\begin{split}\mathcal{L}_{\text{LL}}(\theta_{\text{op}})& =\sum_{t=1}^{T}D_{\text{KL}}(\pi_{\text{tf}}(s_{t})||\pi_{\text{op }}(s^{\prime}_{t}))\\ &=\sum_{t=1}^{T}D_{\text{KL}}(P_{\theta_{\text{tf}}}(y_{t}|\mathbf{ y}_{<t},\mathbf{x})||P_{\theta_{\text{op}}}(y_{t}|\hat{\mathbf{y}}_{<t},\mathbf{x}))\end{split} \tag{7}\]
Imposing an auxiliary loss to learn from the output distribution of the teacher-forced decoder has another advantage. Although a teacher-forced decoder may fail to predict the exact target token in some positions, it (after being trained enough) often assigns a higher probability mass to the plausible translations in the context, such as synonyms of the target token (Li and Lu, 2021). Arguably, the soft output distribution contains much more information compared to the one-hot target, which helps the learning of the operative decoder (Furlanello et al., 2018). In Appendix E, we justify why we call the above learning process imitation rather than knowledge distillation (Hinton et al., 2015).
### Overall Training Objective
The generation model is trained with a combination of an MLE loss and the imitation loss:
\[\begin{split}\mathcal{L}(\theta_{\text{op}})&=-\sum _{t=1}^{T}\underbrace{\log P_{\theta_{\text{op}}}(y_{t}|\hat{\mathbf{y}}_{<t},\bm {x})}_{\text{MLE}}\\ &+\alpha\sum_{t=1}^{T}\underbrace{D_{\text{KL}}(P_{\theta_{\text {tf}}}(y_{t}|\mathbf{y}_{<t},\mathbf{x})||P_{\theta_{\text{op}}}(y_{t}|\hat{\mathbf{y}}_{< t},\mathbf{x}))}_{\text{Imitation}}\end{split} \tag{8}\]
where \(\alpha\) is a hyper-parameter to control the relative weight and \(\theta_{\text{tf}}=\theta_{\text{op}}\).
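The full objective can be sketched as follows. This is an illustration under stated assumptions: `decoder(tgt, src)` is a stand-in for a Transformer decoder call returning per-position logits for predicting `gold` (input shifting, padding masks and label smoothing are omitted), and `dynamic_schedule` is the sketch shown in Section 3.1 above; none of this is the authors' released code.

```python
import torch
import torch.nn.functional as F

def dysi_loss(decoder, src, gold, alpha=0.5, beta=0.5):
    """Eq. (8): cross-entropy on the operative pass plus alpha * KL(P_tf || P_op)."""
    with torch.no_grad():                                    # expert pass, kept fixed
        tf_logits = decoder(gold, src)
    mixed = dynamic_schedule(gold, tf_logits, beta)          # \hat{y} from the scheduler
    op_logits = decoder(mixed, src)                          # operative (learner) pass

    mle = F.cross_entropy(op_logits.transpose(1, 2), gold)   # -log P_op(y_t | \hat{y}_<t, x)
    imitation = F.kl_div(F.log_softmax(op_logits, dim=-1),   # KL(P_tf || P_op), Eq. (7)
                         F.softmax(tf_logits, dim=-1),
                         reduction="batchmean")
    return mle + alpha * imitation
```

Since the expert and the learner share parameters (\(\theta_{\text{tf}}=\theta_{\text{op}}\)), the same decoder is simply called twice, with gradients flowing only through the operative pass.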
## 4 Experiments on Machine Translation
In this section, we evaluate our approach on MT to align with previous work (Liu et al., 2021a;b). We extend our
Figure 1: Illustration of Dynamic Scheduled Sampling with Imitation Loss (DySI). The teacher-forced decoder allows the computation of training accuracy that is directly used in our dynamic scheduler, and it also provides the expert distribution as a supervision signal to the operative decoder. The decoder parameters only get updated when it is in operative mode.
evaluation to the robustness of language models with a novel setup in §5.
### Experimental Settings
**Datasets & metric.** We evaluate our model on two standard neural machine translation (NMT) benchmarks: WMT'14 English-German (En\(\rightarrow\)De) and English-French (En\(\rightarrow\)Fr). The training datasets contain about 4.5M and 35M sentence pairs respectively. We use newstest2013 and newstest2014 as the validation and test sets respectively. The sentences are encoded with joint Byte-Pair Encoding (BPE) (Sennrich et al., 2016) with 40K operations. For performance measure, following previous work, we report the tokenized BLEU (Papineni et al., 2002). We also report other popular translation metrics, _e.g.,_ SacreBLEU (Post, 2018), in Table 7. We use the Transformer _big_(Vaswani et al., 2017) as our backbone NMT model (referred to simply as Transformer henceforth). The learning rate warms up to a peak of \(0.0008\) with \(4,000\) steps, and then decays with the inverse square-root schedule. The values of \(\alpha\) in Eq. 8 and \(\beta\) in Eq. 4 are set to \(0.5\) on both datasets. We use \(0.1\) label-smoothing in training and a beam size of \(5\) at inference. We train the models with DySI and teacher forcing for the same number of updates. Appendix A gives further details about the setup and hyper-parameters.
We also compare against other MLE-based approaches: DynamicConv (Wu et al., 2019), Error Correction (Song et al., 2021), SS-Steps (Liu et al., 2021) and Scheduled Sampling (Bengio et al., 2015). For SS-Steps, we run their publicly available code using the provided optimized parameters since that produced the best results. For Scheduled Sampling, we adopted the _Exponential_ and _Linear_ decay schemes from (Bengio et al., 2015) and tuned the hyper-parameters based on the validation set performance. Note that the scheduled sampling (Bengio et al., 2015) that was originally proposed for a recurrent architecture samples from the model generated tokens step by step, which is highly inefficient for training large models due to its inability to parallelize. The current paradigm, where the teacher-forced outputs are sampled as previous outputs, can be seen as an approximation and has been widely used in Transformer-based models (Duckworth et al., 2019; Mihaylova Martins, 2019; Liu et al., 2021; 2019).
### Translation Performance
We present the tokenized BLEU scores on WMT newstest2014 in Table 2. We can observe that our method, _i.e.,_ Transformer _big_ trained with DySI achieves 0.9 and 0.5 BLEU improvement on En\(\rightarrow\)De and En\(\rightarrow\)Fr, respectively, over the same model trained with standard teacher forcing. Our method also outperforms other MLE-based approaches that deal with exposure bias in NMT, such as Scheduled Sampling and SS-Steps. We also see that training with only Dynamic Scheduled Sampling performs better than or on par with Scheduled Sampling while requiring significantly less tuning, which demonstrates its contribution. We additionally present NMT performance in other popular metrics, such as detokenized SacreBLEU (Post, 2018) and BLEURT (Sellam et al., 2020) in Appendix B.
### Ablation Study
**Effect of \(\beta\) and \(\alpha\).** We conduct ablation studies for the two hyper-parameters in DySI, _i.e.,_\(\beta\) in Eq. 4 and \(\alpha\) in Eq. 8, to understand how they impact the performance. Intuitively, the larger \(\beta\) is, the more positions will be sampled and thus the operative decoder gets to see more of its own predictions during training. Table 1 shows that the performance on the validation set is generally robust to different values of \(\beta\) as long as it is larger than a certain value, _e.g.,_ 0.25. With a fairly small value of \(\beta\), _e.g.,_ 0, the model deteriorates to a standard teacher forcing as the scheduler will not sample any positions for using model-generated tokens. On the other hand, \(\alpha\) controls the extent to which the operative decoder should imitate the teacher-forced behavior in addition to the original one-hot target. When \(\alpha=0\), DySI simply becomes another variant of vanilla scheduled sampling. From Table 1, we observe that when \(\alpha\) is small, there is a clear gap between model performance and the best result on the validation set, which renders it necessary to include the imitation loss to further boost the performance.
We also study two different training initialization strategies in Appendix D.
### Analysis
As translation performance alone may not be sufficient to understand how DySI helps mitigate the exposure bias problem, we conduct further analysis to get more insights.
**Multiple reference translation.** We examine how the model performs against multiple semantically equivalent references. Particularly, we use the dataset from Ott et al. (2018), which contains 10 additional human translations for 500 sentences from the WMT'14 En\(\rightarrow\)De testset. We present the results in Table 3. Oracle score computes BLEU for every hypothesis _w.r.t._ its best matching reference and
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(\beta\) & 0.0 & 0.25 & 0.5 & 0.75 & 1.0 \\ \hline
**BLEU** & 26.70 & 27.05 & 27.14 & 27.11 & 27.06 \\ \hline \(\alpha\) & 0.0 & 0.25 & 0.5 & 0.75 & 1.0 \\ \hline
**BLEU** & 26.84 & 26.97 & 27.14 & 27.06 & 27.09 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ablation study of \(\alpha\) and \(\beta\) on WMT’14 En\(\rightarrow\)De development set when the other is set to the default 0.5.
averages it over all hypotheses. The oracle scores in Table 3 indicate that the best scores that the models can achieve _w.r.t._ multiple references are comparable. However, the higher corpus score, which measures the BLEU score with all the human references, means that our model has potentially produced more diverse translations, thus having higher coverage over different translations.
We conjecture that DySI prevents the model from being over-confident and makes the prediction distribution more spread out such that the model tends to use diverse words. To confirm this property, we compute the entropy of the model generation distribution over the vocabulary as \(-\sum_{w\in\mathcal{V}}P(y_{t}=w)\log P(y_{t}=w)\), and average it over all the decoding steps. The entropy values over the WMT'14 En\(\rightarrow\)De testset for our model and the baseline are \(2.22\) and \(1.79\), respectively, confirming our hypothesis.
Previous work Ott et al. (2018) points out that excessive spread of the generation distribution may be an indication of over-smoothing, which could happen due to the application of label smoothing. However, unlike the standard label smoothing, where all the classes get smoothed uniformly, the imitation loss in our approach allows the model to learn from the expert distribution through \(D_{\text{KL}}\). Learning from a soft target distribution can be seen as an **adaptive** version of label smoothing, which in turn keeps improving for better model regularization and calibration Muller et al. (2019).
**Robustness.** As exposure bias is closely related to generalization, we test if DySI can lead to improvement under a distribution shift. For this, we use the models trained on the WMT'14 dataset (news domain) to perform zero-shot translation on IWSLT'14 En\(\rightarrow\)De testset Cettolo et al. (2012), consisting of transcribed and translated TED talks (spoken language text). In addition, we use the WMT'19 MT Robustness dataset Michel and Neubig (2018) (En\(\rightarrow\)Fr) to investigate how DySI performs both with a domain shift and non-standard, noisy text. As shown in Table 4, consistent improvements on both tasks indicate that the model trained with DySI is able to deliver more robust performance compared to the baseline.
## 5 Towards Robust Text Generation
LMs are typically trained with an autoregressive generation objective with _teacher forcing_. It has been observed that even with large pre-trained LM, high frequency tokens largely dominate generated text Welleck et al. (2020); Holtzman et al. (2020); Lin et al. (2021). Repetition, both at a single token and at higher \(n\)-gram levels, is a well known problem in neural text generation. _Oversensitivity_ Jia and Liang (2017) is another issue with LMs, in which models produce significantly different outputs for very similar inputs, even when the changes preserve semantics Ribeiro et al. (2018). Another major problem in putting LMs in production is that they can produce racist, sexist and other kinds of toxic language Gehman et al. (2020); Cohen et al. (2022); Ouyang et al. (2022).
While previous work on exposure bias emphasizes on improving task-specific performance, such as BLEU for MT, we provide a new perspective of the problem, where we evaluate how mitigating exposure bias can improve the robustness of pre-trained LMs in terms of repetition, oversensitivity and toxicity.
### Robustness to Perturbations
We conduct perturbation experiments to test if training with DySI can produce a model that is more robust to repetition and oversensitivity errors. We fine-tune GPT-2 Radford et al. (2019) on the WikiText-103 Merity et al. (2017) training set with MLE loss (standard teacher forcing), Scheduled Sampling (SS) and DySI. We prompt the trained models with texts extracted from the WikiText-103
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Model** & **WMT’19 Rob.** & **IWSLT** \\ \hline Transformer & 37.6 & 29.2 \\ DySI & 38.3 & 29.8 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Zero-shot translation performance on **WMT’19 Robustness En\(\rightarrow\)Fr task and IWLST’14 En\(\rightarrow\)De test set.**
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Models** & **En\(\rightarrow\)De** & **En\(\rightarrow\)Fr** \\ \hline Transformer Ott et al. (2018) & 29.3 & 43.2 \\ DynamicConv Wu et al. (2019) & 29.7 & 43.2 \\ Error Correction Song et al. (2021) & 29.2 & - \\ SS-Steps Liu et al. (2021) & 29.6 & 42.8 \\
**Our Implementation** & & \\ Transformer & 29.2 & 42.9 \\ + Scheduled Sampling Bengio et al. (2015) & 29.5 & 43.0 \\ + Dynamic SS & 29.6 & 43.0 \\ + DySI & **30.1*** & **43.4*** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Tokenized BLEU scores on newstest2014 for WMT’14 En\(\rightarrow\)De and En\(\rightarrow\)Fr translation tasks. * denotes significantly better than Scheduled Sampling with \(p<0.005\).
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Model** & **Single Ref.** & \multicolumn{2}{c}{**Multiple Ref.**} \\ \cline{3-3} & **Corpus** & **Oracle** \\ \hline Transformer & 28.6 & 74.0 & **83.4** \\ DySI & **29.4** & **74.8** & **83.4** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Corpus BLEU and Oracle Sentence BLEU on WMT’14 En\(\rightarrow\)De test set with 10 additional references.
test set to perform the auto-completion task. We then perturb these input prompts in an effort to instigate the models to commit repetition errors.
We report Mauve(Pillutla et al., 2021) scores to compare the machine generated texts with human produced text. Mauve calculates the area under divergence curves for two text distributions and produces a score1, where a higher number indicates closer text distributions. Mauve has been shown to have high correlations with human judgments for open-ended text generation. We use the n-gram repetition ratio difference to evaluate the variations that the perturbations cause. The \(n\)-gram repetition ratio measures how unique the \(n\)-grams in a given text are, where a higher score indicates higher repetition and lower diversity. We report the difference between the \(n\)-gram repetition ratios of two texts for various \(n\), which indicates if a given text is more repetitive _w.r.t._ to another. A robust model should produce a diverse, yet consistent in repetition distribution, output even when the prompt is perturbed.
Footnote 1: We report scores scaled between 0-100 for readability.
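The \(n\)-gram repetition ratio is not given an explicit formula in the text; a common definition consistent with the description (higher means more repetition and lower diversity), together with the difference plotted in Figure 2, can be computed as follows (our sketch):

```python
def ngram_repetition_ratio(tokens, n):
    """1 - (#unique n-grams / #n-grams); 0 means all n-grams are distinct."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return 0.0 if not ngrams else 1.0 - len(set(ngrams)) / len(ngrams)

def repetition_ratio_difference(perturbed_output, original_output, n):
    """Repetition-ratio difference between the two generations (as in Figure 2)."""
    return (ngram_repetition_ratio(perturbed_output.split(), n)
            - ngram_repetition_ratio(original_output.split(), n))
```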
#### 5.1.1 Auto-completion
Fine-tuning LM.We fine-tune GPT-2 Medium on WikiText-103(Merity et al., 2017) with a maximum sequence length of 300 tokens and an initial learning rate of \(0.0005\). Each model is trained for a maximum of 35K iterations and evaluated based on the perplexity on the validation set after every 1K iterations. The perplexity scores on the corresponding test set for MLE, SS and DySI are
\begin{table}
\begin{tabular}{c|c|c} \hline \hline
**Perturbation** & **Perturbed Prompt** & **LM Generations** \\ \hline Last Word & By 2012, she was was & **MLE:** was, the Australian National **E**questrian **Champion, the **A**ustralian National **C**hampion \\ Repetition & was was was was was & DvSI: to competition were underway, was placed third \\ \hline \(n\)-gram & travel with the Doctor & **MLE:** and the Doctor starting with " The Bells **and the Doctor** **b**eginning with " The Bells **and the Doctor** **and the Doctor** **with "The Bells** **bySI:**". It was announced that the third series would be “Clara " \\ \hline \hline \end{tabular}
\end{table}
Table 5: Examples showing the robustness of GPT-2 trained with DySI to various perturbations. DvSI is significantly robust to repetition, even in cases of extreme perturbation that causes the baseline to fail irreparably. Text highlighted in red indicates changes due to perturbation, and orange indicates repetition induced by perturbation. (condensed for brevity; best viewed in color).
Figure 2: **First row** shows results for Last Word Repetition. The x-axis plots the number of times \(m=3,5,7,10\) that the last word is repeated. **Second row** shows results for \(n\)-gram Repetition. The x-axis plots the various \(n\)-gram sizes that we repeat.
13.4, 14.3 and 13.9, respectively.
**Prompts for auto-completion.** We use the test set from WikiText-103, which is extracted from a set of verified good and featured articles on Wikipedia. We extract the paragraphs from the document and use the first 50 words as the input prompt to the trained language models. The models need to generate a continuation of 200 BPE tokens based on the prompt. We apply nucleus sampling (Holtzman et al., 2020) (\(p=0.8\)) as the decoding strategy since it leads to high-quality and human-like generated text. In addition, we find that text generated by DySI is closer to human-written text (see Appendix G).
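A minimal version of this auto-completion setup using the Hugging Face `transformers` API, as one possible implementation (the checkpoint name and prompt handling are placeholders, not the authors' exact pipeline):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2-medium")        # stands in for the fine-tuned LMs
model = AutoModelForCausalLM.from_pretrained("gpt2-medium")

def complete(paragraph, max_new_tokens=200, top_p=0.8):
    """Prompt with the first 50 words and continue with nucleus sampling (p = 0.8)."""
    prompt = " ".join(paragraph.split()[:50])
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, do_sample=True, top_p=top_p,
                         max_new_tokens=max_new_tokens,
                         pad_token_id=tok.eos_token_id)
    return tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```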
**Comparison to human.** We compare the texts generated by the baseline MLE and the DySI model to the original human continuations using Mauve. We sample continuation text three times from each model and report the average and standard deviation. MLE achieves a Mauve score of \(71.88\pm 9.48\), SS achieves a Mauve score of \(72.46\pm 2.76\), while DySI achieves a score of \(73.08\pm 3.64\). DySI has a higher Mauve score, showing that it produces text that is closer to human-written text. In addition, the baseline MLE model has a significantly higher standard deviation compared to SS and DySI, showing that models trained with methods that alleviate exposure bias are also more consistent across multiple samplings.
#### 5.1.2 Perturbation Experiments
We use various strategies to perturb the prompts and compare the \(n\)-gram repetition ratio differences of model outputs for the perturbed prompts to the model outputs for the original prompts. For these experiments, we sample each model twice for both the original and perturbed prompts, and report the average of all 4 combinations. We include a comparison of their Mauve scores in Appendix G.
**Last word repetition.** To test the robustness of the models to repetition, we repeat the last word of the prompt \(m=3,5,7,10\) times, and plot the difference in 1, 2, and 3-gram repetition ratios of the generated text with respect to the text generated for the original prompt. From Figure 2 (row 1), we see that the repetition ratios increase significantly with \(m\) for the baseline MLE and the SS models. However, DySI is much more robust. It produces a significantly lower repetition ratio difference.
\(n\)**-gram repetition.** In this setup, we repeat the last \(n\) words of the prompt to test whether DySI is also robust to repetitions involving longer sequences of words instead of only a single repeated word. We experiment with repeating the last \(n=3,5,7,10\) words and plot the 1, 2, and 3-gram repetition ratio difference of the generated text _w.r.t._ the text generated for the original prompt. Interestingly, we see in Figure 2 (row 2) that repeating a longer sequence of words leads to an increase in the repetition ratios for higher order \(n\)-grams for the MLE baseline. In contrast to both MLE and SS, DySI maintains a low repetition ratio difference.
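Both perturbations are simple string operations; the sketch below reflects our reading of the description (whether the original occurrence counts toward \(m\) is not specified, so extra copies are appended here):

```python
def repeat_last_word(prompt, m):
    """Last-word repetition: append m extra copies of the final word (m = 3, 5, 7, 10)."""
    words = prompt.split()
    return " ".join(words + [words[-1]] * m)

def repeat_last_ngram(prompt, n):
    """n-gram repetition: repeat the last n words of the prompt (n = 3, 5, 7, 10)."""
    words = prompt.split()
    return " ".join(words + words[-n:])
```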
Table 5 shows examples of the outputs generated by the baseline MLE and DySI models for various perturbed prompts. DySI produces reasonable outputs even with extreme perturbations, and is remarkably robust to repetition perturbations. He et al. (2021) show that LMs trained with teacher forcing have the ability to self-recover. Our experiments demonstrate that DySI can significantly enhance the model's self-recovery ability from erroneous generation, especially for repetition errors. We also conduct experiments to trigger oversensitivity by perturbing prompts through word replacement. The full experiment setup and results are given in Appendix G. Overall, we find that LMs are generally robust to this, with both SS and DySI doing better than MLE.
### Robustness to Toxic Generation
For this experiment, we use the same models that are fine-tuned on WikiText-103 with different training objectives as stated in §5.1.1. In order to examine the tendency to generate toxic text, we use the RealToxicityPrompts dataset (Gehman et al., 2020). Similar to the previous experiment, given the prompts, the models are used to generate a continuation of 200 BPE tokens with nucleus sampling (\(p\)=0.8). To compensate for the effect of random sampling, we sample 10 texts from the LMs for each given prompt. Then, we use PerspectiveAPI2 to evaluate the toxicity of the generated text. Specifically, we follow Qian et al. (2022) in using the "challenging" subset from RealToxicityPrompts, which consists of \(203\) prompts (toxicity \(<0.5\)) that constantly lead to toxic LM output. Reducing exposure bias makes the models less likely to cater to the toxic context, thus producing less toxic continuations.
Footnote 2: [https://www.perspectiveapi.com/](https://www.perspectiveapi.com/)
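A sketch of the evaluation loop; here `generate_continuation` stands in for nucleus sampling (p = 0.8, 200 BPE tokens) from a fine-tuned LM and `score_toxicity` for the Perspective API call, neither of which is spelled out in the text:

```python
def mean_toxicity(prompts, generate_continuation, score_toxicity, samples_per_prompt=10):
    """Average toxicity over 10 sampled continuations per challenging prompt."""
    scores = [score_toxicity(generate_continuation(p))
              for p in prompts for _ in range(samples_per_prompt)]
    return sum(scores) / len(scores)
```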
We present the results in Table 6. We observe that neither SS nor DS can improve the robustness of LMs to the toxic prompts over the MLE baseline. However, when given the same set of toxic prompts, the model trained with DySI generates text with notably lower toxicity scores. The results demonstrate that even without a specific approach designed for detoxification, a better training objective that re
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & **MLE** & **SS** & **DS** & **DySI** \\ \hline
**Toxicity** & \(0.358_{\pm 0.012}\) & \(0.359_{\pm 0.011}\) & \(0.356_{\pm 0.013}\) & \(0.286_{\pm 0.010}\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: The mean and standard deviation of toxicity scores of generated text for different approaches. **SS** and **DS** denote the original Scheduled Sampling and Dynamic Scheduled Sampling, respectively.
duces exposure bias can still yield a model that is more robust to toxic inputs.
## 6 Conclusion
We have introduced Dynamic Scheduled Sampling with Imitation Loss (DySI) to combat one of the most well-known problems in autoregressive text generation, exposure bias. It consists of a dynamic sampling scheduler, which keeps track of training progress based on the training accuracy, and an imitation loss, which enables the model to learn from the expert's behavior. DySI achieves consistent improvement on several translation tasks and experiments. Furthermore, extensive analysis demonstrates that it can yield a significantly more robust text generation model.
|
2309.08237 | Morse superpotentials and blow-ups of surfaces | We study the Landau-Ginzburg mirror of toric/non-toric blowups of (possibly
non-Fano) toric surfaces arising from SYZ mirror symmetry. Through the
framework of tropical geometry, we provide an effective method for identifying
the precise locations of critical points of the superpotential, and further
show their non-degeneracy for generic parameters. Moreover, we prove that the
number of geometric critical points equals the rank of cohomology of the
surface, which leads to its closed-string mirror symmetry due to Bayer's
earlier result. | Hansol Hong, Hyunbin Kim | 2023-09-15T08:18:40Z | http://arxiv.org/abs/2309.08237v1 | # Morse Superpotentials and Blowups of Surfaces
###### Abstract.
We study the Landau-Ginzburg mirror of toric/non-toric blowups of (possibly non-Fano) toric surfaces arising from SYZ mirror symmetry. Through the framework of tropical geometry, we provide an effective method for identifying the precise locations of critical points of the superpotential, and further show their non-degeneracy for generic parameters. Moreover, we prove that the number of geometric critical points equals the rank of cohomology of the surface, which leads to its closed-string mirror symmetry due to Bayer's earlier result.
###### Contents
* 1 Introduction
* 2 The SYZ mirror of a log Calabi-Yau surface
* 3 Critical points of Laurent Polynomials
* 4 Singularities of mirror Landau-Ginzburg potentials
* 5 Homological Mirror Symmetry
* A Estimates for the second-order expansion of W
## 1. Introduction
There have been extensive studies on the effects of birational changes of the space on various quantum invariants. In particular, it appears to be a general phenomenon that a blowup at a point for a generic parameter introduces an additional 'independent' factor to the existing invariants prior to the blowup, such as a semi-simple field factor to the quantum cohomology [1] or a (semi-)orthogonal summand consisting of a single object in a semi-orthogonal decomposition of the derived category [2], [3], etc.
In this paper, we investigate how blowups affect Lagrangian Floer theory of SYZ fibers in a class of complex surfaces, while establishing related mirror symmetry statements. Our focus centers on complex surfaces obtained via blowups on toric surfaces at distinct points, allowing for blowup centers in generic positions of the toric divisor (not fixed by the torus action). Such surfaces are, of course, no longer toric, and often referred to as the non-toric blowups of toric surfaces. They are important building blocks of log Calabi-Yau surfaces as they serve as toric models [1], and recently their mirror symmetry questions were intensively studied, for e.g., by [10]. Amongst them, the approach of [1] pursues the Strominger-Yau-Zaslow (SYZ in short) aspects of the mirror construction of log Calabi-Yau surfaces in the realm of Lagrangian Floer theory, which will be taken as the basic geometric setup of this paper.
More specifically, [1] demonstrates that the complement of an anticanonical divisor in a non-toric blowup \(X\) admits a special Lagrangian fibration away from a small neighborhood of the nodal fibers. The presence of singular fibers gives rise to complicated
of the blowup centers. The critical points in this case can be easily counted using the classical theorem of Kushnirenko (see Theorem 3.2). In [1, Section 7], the locations of critical points of \(W\) were calculated through direct computation for the degree \(5\) del Pezzo surface, which is one of the simplest Fano cases involving non-toric blowups.
Our main theorem is the following:
**Theorem I** (Theorem 4.22).: _Let \(X\) be a non-toric blowup of a toric surface \(X_{\Sigma}\), and let \(W^{\mathfrak{b}}\) be the bulk-deformed potential of \(X\), defined on the \(SYZ\) mirror \(\tilde{Y}\) of the anticanonical divisor complement \(Y:=X\backslash D\). Then for a generic Kahler form and \(\mathfrak{b}\in H^{even}(X;\Lambda_{+})\), we have_
\[\operatorname{Jac}(W^{\mathfrak{b}})=QH^{\ast}_{\mathfrak{b}}(X)\]
_where the Jacobian ideal ring on left hand side is semi-simple, or equivalently, \(W^{\mathfrak{b}}\) is Morse. Here \(\mathfrak{b}\) is required to be the pull-back of a torus-invariant cycle in \(X_{\Sigma}\) when \(X\) is beyond semi-Fano._
One of the novel points is that we cover non-Fano cases, where the complete formula of the potential is not available due to nontrivial sphere-bubble contributions. In Proposition 4.2, we carefully estimate the energy of such bubbled-off contributions, and show that they cannot serve as (symplectic-)energy minimizers at a geometric critical point of \(W\). This observation is crucial since, under certain non-degeneracy conditions, the energy minimizers almost completely determine the critical point information of the potential. In the same spirit, for a generic Kahler form, the terms with nontrivial bulk-parameters fall within the higher-energy part of the potential, and do not influence the critical point calculation. We remark that for some special Kahler forms, however, the locations of critical points can depend on bulk-parameters, resulting in a _continuum of critical points_, as pointed out in [11]. See Subsection 4.6 for a detailed example.
When non-degeneracy of leading terms is not guaranteed, the analysis becomes much harder, and handling this issue takes up the most delicate point of our argument. In this case, the leading (first order) term calculation is not enough to determine the critical points, and we have to further expand the potential to the second (or higher) order and carry out a complicated energy estimate on the expansion to find out critical points that are stable under the blowups. A detailed proof is given in Appendix A.
The remaining part of the proof is the tropical geometric visualization of the critical behavior of the leading terms of \(W\) followed by an induction on the energy of holomorphic disks, an idea that essentially goes back to [11, 12, 13] and [14]. The advantage of tropical geometry, under non-degeneracy conditions, is that it confines all critical points to vertices of the tropicalization of the leading term potential (Corollary 4.3). At these vertices, the problem reduces to applying the Kushnirenko Theorem to a handful of monomials. If non-degeneracy assumptions fail, critical points can occur over edges of the tropicalization, which requires a further analysis of the next order terms as mentioned above. In either case, the tropicalization can be easily calculated through a simple combinatorial procedure on the Newton polytope.
The effect of an individual point-blowup can be singled out from **Theorem I**. In Sections 4.3 and 4.4, we carefully extract and keep track of geometric critical points as we perform toric/non-toric blowups, while demonstrating that all such critical points are Morse critical points. In particular, the following is a parallel statement (upon mirror symmetry) to the result of Bayer [1, Theorem 3.1.1] in our particular geometric situation. The toric blowup case below is particularly related to [10], yet we allow \(X\) to be non-Fano, and hence \(W\) can be significantly different from the Hori-Vafa potential, accordingly.
**Theorem II, III** (Corollary 4.13, Proposition 4.16).: _Let \((X_{\Sigma},D_{\Sigma})\) be a toric surface with the mirror potential \(W_{\Sigma}\). Consider the toric surface \(X_{\widetilde{\Sigma}}\) obtained by taking a sequence of toric blowups of \(X_{\Sigma}\), and the surface \(X\) obtained by a sequence of non-toric blowups of \(X_{\Sigma}\). Denote by \(W_{\widetilde{\Sigma}}\) and \(W\) the mirror potentials of \(X_{\widetilde{\Sigma}}\) and \(X\), respectively. Then generically:_
1. _There exists_ \(r>0\) _depending on_ \(\omega\) _such that if the sizes of exceptional divisors are smaller than_ \(r\)_, then_ \(W_{\widetilde{\Sigma}}\) _has as many new non-degenerate critical points as the number of exceptional divisors for generic parameters. Moreover, every critical point of_ \(W_{\Sigma}\) _is extended to that of_ \(W_{\widetilde{\Sigma}}\) _in a valuation-preserving manner. If_ \(W_{\Sigma}\) _is Morse, then_ \(W_{\widetilde{\Sigma}}\) _is also Morse._
2. _Each non-toric blowup point gives rise to a unique new geometric critical point of_ \(W\) _which is non-degenerate. If the blowup point lies in_ \(D_{\Sigma,j+1}\)_, then the corresponding new critical point is located near the corner_ \(D_{\Sigma,j}\cap D_{\Sigma,j+1}\)_, while every critical point of_ \(W_{\Sigma}\) _is extended to that of_ \(W\) _in a valuation-preserving manner. If_ \(W_{\Sigma}\) _is Morse, then_ \(W\) _is also Morse._
_Analogous statements of (1) and (2) also hold for the bulk-deformed potential \(W^{\mathfrak{b}}\) (in place of \(W_{\widetilde{\Sigma}}\) and \(W\), respectively) for a generic bulk parameter \(\mathfrak{b}\) given as in_ **Theorem I.**__
The first part of the statement concerns only toric geometry, and can be shown alternatively by appealing to the toric mirror symmetry of [10] and related facts about the quantum cohomology of \(X_{\Sigma}\). Notice that the above theorem implies that the mirror potential \(W\) (and \(W^{\mathfrak{b}}\)) of \(X\) is always Morse for generic parameters.
In view of the tropicalization of \(W\) (or that of its leading order terms), the new critical point lies over the vertex that is created when one of the unbounded edges of the tropicalization of \(W_{\Sigma}\) branches into multiple edges as in Figure 12.
Finally, we explore homological mirror symmetry aspects of non-toric blowups of toric surfaces. Due to the lack of a generation result for the Fukaya category in our geometric context, we consider the sub-Fukaya category \(\mathcal{F}_{0}(X)\) generated by Lagrangian torus fibers of the SYZ fibration on \(X\). Conjecturally, critical fibers from non-toric blowups should generate the corresponding eigenvalue components (with respect to \(c_{1}(X)\star-\)) of the genuine Fukaya category, based on [21, Corollary 1.12] which is currently only valid under the monotone assumption. In fact, we shall see that critical points of \(W\) have mutually distinct critical values, generically.
\(\mathcal{F}_{0}(X)\) admits the following simple structural decomposition. First of all, we see that the torus fibers corresponding to critical points of \(W\) have nontrivial Floer cohomologies by the same argument as in [10, 10]. Therefore these fibers (together with suitable \(\Lambda_{U}\)-local systems) form nontrivial objects in \(\mathcal{F}_{0}(X)\). In particular, these fibers cannot be displaced from themselves by any Hamiltonian diffeomorphism, and we can explicitly locate all non-displaceable fibers from the tropicalization of \(W\) (Section 3). We additionally show in Lemma 5.2 that two different critical fibers have trivial Floer cohomology between them (this fact must be already well-known), and that the endomorphism algebra of a critical fiber is quasi-isomorphic to the Clifford algebra associated with the Hessian of \(W\) at the corresponding point. The latter part uses a similar argument to [10].
**Remark 1.1**.: _For the purpose of locating a non-displaceable fiber, it is enough to find a single bulk-parameter \(\mathfrak{b}\) that gives a nontrivial Floer cohomology, rather than generic \(\mathfrak{b}\). For instance, this is the reason why no special treatment is needed in [10, Theorem 4.7] for the degenerate case whereas we deal with it separately in Proposition 4.14._
In view of **Theorem II, III**, we obtain a new object of \(\mathcal{F}_{0}(X)\) each time we perform a non-toric blowup, and this new Lagrangian brane appears at a specific location. By comparing local models, we believe this object corresponds to what is called the _exceptional brane_ in [20]. Interestingly, depending on the location of the blowup center, the new brane can be supported on the nodal fiber (or its small perturbation), regarded as an immersed Lagrangian \(S^{2}\) boundary-deformed by immersed generators, which reveals the necessity of including immersed branes in the Fukaya category. See 5.3 for more details.
On the other hand, the mirror \(B\)-model category has exactly the same description as above. By **Theorem I**, we have that the singularity category of \(W\) decomposes into skyscraper sheaves, or more precisely into their images in the quotient category by perfect complexes. It is well-known that their morphism spaces in the singularity category show the same feature (see, for e.g., [11], [12]), which leads to its equivalence to \(\mathcal{F}_{0}(X)\).
**Theorem 1.2** (Theorem 5.5).: _Let \(X_{\Sigma}\) be a toric surface, and \(X\) a non-toric blowup of \(X_{\Sigma}\). Suppose that every exceptional divisor from non-toric blowups has small enough symplectic volume. If \(\mathcal{F}_{0}(X)\) denotes the sub-Fukaya category generated by torus fibers in \(X\), then there is an equivalence between \(D^{b}\mathcal{F}_{0}(X)\) and \(\oplus_{\lambda}D^{b}_{sing}(W^{-1}(\lambda))(\cong MF(W))\) for generic Kahler forms._
_Both categories admit an orthogonal decomposition with respect to critical (potential) values \(\lambda\), and each summand in the decomposition is generated by the skyscraper sheaf at the unique critical point whose value is \(\lambda\), or by its corresponding \(SYZ\) fiber._
The other direction of homological mirror symmetry was studied in the work of Hacking-Keating [13].
Along the way, we develop a technique for analyzing critical points of a Laurent series \(W(z)\) over \(\Lambda\), which involves two steps: (i) localizing the series at a certain valuation level \(val(z)=c\) by classifying possible energy minimizing terms, and (ii) inductively solving the critical point equation order-by-order. For step (ii), we may need to look at the second order terms (or higher, depending on the non-degeneracy of first order terms), which is a new feature that has not been addressed in the existing literature. We believe that this method can be applied in more general situations including the mirror potentials induced from almost-toric fibrations as well as higher dimensional toric manifolds, which will be left for future investigations.
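As a toy one-variable illustration of these two steps (much simpler than the surface potentials treated in this paper), consider \(W(z)=z+(T^{\lambda}+T^{\lambda+\delta})z^{-1}\) over \(\Lambda\) with \(\lambda,\delta>0\). The critical point equation

\[\partial_{z}W=1-(T^{\lambda}+T^{\lambda+\delta})z^{-2}=0,\qquad\text{i.e.,}\quad z^{2}=T^{\lambda}(1+T^{\delta}),\]

forces \(val(z)=\lambda/2\) in step (i), the only valuation level at which the two energy-minimizing monomials \(z\) and \(T^{\lambda}z^{-1}\) can balance, and step (ii) then yields the solutions order-by-order in the exponent of \(T\):

\[z=\pm T^{\lambda/2}\left(1+\tfrac{1}{2}T^{\delta}-\tfrac{1}{8}T^{2\delta}+\cdots\right).\]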
The paper is organized as follows. In Section 2, SYZ mirror symmetry of (non-toric) blowup of toric surfaces will be reviewed. Along the way, basic geometric setup will be provided. In Section 3, we look into the tropicalization of a given Laurent polynomial \(W\) over \(\Lambda\) and its connection with critical points of \(W\). In Section 4, we analyze how critical points of the mirror potential change under blowups and prove our main theorem. Finally, homological mirror symmetry aspects of blowups will be studied in Section 5.
### Notations
Throughout we use the following notations:
\[\Lambda :=\left\{\sum_{i=0}^{\infty}c_{i}T^{\lambda_{i}}:c_{i}\in\mathbf{C},\,\lim_{i\to\infty}\lambda_{i}=\infty\right\},\] \[\Lambda_{0} :=\left\{\sum_{i=0}^{\infty}c_{i}T^{\lambda_{i}}\in\Lambda: \lambda_{i}\geq 0\right\},\] \[\Lambda_{+} :=\left\{\sum_{i=0}^{\infty}c_{i}T^{\lambda_{i}}\in\Lambda: \lambda_{i}>0\right\},\] \[\Lambda_{U} :=\mathbf{C}^{*}\oplus\Lambda_{+}.\]
We define \(\mathit{val}:\Lambda\to\mathbb{R}\) by
\[\mathit{val}:\sum_{i=0}^{\infty}c_{i}T^{\lambda_{i}}\mapsto\min_{i}\{\lambda_ {i}:i=0,1,2,\cdots\}.\]
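For instance, \(\mathit{val}(5T^{1/3}-T+7T^{2})=1/3\), while \(\mathit{val}(c)=0\) for \(c\in\mathbf{C}^{*}\). Applying \(\mathit{val}\) coordinatewise to \((\Lambda^{\times})^{n}\), a monomial \(\alpha z^{v}\) with \(\alpha\in\Lambda^{\times}\) and \(v\in\mathbb{Z}^{n}\) satisfies

\[\mathit{val}(\alpha z^{v})=\mathit{val}(\alpha)+\langle v,\mathit{val}(z)\rangle,\]

which is the quantity governing the tropicalizations considered in Section 3.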
For a given Laurent polynomial (or series) \(W=\sum_{v\in\mathbb{Z}^{n}}\alpha_{v}z^{v}\), we write
\[\operatorname{supp}W:=\{\,v\in\mathbb{Z}^{n}\,\mid\,\alpha_{v} \neq 0\,\},\] \[\Delta_{W}:=\text{the Newton polygon of }W.\]
For a toric manifold \(X_{\Sigma}\) constructed from a fan \(\Sigma\), we denote the generators of \(1\)-cones of \(\Sigma\) by
\[\nu_{1},\cdots,\nu_{N}\]
and their corresponding toric divisors by
\[D_{\Sigma}:=D_{\Sigma,1}\cup\cdots\cup D_{\Sigma,N}.\]
We write \(\Delta_{\Sigma}\) for its moment polytope. Finally, for a disk class \(\beta\in H_{2}(X_{\Sigma},L)\), we define
\[\delta(\beta):=\omega(\beta)\text{ for }L\text{ the moment fiber over the origin of the moment polytope.}\]
## Acknowledgement
We thank Yoosik Kim, Yu-Shen Lin, Matt Young, Arend Bayer for valuable discussions. The work of the first named author is supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1C1C1A01008261 and No.2020R1A5A1016126).
## 2. The SYZ mirror of a log Calabi-Yau surface
Our main object is to study the singularity information of the Laurent series that emerges as the mirror of a non-toric blowup of toric surfaces. Non-toric blowup is a crucial step in the toric model of a log Calabi-Yau surface, which enables us to understand its geometric structure through toric geometry. It is known that any log Calabi-Yau surface is isomorphic to some non-toric blowup, with potential modifications of a boundary (anticanonical) divisor. This allows us to follow Auroux's program [1] and construct a special Lagrangian fibration on the anticanonical divisor complement of the surface, where the associated holomorphic disk counting leads to the LG model mirror to the surface. In this section, we will review the construction of this LG mirror. In particular, the SYZ mirror can be calculated entirely by tropical geometric terms due to [1].
### Lagrangian Floer theory
We begin with the more general situation of a Lagrangian torus fibration. Consider a special Lagrangian fibration \(\varphi:X\backslash D\to B\) on the complement \(X\backslash D\) of an anticanonical divisor \(D\subset X\). Then \(B\) admits two distinguished systems of coordinates as follows. The complex affine coordinates on \(B\) are given by
\[u_{i}(u):=\int_{C_{i}}\operatorname{Im}\Omega\qquad i=1,\cdots,n\]
at \(u\in B\), where \(C_{i}\) is an \(n\)-dimensional chain swept out by \(e_{i}\) along a path from the fixed reference point \(u_{0}\) in \(B\) to \(u\), and \(\{e_{1},\cdots,e_{n}\}\) is a chosen basis of \(H_{n-1}(L_{u};\mathbb{Z})=H^{1}(L_{u};\mathbb{Z})\). On the other hand, the symplectic affine coordinates are given by
\[x_{i}(u)=\int_{A_{i}}\omega\qquad i=1,\cdots,n \tag{2.1}\]
where \(A_{i}\) is a cylinder analogously obtained from a chosen basis \(\{f_{1},\cdots,f_{n}\}\) of \(H_{1}(L_{u};\mathbb{Z})\). For convenience, we assume \(\{f_{i}\}\) and \(\{e_{j}\}\) are dual to each other, \((f_{i},e_{j})=\delta_{ij}\).
Following [1], the SYZ mirror of \(X\) is given by first taking the dual torus fibration \(\tilde{Y}^{\mathbb{C}}\) of \(\varphi\) and equipping it with a Laurent series \(W^{\mathbb{C}}\) called the _potential_, determined by the count of holomorphic disks that intersect \(D\). We identify \(\tilde{Y}^{\mathbb{C}}\) as
\[\tilde{Y}^{\mathbb{C}}=\{(L_{u}:=\varphi^{-1}(u),\nabla):u\in B,\nabla\in \operatorname{Hom}(H_{1}(L_{u}),\operatorname{U}(1))\}.\]
The potential \(W^{\mathbb{C}}:\tilde{Y}^{\mathbb{C}}\to\mathbb{C}\) can then be written as
\[W^{\mathbb{C}}(L_{u},\nabla)=\sum_{\beta\in\pi_{2}(X,L_{u}),\mu(\beta)=2}N_{ \beta}e^{-\int_{\beta}\omega}hol_{\partial\beta}\nabla, \tag{2.2}\]
where \(N_{\beta}\) is the number of holomorphic disks bounding \(L_{u}\) in class \(\beta\) (passing through a generic point of \(L_{u}\)). Assuming the Fano condition, \(N_{\beta}\) counts disks intersecting \(D\) exactly once, and (2.2) turns out to be a finite sum. In general, one needs to introduce the non-Archimedean valuation ring \(\Lambda\), and substitute \(T=e^{-1}\), i.e.,
\[W(L_{u},\nabla)=\sum_{\beta\in\pi_{2}(X,L_{u}),\mu(\beta)=2}N_{\beta}T^{\int _{\beta}\omega}hol_{\partial\beta}\nabla, \tag{2.3}\]
which always converges in the \(T\)-adic topology. Introducing local coordinates
\[z_{i}=T^{x_{i}}hol_{f_{i}}\nabla, \tag{2.4}\]
the potential \(W\) can be written as
\[W(z_{1},\cdots,z_{n})=\sum N_{\beta}T^{\int_{A_{\partial\beta}}\omega}z^{ \beta},\]
where \(z^{\gamma}:=z_{1}^{(\gamma,e_{1})}\cdots z_{n}^{(\gamma,e_{n})}\) for \(\gamma\in H_{1}(L_{u};\mathbb{Z})\), and \(T^{\int_{A_{\partial\beta}}\omega}\) is the flux between \(L_{u_{0}}\) and \(L_{u}\) defined similarly to (2.1).
Alternatively, one can obtain \(W\) via family Floer theory of the SYZ fibration. In this case, \(W\) is constructed as a gluing of the fiberwise Lagrangian Floer potentials \(W_{u}\) defined implicitly by \(\sum_{k}m_{k}(b,\cdots,b)=W_{u}(b)\cdot[L_{u}]\). Here, \(b=\sum y_{i}e_{i}\) for a chosen basis \(\{e_{i}\}\) of \(H^{1}(L,\mathbb{Z})\), and the \(y_{i}(\in\Lambda_{0})\) are subject to the weak unobstructedness condition. This is related to the previous formulation (2.2) by
\[W_{u}(\underline{z}_{1},\cdots,\underline{z}_{n})=W(L_{u},\nabla^{(\underline {z}_{1},\cdots,\underline{z}_{n})}) \tag{2.5}\]
for \((\underline{z}_{1},\cdots,\underline{z}_{n}):=(e^{y_{1}},\cdots,e^{y_{n}})\in(\Lambda_{U})^{n}\) where \(\nabla^{(\underline{z}_{1},\cdots,\underline{z}_{n})}\in\operatorname{Hom}(H_{1}(L_{u}),\Lambda_{U})\) is a flat connection having \(\underline{z}_{i}\) as a holonomy along the Poincare dual of \(e_{i}\) (\(W\) in (2.3) is extended to allow \(\Lambda_{U}\)-connections, which does not create any problem).
In the family Floer perspective, the mirror \(\tilde{Y}^{\mathbb{C}}\) should be replaced by the rigid analytic variety \(\tilde{Y}\). It is still fibered over \(B\), but with the \(T^{n}\)-fibers in \(\tilde{Y}^{\mathbb{C}}\) replaced by \((\Lambda_{U})^{n}\). The previous complex coordinates are extended so that \((L_{u},\nabla^{(\underline{z}_{1},\cdots,\underline{z}_{n})})\) has coordinates on \(\tilde{Y}\) given as \(z_{i}=T^{x_{i}(u)}\underline{z}_{i}\). We will denote the resulting fibration (SYZ-dual to \(X\backslash D\to B\)) by \(\tilde{\varphi}:\tilde{Y}\to B\). We will mostly write \(val\) for \(\tilde{\varphi}\) in the main application, since \(\tilde{\varphi}\) can be identified with the restriction of the map \(val:(\Lambda^{\times})^{n}\to\mathbb{R}^{n}\) when written in terms of the coordinates \(z_{i}\). See [20] for a detailed construction of the rigid analytic mirror in the realm of family Floer theory.
**Example 2.1**.: _Let \(X_{\Sigma}\) be an \(n\)-dimensional toric Fano manifold with the toric fan \(\Sigma\) whose primitive rays are generated by integral vectors \(\nu_{1},\cdots,\nu_{N}\), and \(\Delta_{\Sigma}\) the moment polytope of \(X_{\Sigma}\) (which determines the Kahler form on \(X_{\Sigma}\)) given as_
\[\langle x,\nu_{i}\rangle\geqslant-\lambda_{i}\quad i=1,\cdots,N.\]
_For later use, we denote by \(D_{\Sigma,i}\) the (irreducible) toric divisor associated with \(\nu_{i}\). In this case, the mirror \(\tilde{Y}\) can be identified as a subset of \(\Lambda^{n}\) consisting of elements whose valuations lie in the interior \(B\) of \(\Delta_{\Sigma}\), and the dual fibration can be identified with \(val:\tilde{Y}\to B\)._
_There is a one-to-one correspondence between the free \(\mathbb{Z}\)-module generated by \(\{\nu_{1},\cdots,\nu_{N}\}\) and \(H_{2}(X_{\Sigma},L)\) for \(L\) a toric fiber, and hence we have the corresponding basis \(\{\beta_{\nu_{1}},\cdots,\beta_{\nu_{N}}\}\) of \(H_{2}(X_{\Sigma},L)\). Their boundaries are precisely given as \(\nu_{1},\cdots,\nu_{N}\) in \(H_{1}(L)\). The classification result of [1] tells us that each \(\beta_{\nu_{i}}\) can be represented by a Maslov index \(2\) holomorphic disk intersecting \(D_{\Sigma,i}\) exactly once. Consequently, each primitive ray (hence each toric divisor) contributes a monomial \(z^{\partial\beta_{\nu_{i}}}=z^{\nu_{i}}\) to the potential \(W_{\Sigma}\) on \(\tilde{Y}\) according to the classification of Maslov 2 disks. (Here, \(\partial\beta_{\nu_{i}}\) is viewed as a class in \(H_{1}(L,\mathbb{Z})\) for a toric fiber \(L\).) These disks are usually called the **basic disks**. If \(X_{\Sigma}\) is semi-Fano, then there can be additional sphere bubble contributions [13], and not much is known about the precise computation of the potential beyond this case._
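For instance, for \(X_{\Sigma}=\mathbb{P}^{2}\) with \(\nu_{1}=(1,0)\), \(\nu_{2}=(0,1)\), \(\nu_{3}=(-1,-1)\) and \(\lambda_{1}=\lambda_{2}=0\), \(\lambda_{3}=\lambda\), the three basic disk classes contribute one monomial \(T^{\lambda_{i}}z^{\nu_{i}}\) each in the coordinates (2.4), recovering the familiar Hori-Vafa type potential

\[W_{\Sigma}(z_{1},z_{2})=z_{1}+z_{2}+\frac{T^{\lambda}}{z_{1}z_{2}}\]

on \(val^{-1}(B)\), where \(B\) is the interior of the triangle \(\{x_{1}\geqslant 0,\;x_{2}\geqslant 0,\;x_{1}+x_{2}\leqslant\lambda\}\).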
In the presence of singular fibers, the count \(N_{\beta}\) in (2.2) shows a certain discontinuity, which results in the wall-crossing of \(W(L_{u},\nabla)\). In our main applications below, walls are given as the union of affine lines or rays in the base \(B\) with respect to the complex affine structure, and \(N_{\beta}\) remains constant as long as \(u\) does not go across one of these walls. When going across the wall, \(W\) is changed by a certain cluster-type transformation whose rough shape appears in (2.6). In general, a wall appears as an affine line segment in \(B\) of an integral slope \(\gamma\) such that if \(u\) is on this wall, \(L_{u}\) bounds Maslov \(0\) disks whose boundary class is \(\gamma\). (More precisely, the algebraic count of such disks is nonzero for \(L_{u}\).)
In the family Floer perspective, the discontinuity of \(W\) accounts for gluing different local charts of the mirror rigid analytic variety by nontrivial coordinate transitions which link the local \(W_{u}\)'s. The transition map can be actually computed by comparing Fukaya \(A_{\infty}\)-algebras (especially their weak bounding cochains) of two nearby fibers \(L_{u}\) and \(L_{u^{\prime}}\) in the adjacent charts via pseudo-isotopies (also known as _Fukaya's trick_). It induces an \(A_{\infty}\)-quasi isomorphism between \(CF(L_{u},L_{u})\) and \(CF(L_{u^{\prime}},L_{u^{\prime}})\), which determines the coordinate change between \(W_{u}\) and \(W_{u^{\prime}}\) given in the form of
\[W_{u}(z_{1},\cdots,z_{n})=W_{u^{\prime}}(z_{1}^{\prime},\cdots,z_{n}^{\prime}), \qquad z_{i}^{\prime}=T^{\epsilon_{i}}z_{i}(1+f_{i})\ i=1,\cdots,n \tag{2.6}\]
where \(f_{i}\equiv 0\) modulo \(\Lambda_{+}\) and \(\epsilon_{i}\) is a flux that limits to zero as \(u\) and \(u^{\prime}\) get closer to each other. We will have to allow a negative (but arbitrarily close to \(0\)) valuation of \(f_{i}\) in the
main application, which is the source of a few interesting features. For instance, see Remark 2.3 and also 5.3.
### Wall-crossing in \(\dim_{\mathbb{C}}X=2\)
Now we suppose \(\dim_{\mathbb{C}}X=2\). Given a generic nodal fiber (topologically a once-pinched torus) of \(X\backslash D\to B\), we describe more concretely the shape of the associated wall structure and its wall-crossing transformation. Let \(L_{0}=\varphi^{-1}(u_{0})\) denote the nodal fiber, and consider the vanishing cycle2 in \(H_{1}(L_{u};\mathbb{Z})\) for \(L_{u}\) a nearby fiber. The corresponding vanishing thimble produces a holomorphic disk of Maslov index \(0\). Let us denote its class by \(\beta_{0}\). Torus fibers bounding this \(\beta_{0}\)-disk are aligned along an affine line with respect to complex affine coordinates emanating from \(u_{0}\) (see, for instance, [15, Proposition 5.6]), which we call an _initial wall_ or _initial ray_. Maslov index \(2\) disks can be glued with this to produce new Maslov index \(2\) disks, which results in a nontrivial change of the potential. Note that the affine structure has a nontrivial monodromy around the singular fiber, and one usually chooses a branch-cut (some infinite ray starting from \(u_{0}\)) in \(B\), across which the affine coordinates jump by this monodromy.
Footnote 2: It becomes a genuine vanishing cycle in Picard-Lefschetz theory after the hyperkähler rotation.
When we go across the initial wall positively as in Figure 1, the two potentials (or the corresponding local mirror charts) are related by the coordinate change
\[z^{\gamma}\mapsto z^{\gamma}(1+f(z^{\beta_{0}}))^{\langle\partial\beta_{0},\gamma\rangle} \tag{2.7}\]
where \(f\) can be formulated in terms of open Gromov-Witten invariants [15].
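In the simplest case, where the only Maslov \(0\) contribution comes from a single disk in class \(\beta_{0}\) counted once, so that \(f(z^{\beta_{0}})=z^{\beta_{0}}\), the transformation takes a familiar cluster-type form: choosing a basis \(\gamma_{1}=\partial\beta_{0},\gamma_{2}\) of \(H_{1}(L_{u};\mathbb{Z})\) with \(\langle\partial\beta_{0},\gamma_{2}\rangle=1\), and using \(\langle\partial\beta_{0},\partial\beta_{0}\rangle=0\), (2.7) reads

\[z^{\gamma_{1}}\mapsto z^{\gamma_{1}},\qquad z^{\gamma_{2}}\mapsto z^{\gamma_{2}}\,(1+z^{\beta_{0}}),\]

the standard wall-crossing transformation attached to a focus-focus singularity.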
Initial walls form building blocks of the entire wall structure on \(B\). In fact, removing arbitrarily small neighborhoods \(B_{sing}\) of branch-cuts, the union of all the walls can be identified as the minimal consistent scattering diagram containing these rays drawn on
\[B_{reg}:=B\backslash B_{sing}.\]
Here, the scattering diagram is the set of walls (affine rays with integral slopes) coupled with the wall-crossing transformations (of the form (2.7)), and the diagram is called _consistent_ if for any loop in \(B_{reg}\), the composition of all wall-crossing transformations for the intersecting walls is the identity. We will denote by \(\mathfrak{D}\) the consistent scattering diagram on \(B_{reg}\) consisting of walls. It is known that the additional rays (those other than initial rays) in \(\mathfrak{D}\) are produced by collisions of initial rays and are completely determined by consistency. We refer readers to [1] for more details about the scattering diagram and its appearance in the SYZ mirror construction.
It is more convenient to understand the wall structure in terms of tropical geometry. In fact, points \(u\in B_{reg}\) on the wall can also be characterized as possible ends of Maslov \(0\) tropical disks in \(B_{reg}\) which are defined (in our particular geometric situation) as follows.
Figure 1.
**Definition 2.2**.: _A Maslov \(0\) tropical disk in \(B_{reg}\) with end at \(u\in B_{reg}\) is the image of a continuous map \(h:T\to B_{reg}\) away from the union of branch-cuts satisfying the following. Denote by \(T_{0}\) and \(T_{1}\) the sets of vertices and edges of \(T\), respectively._
1. \(T\) _is a rooted tree with a unique root_ \(x\) _and_ \(h(x)=u\)_;_
2. \(h|_{e}\) _embeds_ \(e\in T_{1}\) _onto an affine line segment with integral slope (with respect to complex affine coordinates);_
3. \(h\) _maps a leaf (that belongs to a finite edge) to one of the singular fibers_ \(u_{0}\)_;_
4. _there exists a weight function_ \(w:T_{1}\to\mathbb{Z}_{>0}\) _such that at any_ \(v\in T_{0}\)_, one has the balancing condition_ \[\sum_{i}w(e_{i})v(e_{i})=0\] (2.8) _where the sum is taken over all edges incident to_ \(v\)_, and_ \(v(e_{i})\) _denotes the primitive vector along_ \(h(e_{i})\) _pointing away from_ \(v\)_._
_To a tropical disk, one can assign a relative homology class_
\[\beta:=\sum_{x_{i}\text{ a leaf}}\beta_{h(x_{i})}\in H_{2}(X\backslash D,L) \tag{2.9}\]
_where \(\beta_{h(x_{i})}\) is the class of vanishing thimbles for the singular fiber \(L_{h(x_{i})}\). (Hence, one can also make sense of a boundary class\((=\partial\beta\in H_{1}(L_{u}))\) of the tropical disk.)_
Notice that the initial walls explained above are precisely the simplest kind of Maslov \(0\) tropical disks. Moreover, if there exist two Maslov \(0\) tropical disks ending at \(u\in B_{reg}\), then they can glue together to produce a new class of disks. Namely, we take the union of these two disks possibly with nontrivial multiplicities and add an additional edge emanating from \(u\) which is determined by (2.8). This is precisely the reason (on the tropical side) we obtain a consistent scattering diagram from the wall structure.
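For instance, if two initial rays of primitive directions \((1,0)\) and \((0,1)\) (with weight \(1\)) intersect at \(u\), then gluing them at \(u\) requires an additional edge of weight \(1\) and primitive direction \(m\) satisfying

\[1\cdot(-1,0)+1\cdot(0,-1)+1\cdot m=0,\qquad\text{i.e.,}\quad m=(1,1),\]

by the balancing condition (2.8). The resulting Maslov \(0\) tropical disks account for the new wall of direction \((1,1)\) emanating from the intersection point, which is the simplest instance of the consistency of \(\mathfrak{D}\).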
#### 2.2.1. Extension of local coordinates to the gluing of local mirrors
Consider two small regions \(U_{1}\) and \(U_{2}\) on opposite sides of a single wall in \(B_{reg}\), which do not contain any walls themselves. Suppose \(U_{1}\) and \(U_{2}\) above are both away from a neighborhood of the branch-cuts. Denote by \(W_{1}\) and \(W_{2}\) the local mirror potentials defined on these regions. Each \(W_{i}\) can be described as a single convergent Laurent series on \(U_{i}\times(\Lambda_{U})^{2}\) after trivializing the fibration (that induces coordinates (2.4)). We can choose a basis \(\{f_{1},f_{2}\}\) of \(H_{1}(L_{u};\mathbb{Z})\) for \(u\in U_{1}\cup U_{2}\) such that the associated coordinates \((z_{1},z_{2})\) and \((z_{1}^{\prime},z_{2}^{\prime})\) on respectively \(U_{1}\) and \(U_{2}\) make the wall-crossing transformation in the simple form
\[z_{1}^{\prime}=T^{\langle f_{1},u-u^{\prime}\rangle}z_{1},\quad z_{2}^{\prime }=T^{\langle f_{2},u-u^{\prime}\rangle}z_{2}(1+f(z_{1})) \tag{2.10}\]
compared at \(u(=val(z_{1},z_{2}))\in U_{1}\) and \(u^{\prime}(=val(z_{1}^{\prime},z_{2}^{\prime}))\in U_{2}\), where \(f\) is a power series appearing in the wall-crossing transformation (2.6). (For e.g., choose \(f_{1}\) in the basis to be the class of \(\partial\beta_{0}\) where \(\beta_{0}\) is the class of Maslov zero disks responsible for the wall between \(U_{1}\) and \(U_{2}\).)
Notice that if \(f\) in (2.10) belongs to \(\Lambda_{+}\), then the transformation does not create any extra energy (the exponent of \(T\)) except the flux between different points in \(B\) which only indicates the difference of the locations of \(u\) and \(u^{\prime}\). Therefore it is possible to extend the
coordinates \(z_{1},z_{2}\) throughout the second chart \(U_{2}\) simply by solving \(z_{i}^{\prime}=z_{i}(1+f_{i}(z))\) in \(z_{i}\) as long as the solution \(z_{i}\) is well-defined (i.e., convergent as a power series in \(z_{i}^{\prime}\) over \(\Lambda\)).
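Concretely, in the situation of (2.10) where the correction depends only on \(z_{1}\), inverting \(1+f(z_{1})\) amounts to the geometric series

\[(1+f(z_{1}))^{-1}=\sum_{k\geqslant 0}(-f(z_{1}))^{k},\]

which converges in the \(T\)-adic topology exactly when \(val(f(z_{1}))>0\), since \(val(f(z_{1})^{k})=k\cdot val(f(z_{1}))\to\infty\). It is this step that may fail when \(f\) contains terms of negative valuation, as in Remark 2.3 below.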
**Remark 2.3**.: _In the actual application, \(f_{i}\) in (2.10) can be of negative valuation (see, for e.g., (2.12)), and it is possible for a certain \(z_{1}\) that \(\text{val}(z_{2}^{\prime})\) and \(\text{val}(z_{2})\) are significantly different. For this reason, this analytic continuation from \(U_{1}\) cannot cover some codimension \(1\) subset in \(U_{2}\), and vice versa._
More concretely, we define \(\bar{z}_{i}\) on \(U_{1}\cup U_{2}\) in such a way that \(\bar{z}_{1}=T^{\int_{A_{1}}\omega}hol_{f_{1}}\nabla(=z_{1}=z_{1}^{\prime})\) everywhere on \(U_{1}\cup U_{2}\) and \(\bar{z}_{2}=z_{2}\) on \(U_{1}\), but \(\bar{z}_{2}=z_{2}^{\prime}(1+f(z_{1}^{\prime}))^{-1}\) on \(U_{2}\). In these coordinates, one only needs to keep the expression \(W_{1}\) to describe the mirror over \(U_{1}\cup U_{2}\) since \(W_{2}(z_{1}^{\prime},z_{2}^{\prime})=W_{1}(\bar{z}_{1},\bar{z}_{2})\) by definition of \(\bar{z}_{i}\) as long as the right hand side converges.
### Log CY surfaces and toric models
Our main interest is the mirror LG model for the complex surface \(X\) obtained by a non-toric blowup of a toric surface \(X_{\Sigma}\), that is, we allow the blowup center to consist of generic points in a toric divisor \(D_{\Sigma}\). \(X\) forms a log Calabi-Yau pair together with \(D\) the proper transform of \(D_{\Sigma}\). More generally, by [1, Proposition 1.3], any log Calabi-Yau surface \((X^{\prime},D^{\prime})\) can be represented as a blowdown of such a surface \(X\) where the blowdown this time contracts divisors in \(D\). Namely, for any \((X^{\prime},D^{\prime})\), one can find a diagram
(2.11)
where \(\pi\) is the non-toric blowup, and \(\pi^{\prime}\) is a blowup of \(X\) along nodal points in \(D^{\prime}\) (\(\pi^{\prime}\) is often called a toric blowup). (2.11) is called a toric model of \((X^{\prime},D^{\prime})\).
Throughout, we assume that the symplectic sizes of all the exceptional divisors are small enough, so they are located near infinity of \(B\) when written in complex affine coordinates. Thus the class of the symplectic form \(\omega=\omega_{X}\) on \(X\) is given as \([\omega]=[\pi^{*}\omega_{X_{\Sigma}}-\sum\epsilon_{i}E_{i}]\) where \(E_{i}\)'s are exceptional classes and \(0<\epsilon_{i}\ll 1\). As before, we choose a small neighborhood of each branch cut near \(E_{i}\) and consider its complement \(B_{reg}\). In [1], the first author constructed a Lagrangian fibration on the divisor complement, \(X\backslash D=X^{\prime}\backslash D^{\prime}\to B\), which is special on \(B_{reg}\), and showed that the resulting scattering diagram on \(B_{reg}\) agrees with the one in [10] induced by some algebraic curve counting. Torus fibers are automatically weakly-unobstructed if \(D\) is positive for degree reasons, and this is the case for general \(D\) by Lemma 2.9 below.
If we take a non-toric blowup at a generic point in \(D_{\Sigma}\) of the toric model, then the fibration is obtained by gluing in the local model appearing in [10] to the Lagrangian fibration pulled-back from \(X_{\Sigma}\) away from exceptional divisors. Each point in the blowup center corresponds to one nodal fiber in the torus fibration on the blowup. If \(z_{1}\) denotes the coordinate induced from the boundary class \(\partial\beta_{0}\) of the associated Maslov \(0\) disk, then the wall crossing formula is given as
\[\begin{array}{l}z_{1}=z_{1}^{\prime}(1+T^{-\epsilon}z_{2}^{\prime})\\ z_{2}=z_{2}^{\prime}\end{array} \tag{2.12}\]
where \(-\epsilon\) in the formula accounts for the difference between \(\pi^{*}\omega_{X_{\Sigma}}\) and \(\omega\).
Each exceptional divisor shares one point with the associated nodal fiber, and hence all the nodal fibers are sitting close to infinity as well. The nontrivial monodromy of the affine structure near the nodal fiber (singularity of the affine structure) can be pushed to
a single branch cut, which we take to be the ray from the singular fiber toward infinity. See Figure 1.
### The mirror LG model for \((X,D)\) and geometric critical points
In the situation of Proposition 2.8, if \(u\) remains in a chamber \(R\) (a connected component of \(B_{reg}\backslash\)union of walls), then \(W_{u}\) varies continuously, and hence gives a well-defined function on \(\tilde{Y}_{R}:=val^{-1}(R)\) where \(val:\tilde{Y}\to B_{reg}\) is the dual torus fibration in the rigid analytic setting appearing in 2.1.3 We obtain a global function by gluing these local pieces via the wall-crossing transformations (2.6).
Footnote 3: Strictly speaking, one has to take \(\tilde{Y}\) in this context to be the family Floer mirror of \(\varphi^{-1}(B_{reg})(\subset X)\to B_{reg}\).
In view of the discussion in 2.2.1, it is enough to consider some fixed expression of \(W_{R}\) valid only over some chamber \(R\), but with an enlarged domain. The expression (2.14) a priori defines a function on \((\Lambda^{*})^{2}\) (as long as it converges). Note that the coordinates \(z_{1},z_{2}\) are actually global as we have removed small neighborhoods of branch-cuts. Indeed, the image of the embedding of \(B_{reg}\) into \(\mathbb{R}^{2}\) by (global) symplectic affine coordinates \((val(z_{1}),val(z_{2}))\) converges to the moment polytope \(\Delta_{\Sigma}\) as \(B_{sing}\) shrinks. This naturally leads to the following definition.
**Definition 2.4**.: _A critical point of \(W_{R}\), considered as a function on the maximal domain of convergence, is called_ **geometric** _if its valuation lies in \(B_{reg}\approx\Delta_{\Sigma}\)._
It is possible that geometric critical points sit over the codimension 1 subset not covered by the analytic continuation (Remark 2.3). We can handle this problem in two different ways. First, we can choose the location of the blowup center cleverly so that there exists a chamber which contains every geometric critical point (Lemma 4.17). Alternatively, we can add an extra chart coming from Floer deformation theory of nodal fibers as in 5.3. The latter works only for the semi-Fano case.
In practice, we will work with some special chamber \(R=R_{0}\) defined as follows. Recall that the wall structure associated to the torus fibration on \(X\backslash D\) is given as the minimal consistent scattering diagram that contains initial rays determined by Maslov zero disks emanating from singular fibers. For convenience, we will always choose locations of points in the blowup center (for \(X\to X_{\Sigma}\)) to be near the corners of \(D_{\Sigma}\), or more precisely right after the corner when traveling around \(D_{\Sigma}\) counterclockwise. If a point in the blowup center lies in an irreducible component \(D_{\Sigma,i}\) of \(D_{\Sigma}\), then we require its valuation to be close enough to the corner \(D_{\Sigma,i}\cap D_{\Sigma,i-1}\). In this case, the corresponding initial ray (the wall) in the scattering diagram is parallel to \(\nu_{i}\). See Figure 1(b).
We claim that with this choice of a blowup center, there always exists an open chamber \(R_{0}\) (a connected component of \(B_{reg}\) minus walls that has a nontrivial area) in the scattering diagram that is surrounded by initial rays.
**Lemma 2.5**.: _Let \(p_{i}\) be the blowup point in \(D_{\Sigma,i}\) farthest from the corner of \(D_{\Sigma}\) that comes right before \(D_{\Sigma,i}\) when traveling \(D_{\Sigma}\) counterclockwise. Denote by \(R_{0}\) the region in \(B\) enclosed by initial rays for \(p_{i}\)'s (see Figure 1(b)). Then no point in the interior of \(R_{0}\) bounds a Maslov 0 tropical disk in \(X\backslash D\)._
Proof.: We give each edge of a Maslov 0 tropical disk an auxiliary orientation so that it points toward the end. We claim that for any edge of a Maslov 0 tropical disk, \(R_{0}\) lies on the right side of the line extending the edge (with respect to the orientation of the line chosen as above). To see the claim, we proceed by induction on the number of vertices of tropical disks. Initial rays clearly satisfy this condition.
Suppose now a Maslov \(0\) tropical disk \(h:T\to B_{reg}\) is given, and consider the first vertex \(v\) that we meet when starting from \(u\) walking in the reverse orientation. Removing the edge \(e_{0}\) of \(h(T)\) incident to the end \(u\), one obtains two Maslov \(0\) tropical disks intersecting at \(v\), both of which should satisfy the induction hypothesis. We then only need to check the condition for the removed edge \(e_{0}\) of \(h(T)\), which is obvious from the balancing condition (2.8).
Suppose now that the divisors \(D_{\Sigma}\), \(D\) and \(D^{\prime}\) appearing in (2.11) do not contain any rational curve with a negative Chern number. (Obviously, it is enough to check this for \(D\).) Note that the disk counting (the number \(N_{\beta}\) for each disk class \(\beta\)) remains constant on \(R_{0}\). The corresponding local mirror is \(\tilde{Y}_{R_{0}}\) fibered over \(R_{0}\) with \((\Lambda_{U})^{2}\)-fibers, and the count of Maslov index \(2\) holomorphic disk defines an analytic function on \(\tilde{Y}_{R_{0}}\). We then globalize this local LG model on \(\tilde{Y}_{R_{0}}\) by the trick mentioned in 2.2.1. We shall give the tropical description of this in short.
**Remark 2.6**.: _If the boundary divisor has spheres with negative Chern numbers, there could be additional Maslov \(0\) disks in \(X\) obtained as stable disks consisting of higher Maslov disks attached with negative sphere bubbles. Hence, it is possible to have strictly more walls than those intrinsic to \(X\backslash D\). In this case, we can only guarantee that the coefficients of certain leading order terms of \(W\) remain constant over \(R_{0}\), which is enough for our purpose of analyzing critical points._
### Tropical description of the Landau-Ginzburg mirror
Suppose now that \(D\) is positive, i.e., \((X,D)\) is Fano. We write \(D_{i}\) for the proper transform of \(D_{\Sigma,i}\). In this case, the mirror potential can be computed tropically as shown in [1]. We briefly review the definition of tropical disks, especially those which account for the potential. The count of tropical disks matches that of holomorphic disks due to the correspondence between tropical and holomorphic disks established therein. Let us first focus on \((X,D)\) together with its associated special Lagrangian fibration on \(X\backslash D\) explained above.
**Definition 2.7**.: _A tropical disk in \(B\) with end at \(u\in B_{reg}\) is the image of a continuous map \(h:T\to B_{reg}\) away from the union of branch-cuts that satisfies conditions (i) - (iv) in Definition 2.2, but additionally allowing an edge \(e\in T_{1}\) to be unbounded subject to the condition below. (Alternatively such \(e\) may also be viewed as an edge incident to a leaf sitting at infinity.)_
Figure 2.
5. _if_ \(e\in T_{1}\) _is unbounded, then_ \(h(e)\) _is an affine ray in_ \(B_{reg}\) _(with respect to complex affine coordinates) which approaches infinity along_ \(-\nu_{i}\) _where_ \(\nu_{i}\) _is a primitive generator of the fan of_ \(\widehat{X}\)_._
_Finally, the Maslov index of a tropical disk is defined as twice the number of unbounded edges._
Fix a point \(u\), or equivalently the fiber \(L_{u}\). It is automatically weakly-unobstructed for degree reasons. Let us first look at a basic Maslov 2 disk bounding \(L_{u}\), that is, a disk intersecting \(D\) exactly once. Therefore, it corresponds to a tropical disk which has a unique unbounded edge \(e\) with \(h(e)\) an affine ray in \(B_{reg}\) approaching infinity along the direction perpendicular to \(D\). Indeed a basic disk does project to such an affine ray in the base (when written in the complex affine coordinates). By definition, any unbounded edge of a tropical disk has its corresponding basic Maslov 2 disk, and hence the Maslov index of a tropical disk is a tropical interpretation of the index formula which equates the Maslov index of a holomorphic disk with twice its intersection number with the anticanonical divisor [1].
Suppose a tropical disk \(h:T\to B_{reg}\) has unbounded edges \(e_{1},\cdots,e_{l}\) that are perpendicular to \(D_{i_{1}},\cdots,D_{i_{l}}\) (proper transforms of \(D_{\Sigma,i_{1}},\cdots,D_{\Sigma,i_{l}}\)), and its finite leaves map to singular fibers at \(x_{1},\cdots,x_{l^{\prime}}\). Then one can assign a relative homology class to this tropical curve (which will be also referred to as the class of \(h(T)\)) given by
\[\beta=\sum_{a=1}^{l}\widetilde{\beta_{\nu_{i_{a}}}}+\sum_{b=1}^{l^{\prime}}\beta_{h(x_{b})}\in H_{2}(X,L) \tag{2.13}\]
where \(\widetilde{\beta_{\nu_{i}}}\) denotes the proper transform of the class \(\beta_{\nu_{i}}\) of the basic Maslov 2 disk that hits \(D_{\Sigma,i}\) exactly once away from the blowup center, and \(\beta_{h(x_{b})}\) is the class of the vanishing thimble as in (2.9). \(\beta_{h(x_{b})}\) and \(\beta_{h(x_{b^{\prime}})}\) give the same class if singular fibers at \(x_{b}\) and \(x_{b}^{\prime}\) arise from blowup points lying in the same toric divisor.
Observe that the intersection of \(h(T)\) and the walls in the scattering diagram \(\mathfrak{D}\) is a disjoint union of Maslov 0 tropical disks, say \(h_{i}:T_{i}\to B\) for \(i=1,2,\cdots,l^{\prime\prime}\). In fact, \(h(T)\) joins \(h_{1}(T_{1}),\cdots,h_{l^{\prime\prime}}(T_{l^{\prime\prime}})\) and the unbounded edges \(e_{1},\cdots,e_{l}\), and the balancing condition determines the remaining part of \(h(T)\). It is easy to see that the sum of classes of \(h_{i}(T_{i})\) equals the second summand of (2.13). First introduced in [10], the complement \(h(T)\backslash\cup_{i}h_{i}(T_{i})\) is usually called a _broken line_, from which one can completely recover \(h(T)\) itself by the above discussion. It is a piecewise linear curve in the scattering diagram, whose non-smooth points lie in walls. At a wall, it can bend towards the direction that is given as a positive multiple of the primitive direction of the wall (or the corresponding Maslov 0 tropical disk). See Figure 3. The class of a broken line is defined as the class of its associated Maslov 2 tropical disk. For a more formal definition of the broken line, see for e.g., [10, Definition 4.2].
In summary:
**Proposition 2.8**.: _Suppose \((X,D)\) is a log Calabi-Yau surface, and each irreducible component of \(D\) has a positive Chern number. If \(X\backslash D\) carries a special Lagrangian fibration with at worst nodal fibers, then the potential \(W_{u}\) at \(u\in B_{reg}\) can be calculated by counting broken lines with their ends at \(u\). More precisely,_
\[W_{u}=\sum_{\beta\in H_{2}(X,L_{u})}N_{\beta}^{\text{trop}}T^{\omega(\beta)}z^{\partial\beta} \tag{2.14}\]
_where \(N_{\beta}^{\text{trop}}\) is the number of broken lines in class \(\beta\)._
The mirror potential for a general log Calabi-Yau surface \((X^{\prime},D^{\prime})\) fitting into the toric model (2.11) can be obtained from \(W_{u}\) (2.14) above by removing \(\beta\) that nontrivially intersects divisors in \(D\) contracted under \(\pi^{\prime}\).
One can extend the discussion above to the semi-Fano situation, which further allows Chern number zero spheres. We refer readers to [1] for this generalization. Notice that a holomorphic disk attached with such a sphere bubble contributes the same monomial as the disk itself, but with a coefficient that has higher energy (valuation).
### Non-Fano situation
When the divisor \(D\) has negative Chern number spheres, the weak unobstructedness of torus fibers is no longer guaranteed merely by degree considerations. One can connect \((X,D)\) with another toric surface by inductively moving the blowup point to corners, and make use of the strategy of [10, 11]. Hence [21] produces the mirror LG model defined on some rigid-analytic domain.
**Lemma 2.9**.: _A smooth Lagrangian torus fiber \(L\) is weakly unobstructed. More precisely, for any \(b\in H^{1}(L;\Lambda_{+})\) and \(\mathfrak{b}\in H^{even}(X;\Lambda_{+})\) pulled-back from a torus invariant cycle in \(X_{\Sigma}\), we have_
\[m_{0}^{\mathfrak{b}}(1)+m_{1}^{\mathfrak{b}}(b)+m_{2}^{\mathfrak{b}}(b,b)+ \cdots=W^{\mathfrak{b}}(b)\cdot[L] \tag{2.15}\]
_for some \(W^{\mathfrak{b}}(b)\in\Lambda\)._
Proof.: We use induction on the number of points in the blowup center \(C\) for the non-toric blowup \(\pi:\widetilde{X}\to X_{\Sigma}\). If \(C\) is empty, that is, if the surface is the toric surface \(X_{\Sigma}\) itself, then the fiber is simply the fiber of the moment map, and is shown to be weakly unobstructed by [10, 11] and [1, Corollary 6.6] using torus-invariant Kuranishi perturbation.
Suppose now that the statement holds when the blowup center (for the non-toric blowup) consists of fewer than \(k+1\) points, and consider \(\pi:\widetilde{X}\to X_{\Sigma}\) which is the blowup of a toric surface \(X_{\Sigma}\) at \(C\) consisting of \(k+1\) generic points in the interior of toric divisors. Choose any point \(p\in X_{\Sigma}\) in \(C\). There exists a symplectomorphism \(F\) sending \(p\) to a torus fixed point, say \(p^{\prime}\), in \(X_{\Sigma}\) supported away from \(\pi(L)\). Denote the (toric) blowup of \(X_{\Sigma}\) at \(p^{\prime}\) by \(X_{\Sigma^{\prime}}\), and the non-toric blowup of \(X_{\Sigma^{\prime}}\) at \(C\backslash\{p\}\) (or, more precisely, its image in \(X_{\Sigma^{\prime}}\)) by \(X^{\prime}\). We require that the exceptional divisor in \(X_{\Sigma^{\prime}}\) associated with \(p^{\prime}\) has the same symplectic size as \(\pi^{-1}(p)\). By construction, the inverse image \(L^{\prime}\) of \(L\) in \(X^{\prime}\) is a Lagrangian
Figure 3. Broken Lines
torus fiber of the SYZ fibration on \(X^{\prime}\). Then we have the following commutative diagram
where \(\tilde{F}\) is a symplectomorphism that lifts \(F\). Clearly \(\tilde{F}\) maps \(L^{\prime}\) to \(L\), and hence it suffices to prove the weak unobstructedness of \(L^{\prime}\), which follows from the induction hypothesis.
**Remark 2.10**.: _We speculate that the analogous statement is true for nodal fibers, i.e., any immersed generators form weak bounding cochains for any bulk-parameters. This is the case when \(X\) is semi-Fano for degree reasons. Notice that the left hand side of (2.15) can have terms in degree other than \(0\) only when there exist contributions from negative Maslov index disks._
## 3. Critical points of Laurent Polynomials
In this section, we look into the problem of finding the number of critical points of a given Laurent polynomial. In fact, a well-defined combinatorial formula already exists, given in terms of Newton polytopes, due to Kushnirenko [13] (see Theorem 3.2). However, we require a more refined version of this formula, as we wish to further estimate the non-Archimedean valuations of critical points. For this purpose, we will study the tropicalization of the Laurent polynomial, and eventually prove that the critical points must sit over vertices of the resulting tropical curve (Proposition 3.9). This is essentially known from the work of Gonzalez-Woodward [11], although not phrased in the realm of tropical geometry. Our local argument is also based on the energy induction thereof. On the other hand, our mirror potential has been calculated tropically as shown in [1], and hence the approach here fits more into our geometric setup.
### The Kushnirenko Theorem
We first briefly review the classical result of Kushnirenko, which determines the number of critical points of a given Laurent polynomial. Let \(W\in\Bbbk[z_{1}^{\pm},\ldots,z_{n}^{\pm}]\) be a Laurent polynomial with coefficients in an algebraically closed field \(\Bbbk\) of characteristic \(0\) (we will take \(\Bbbk=\Lambda\) in our geometric setup). Identifying the lattice \(\mathbb{Z}^{n}\) with Laurent monomials, i.e. \(z^{v}=z_{1}^{v_{1}}\cdots z_{n}^{v_{n}}\) for \(v\in\mathbb{Z}^{n}\), we write
\[W=\sum_{v\in\mathbb{Z}^{n}}\alpha_{v}z^{v}.\]
Recall that the _Newton polytope_\(\Delta_{W}\) of \(W\) is defined as the convex hull of the support of \(W\), \(\operatorname{supp}W:=\{\,v\in\mathbb{Z}^{n}\,\mid\,\alpha_{v}\neq 0\,\}\).
**Definition 3.1**.: _A Laurent polynomial \(W\) is said to be_ **convenient** _if the point \(0\in\mathbb{R}^{n}\) does not belong to any supporting plane of all \(d\)-dimensional faces of \(\Delta_{W}\) for \(1\leqslant d\leqslant n-1\)._
Note that when \(n=2\), a Laurent polynomial \(W\) is convenient if and only if two adjacent vertices \(v_{1},v_{2}\in\partial\Delta_{W}\) do not lie on a line through the origin.
For any closed subset \(F\) of \(\mathbb{R}^{n}\), we write \(W_{F}:=\sum_{v\in F\cap\mathbb{Z}^{n}}\alpha_{v}z^{v}\). A Laurent polynomial \(W\) is _non-degenerate_ if for any closed face \(F\) of \(\Delta_{W}\), the system
\[\left(z_{1}\frac{\partial W_{F}}{\partial z_{1}}\right)_{F}=\cdots=\left(z_{n}\frac{\partial W_{F}}{\partial z_{n}}\right)_{F}=0\]
has no solution in \((\Bbbk^{\times})^{n}\). In this case, the Newton polytope \(\Delta_{W}\) is also called non-degenerate.
**Theorem 3.2**.: _[_10_, Theorem III]_ _Let \(\Bbbk\) be an algebraically closed field with characteristic 0. If a convenient Laurent polynomial \(W\) is non-degenerate,_
\[|\mathrm{Crit}(W)|=n!V_{n}(\Delta_{W})\]
_where \(|\mathrm{Crit}(W)|\) is the number of critical points of \(W\) counted with multiplicity, and \(V_{n}(\Delta_{W})\) is the \(n\)-dimensional volume of the Newton polytope \(\Delta_{W}\)._
We are mainly interested in the case \(n=2\). In this case, it is not difficult to see that once \(W\) is convenient, it is non-degenerate for a generic choice of coefficients. An analogous statement should be true in arbitrary dimension since the system \(\left(z_{1}\frac{\partial W_{F}}{\partial z_{1}}\right)_{F}=\cdots=\left(z_{n}\frac{\partial W_{F}}{\partial z_{n}}\right)_{F}=0\) is overdetermined.
**Example 3.3**.: _Consider \(W(z)=az_{1}z_{2}+bz_{1}z_{2}^{2}+cz_{1}^{2}z_{2}^{2}\) for some \(a,b,c\in\Bbbk\). \(W\) has no critical points for generic \(a,b,c\), whereas \(\Delta_{W}\) has positive volume. Indeed, \(W\) is not convenient, since the exponents of the two terms \(z_{1}z_{2}\) and \(z_{1}^{2}z_{2}^{2}\) are collinear with the origin. On the other hand, it has infinitely many critical points for \(b=0\)._
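To make this concrete, here is a small computational check (our own illustration, not part of the original argument; it assumes sympy is available): for the convenient polynomial \(W=z_{1}+z_{2}+\frac{1}{z_{1}z_{2}}\) the Kushnirenko count \(2!\,V_{2}(\Delta_{W})=3\) matches the number of torus critical points, while the polynomial of Example 3.3 has none for a generic choice of \(a,b,c\).

```python
# A hedged sanity check of Theorem 3.2 and Example 3.3 (illustrative only).
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

# (i) Convenient case: W = z1 + z2 + 1/(z1*z2).  Its Newton polytope is the
#     triangle with vertices (1,0), (0,1), (-1,-1), of area 3/2, so Theorem 3.2
#     predicts 2! * (3/2) = 3 critical points on the torus.
W = z1 + z2 + 1/(z1*z2)
crit = sp.solve([sp.diff(W, z1), sp.diff(W, z2)], [z1, z2], dict=True)
print(len(crit))   # 3

# (ii) Example 3.3 with a concrete "generic" choice of a, b, c.  On the torus
#     we may divide each partial derivative by its nonvanishing monomial
#     factor; the resulting system is inconsistent, so there are no critical
#     points even though the Newton polytope has positive area.
a, b, c = 2, 3, 5
print(sp.solve([a + b*z2 + 2*c*z1*z2, a + 2*b*z2 + 2*c*z1*z2], [z1, z2]))   # []
```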
### Tropicalization of Laurent Polynomials and the Duality Theorem
We recall some basics from tropical geometry (mostly without proof). Readers are referred to any expository article on tropical geometry for more details; see, e.g., [1]. Throughout, we fix our base field to be the Novikov field \(\Lambda\). Let us take a Laurent polynomial \(W(z_{1},\cdots,z_{n})\) over \(\Lambda\), and write it as \(W=\sum_{i=1}^{N}a_{i}z^{v_{i}}\) for \(v_{i}\in\mathbb{Z}^{n}\). Then its tropicalization is a polyhedral complex in \(\mathbb{R}^{n}\) (e.g., a piecewise linear graph in \(\mathbb{R}^{2}\) when \(n=2\)) defined as follows:
**Definition 3.4**.: _The_ **tropicalization** _of a Laurent polynomial \(W=\sum_{i=1}^{N}a_{i}z^{v_{i}}\) in \(\Lambda[z_{1}^{\pm},\cdots,z_{n}^{\pm}]\) is a subset of \(\mathbb{R}^{n}\) given as the corner locus of the piecewise linear function_
\[\tau_{W}:\mathbb{R}^{n}\to\mathbb{R},\ \ \ \ (x_{1},\cdots,x_{n})\mapsto\min_{i} \Big{\{}\lambda_{i}+\langle v_{i}\,,\,(x_{1},\cdots,x_{n})\rangle\,\mid\,\,i \in\mathrm{supp}\,W\,\Big{\}}\]
_where \(\lambda_{i}=\text{val}(a_{i})\) and \(\mathrm{supp}\,W:=\{v_{i}\in\mathbb{Z}^{n}\mid a_{i}\neq 0\}\). We denote the tropicalization of \(W\) by \(\mathrm{Trop}(W)\)._
The tropicalization \(\mathrm{Trop}(W)\) consists of polygonal faces of different dimensions. For generic coefficients (hence generic \(\lambda_{i}\)), every \((n-1)\)-dimensional face of \(\mathrm{Trop}(W)\) is the locus where exactly two linear functions \(\lambda_{i}+\langle v_{i},(x_{1},\cdots,x_{n})\rangle\) coincide. We assume that this is always the case from now on. One can assign a _weight_ \(w_{E}\) to each \((n-1)\)-face in the following way. Suppose \(\lambda_{i}+\langle v_{i},(x_{1},\cdots,x_{n})\rangle\) and \(\lambda_{j}+\langle v_{j},(x_{1},\cdots,x_{n})\rangle\) agree along \(E\). Then \(w_{E}\) is the maximal positive integer for which \(\frac{v_{i}-v_{j}}{w_{E}}\) is an integer vector.
If an \((n-2)\)-face \(V\) of \(\mathrm{Trop}(W)\) is incident to the \((n-1)\)-faces \(E_{i_{1}},\cdots,E_{i_{k}}\), one has the _balancing condition_
\[\sum_{l=1}^{k}w_{i_{l}}v_{i_{l}}=0,\]
where \(w_{i_{l}}\) is the weight of \(E_{i_{l}}\) and \(v_{i_{l}}\) is the primitive integer vector parallel to \(E_{i_{l}}\) such that \(x+\delta v_{i_{l}}\) belongs to \(E_{i_{l}}\) for \(x\in V\) and a small enough positive \(\delta\). When \(n=2\), \(v_{i_{l}}\) is simply the outward primitive vector along the edge \(E_{i_{l}}\) (pointing away from the vertex \(V\)).
**Remark 3.5**.: _The tropicalization of a complex hypersurface \(\mathcal{C}\) in \((\mathbb{C}^{\times})^{n}\) can be obtained as a degeneration of tropical amoebas of a family of hypersurfaces \(\mathcal{C}_{t}\) parametrized by (real powers of) \(t\);_
\[\mathrm{Trop}(\mathcal{C}):=\lim_{t\to\infty}Log_{t}(\mathcal{C}_{t})\]
_where \(\mathrm{Log}_{t}:(\mathbb{C}^{*})^{n}\to\mathbb{R}^{n}\) is given by \((z_{1},\cdots,z_{n})\mapsto(\log_{t}|z_{1}|,\cdots,\log_{t}|z_{n}|)\). Formally, our Novikov variable \(T\) is related with \(t\) by \(t=1/T\), and hence, the limit \(t\to\infty\) corresponds to the large structure limit \(\omega\to\infty\) in the sense of substitution \(T=e^{-\omega}\)._
There is a convenient way of determining the combinatorial type of the tropicalization \(\mathrm{Trop}(W)\) using duality between \(\mathrm{Trop}(W)\) and the Newton polytope \(\Delta_{W}\). Let \(\psi_{W}:\Delta_{W}\to\mathbb{R}\) be defined as the _maximal_ convex piecewise linear function satisfying
\[\psi_{W}(v_{i})\leqslant\lambda_{i}(=val(a_{i})), \tag{3.1}\]
which is completely determined by \(W\). The domains of linearity of \(\psi_{W}\) determine a subdivision \(\mathcal{S}_{W}\) of \(\Delta_{W}\) into lattice polygons. \(\mathcal{S}_{W}\) is usually referred to as the _Newton subdivision_ of the Newton polytope \(\Delta_{W}\). Note that the intersection of two cells in the subdivision must occur along a common lower dimensional cell.
The tropicalization \(\mathrm{Trop}(W)\) is constructed as the dual complex of the polyhedral decomposition \(\mathcal{S}_{W}\) of \(\Delta_{W}\). Specifically, each top-dimensional cell of \(\mathcal{S}_{W}\) corresponds to a vertex in \(\mathrm{Trop}(W)\). Two vertices of \(\mathrm{Trop}(W)\) are joined by an edge, say \(E\), whenever their corresponding top cells in \(\mathcal{S}_{W}\) intersect along a shared \((n-1)\)-dimensional cell, say \(E^{*}\), with the edge \(E\) being perpendicular to \(E^{*}\). Consequently, the subdivision \(\mathcal{S}_{W}\) completely determines the combinatorial type of \(\mathrm{Trop}(W)\) (see Figure 4 for an example when \(n=2\)).
The following Proposition summarizes our discussion so far.
**Proposition 3.6**.: _The tropicalization \(\mathrm{Trop}(W)\) is dual to the subdivision \(\mathcal{S}_{W}\) of \(\Delta_{W}\) determined by the domains of linearity of \(\psi_{W}:\Delta_{W}\to\mathbb{R}\) defined in (3.1)._
**Remark 3.7**.: _Due to difference of conventions as in Remark 3.5, \(\mathrm{Trop}(W)\) should be rotated \(180\) degrees to become precisely the dual of the subdivision \(\mathcal{S}_{W}\)._
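In practice, the subdivision \(\mathcal{S}_{W}\) can be computed as a regular (lower-hull) subdivision. The following sketch is our own helper with hypothetical toy data (it assumes numpy and scipy are available and that the lifted points are in general position): each exponent \(v_{i}\) is lifted to height \(\lambda_{i}=val(a_{i})\), and the lower facets of the convex hull project to the cells of \(\mathcal{S}_{W}\), which are dual to the vertices of \(\mathrm{Trop}(W)\).

```python
# A minimal sketch (not from the paper): the Newton subdivision S_W as the
# lower convex hull of the exponents lifted to height val(coefficient).
import numpy as np
from scipy.spatial import ConvexHull

def newton_subdivision(terms):
    """terms: list of ((v1, v2), lam) -- exponent vector and coefficient valuation."""
    pts = np.array([[v[0], v[1], lam] for v, lam in terms], dtype=float)
    hull = ConvexHull(pts)                     # assumes lifted points are non-coplanar
    cells = []
    for simplex, eq in zip(hull.simplices, hull.equations):
        if eq[2] < 0:                          # outward normal points down => lower facet
            cells.append([tuple(pts[i, :2]) for i in simplex])
    return cells

# Toy example: W = z1 + z2 + T^3 z1 z2 + T (z1 z2)^{-1}
terms = [((1, 0), 0.0), ((0, 1), 0.0), ((1, 1), 3.0), ((-1, -1), 1.0)]
for cell in newton_subdivision(terms):
    print(cell)   # two triangles: S_W has two 2-cells, so Trop(W) has two vertices
```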
Observe that the weight of a top dimensional face \(E\) of \(\mathrm{Trop}(W)\) equals the integral length of its dual \(1\)-cell \(E^{*}\) in \(\mathcal{S}_{W}\). Similarly, we define the weight \(|V|\) of a vertex \(V\) of \(\mathrm{Trop}(W)\) to be the volume \(|V|:=n!vol_{n}(F^{*})\) of the \(n\)-cell \(F^{*}\) in \(\mathcal{S}_{W}\) dual to \(V\). If exactly \((n+1)\) top dimensional faces intersect at a vertex \(V\) of \(\mathrm{Trop}(W)\), its multiplicity equals \(w_{i_{1}}\cdots w_{i_{n}}|\mathrm{det}(v_{i_{1}}\cdots v_{i_{n}})|\) where \(w_{j}\) and \(v_{j}\) are the weight and primitive of the face \(E_{j}\) incident to \(V\), \(j=i_{1},i_{2},\cdots,i_{n}\). The balancing condition ensures that the multiplicity is well-defined, i.e. independent of the choice of \(n\) incident edges.
Each connected component of \(\mathbb{R}^{n}\backslash\mathrm{Trop}(W)\) is the domain of linearity of \(\tau_{W}\), and hence, is associated with a monomial \(a_{i}z^{v_{i}}\) of \(W\) (or the corresponding linear function
Figure 4. The dual subdivision \(\mathcal{S}_{W}\) determined by \(\Gamma_{W}\)
\(\lambda_{i}+\langle v_{i},(x_{1},\cdots,x_{n})\rangle\)). That is, if \(val(z)\) falls in such a component, then the _valuation of \(a_{i}z^{v_{i}}\) becomes the minimum_ among the valuations of all monomials of \(W\) (this directly follows from the definition of \(\operatorname{Trop}(W)\)).
Having this interpretation in mind, the top dimensional cell \(F^{*}\) in \(\mathcal{S}_{W}\) dual to a vertex \(V\in\operatorname{Trop}(W)\) can be described as follows: let \(V\) be a vertex of \(\operatorname{Trop}(W)\), and suppose that \(a_{i_{1}}z^{v_{i_{1}}},\cdots,a_{i_{k}}z^{v_{i_{k}}}\) are the monomials associated to the open components adjacent to the vertex \(V\). Then \(F^{*}\) is precisely the Newton polytope of \(\sum_{l=1}^{k}a_{i_{l}}z^{v_{i_{l}}}\). If \(val(z)=V\), then the valuations of all the monomials \(a_{i_{1}}z^{v_{i_{1}}},\cdots,a_{i_{k}}z^{v_{i_{k}}}\) coincide at \(z\). In other words, a vertex is where adjacent monomials simultaneously attain minimal valuation.
### Tropicalization and Critical points of \(W\)
We finally extract critical point information of \(W\) from its tropicalization \(\operatorname{Trop}(W)\). The argument is essentially a tropical interpretation of [10, Theorem 4.37] or [1, Theorem 10.4], which relates critical points of the leading order \(W_{0}\) (with respect to the energy, \(val\)) and those of \(W\) itself.
**Lemma 3.8**.: _Let \(W=W_{0}+W_{1}\) be a decomposition of \(W\) into the terms \(W_{0}\) of the lowest valuations (of "leading order") and the rest \(W_{1}\)._
1. _The lowest order term_ \(\alpha_{0}\) _of a critical point_ \(\alpha\) _of_ \(W\) _is a critical point of_ \(W_{0}\)_._
2. _Suppose that_ \(\alpha_{0}\in(\Lambda_{U})^{n}\) _is a critical point of_ \(W_{0}\) _and_ \(W_{0}\) _has non-vanishing Hessian_ \(\operatorname{Hess}(W_{0})\) _at_ \(\alpha_{0}\)_. Then,_ \(\alpha_{0}\) _uniquely extends to a critical point_ \(\alpha\) _of_ \(W\)_, in the sense that_ \(\alpha\) _is given by_ \(\alpha=\alpha_{0}+\alpha_{1}\) _where_ \(val(\alpha_{0})<val(\alpha_{1})\)_._
In particular, if \(W_{0}\) is Morse, then there exists a valuation-preserving one-to-one correspondence between the set of critical points of \(W_{0}\) and that of \(W\). By Kushnirenko's theorem (Theorem 3.2), \(W_{0}\) is Morse if and only if it has precisely \(n!V_{n}(\Delta_{W_{0}})\) mutually distinct critical points. Lemma 3.8 can be proven by expanding a critical point \(p=p_{0}+\cdots\) in energy-increasing order with its first order term being one of the critical points \(p_{0}\) of \(W_{0}\), and solving higher order terms order-by-order. At each stage in solving higher order terms, one essentially solves a linear equation determined by the Hessian of \(W_{0}\) at \(p_{0}\). It is thus clear that the valuation of \(p\) agrees with that of \(p_{0}\).
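As a one-variable toy illustration (ours, not from the paper), take the leading part \(W_{0}=z+T/z\) and add the correction \(T^{2}/z\), which has strictly higher energy on the fiber \(val(z)=1/2\) where the critical points live. The critical points of \(W\) then have the same leading term \(\pm T^{1/2}\) (hence the same valuation) as those of \(W_{0}\), as Lemma 3.8 predicts. A quick symbolic check with sympy:

```python
# One-variable toy check of Lemma 3.8 (illustrative only).
import sympy as sp

T, z = sp.symbols('T z', positive=True)
W0 = z + T/z            # leading-order part; critical points z = ±T^(1/2)
W  = W0 + T**2/z        # correction of strictly higher valuation at val(z) = 1/2

print(sp.solve(sp.diff(W0, z), z))   # [-sqrt(T), sqrt(T)]
crit = sp.solve(sp.diff(W, z), z)    # [-sqrt(T**2 + T), sqrt(T**2 + T)]
# Dividing by sqrt(T) and letting T -> 0 recovers ±1: each critical point of W
# has the same leading term ±T^(1/2) (hence valuation 1/2) as the corresponding
# critical point of W0.
print([sp.limit(a / sp.sqrt(T), T, 0) for a in crit])   # [-1, 1]
```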
We first clarify how the decomposition \(W=W_{0}+W_{1}\) arises in our context more precisely. In fact, the energy of the terms in \(W:(\Lambda^{\times})^{n}\to\Lambda\) depends on the point in \((\Lambda^{\times})^{n}\) at which we expand \(W\) (our main interest is the case \(n=2\)). A point in \(val^{-1}(x_{1},\cdots,x_{n})\) (\(x_{i}\in\mathbb{R}\), \(i=1,\cdots,n\)) can be written as \((T^{x_{1}}\underline{z}_{1},\cdots,T^{x_{n}}\underline{z}_{n})\) with \(val(\underline{z}_{1})=\cdots=val(\underline{z}_{n})=0\). We then take the restriction of \(W\) at \(val^{-1}(x_{1},\cdots,x_{n})\), that is,
\[W_{(x_{1},\cdots,x_{n})}:(\Lambda_{U})^{n}\to\Lambda\qquad(\underline{z}_{1}, \cdots,\underline{z}_{n})\mapsto W(T^{x_{1}}\underline{z}_{1},\cdots,T^{x_{ n}}\underline{z}_{n}).\]
Notice that only coefficients of \(W_{(x_{1},\cdots,x_{n})}(\underline{z}_{1},\cdots,\underline{z}_{n})\) can carry nonzero valuation. It is therefore reasonable to write \(W_{(x_{1},\cdots,x_{n})}\) as the sum \(W_{0}+W_{1}\), where \(W_{0}\) consists of the terms in \(W_{(x_{1},\cdots,x_{n})}\) with minimum valuation coefficients. Note that up to a common factor, \(W_{0}\) is essentially a Laurent polynomial on \((\mathbb{C}^{\times})^{n}\), or \(T^{-\delta}W_{0}\in\mathbb{C}[\underline{z}_{1}^{\pm},\cdots,\underline{z}_{n }^{\pm}]\) for the lowest valuation \(\delta\) of \(W\).
One useful consequence of Lemma 3.8 (a) is that the tropicalization of \(W\) confines the possible locations of critical points of \(W\in\Lambda[z_{1}^{\pm},\cdots,z_{n}^{\pm}]\). In particular, we cannot have any critical points that project to \(\mathbb{R}^{n}\backslash\operatorname{Trop}(W)\) under \(val\). Indeed, as observed in subsection 3.2, the lowest order part \(W_{0}\) of \(W_{(x_{1},\cdots,x_{n})}\) for \((x_{1},\cdots,x_{n})\in\mathbb{R}^{n}\backslash\operatorname{Trop}(W)\) consists of the single
monomial assigned to the connected component of \(\mathbb{R}^{n}\backslash\mathrm{Trop}(W)\) containing \((x_{1},\cdots,x_{n})\). Clearly, a single Laurent monomial cannot have any critical point in \((\Lambda_{U})^{n}\).
In general, consider any \(k\)-dimensional face \(E\) of \(\mathrm{Trop}(W)\) for \(k>0\). Along \(E\), the lowest order terms \(W_{0}\) of the restriction \(W_{(x_{1},\cdots,x_{n})}\) (\((x_{1},\cdots,x_{n})\in E\)) are those monomials assigned to the components of \(\mathbb{R}^{n}\backslash\mathrm{Trop}(W)\) whose boundaries contain \(E\). Note that the exponent \((v_{1},\cdots,v_{n})\) of any such monomial in \(W_{0}\) is orthogonal to the \(k\)-dimensional linear subspace of \(\mathbb{R}^{n}\) parallel to \(E\). (They lie in a cell of \(\mathcal{S}_{W}\) orthogonal to \(E\).) Therefore if \(W_{0}\) of the restriction of \(W\) along \(E\) is convenient, then generically it has no critical points since its Newton polytope is degenerate. Hence we have proven:
**Proposition 3.9**.: _Let \(W\) be a Laurent polynomial over \(\Lambda\), \(W\in\Lambda[z_{1}^{\pm},\cdots,z_{n}^{\pm}]\)._
1. _If_ \(z\in(\Lambda^{\times})^{n}\) _is a critical point of_ \(W\)_, then_ \(\operatorname{\mathit{val}}(z)\in\operatorname{\mathit{Trop}}(W)\)_._
2. _Suppose that the lowest order terms of_ \(W\) _restricted at a point in the interior of a positive dimensional cell_ \(E\) _of_ \(\mathrm{Trop}(W)\) _is convenient. Then the interior of_ \(E\) _cannot support a critical point of_ \(W\)_._
3. _In particular, if the lowest order terms of_ \(W\) _are convenient on the interior of every positive dimensional cell, then every critical point of_ \(W\) _must map to a vertex of_ \(\mathrm{Trop}(W)\) _under_ \(\operatorname{\mathit{val}}\)_._
4. _Suppose that the lowest order terms of_ \(W_{(x_{1},\cdots,x_{n})}\) _are convenient for some vertex_ \(V=(x_{1},\cdots,x_{n})\) _of_ \(\mathrm{Trop}(W)\)_. Then for generic coefficients, there exist exactly_ \(|V|\) _non-degenerate critical points over the vertex_ \(V\)_._
In locating the critical point of a Laurent polynomial \(W\), the ideal situation is therefore when \(W\) satisfies the condition in Proposition 3.9 (c), and additionally, the restriction of \(W\) at each vertex of \(\mathrm{Trop}(W)\) is convenient as in (d).
**Definition 3.10**.: _A Laurent polynomial \(W\) over \(\Lambda\) is said to be_ **locally convenient** _if it satisfies the condition in (c) and (d) of Proposition 3.9, i.e., any subcollection of monomials in \(W\) determined by a cell in the subdivision \(\mathcal{S}_{W}\) of its Newton polytope forms a convenient Laurent polynomial._
In this case, for generic coefficients, the critical points can be essentially read off from the decomposition \(\mathcal{S}_{W}\) of the Newton polytope of \(W\). Recall \(|V|=n!Vol_{n}(F^{*})\) where \(F^{*}\) is a top cell in \(\mathcal{S}_{W}\) dual to \(V\).
**Remark 3.11**.: _We remark that the condition in Proposition 3.9 (c) is not guaranteed for generic coefficients. For instance, \(W=az_{1}+\frac{b}{z_{1}}+cz_{2}+\frac{d}{z_{2}}\) (the mirror of \(\mathbb{P}^{1}\times\mathbb{P}^{1}\)) satisfies the condition only when \(a=b=c=d\). Such cases will be dealt with separately._
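For the reader's convenience, here is a quick symbolic computation (ours) behind the remark: the four critical points of the \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) potential share a single valuation which, for generic coefficient valuations, lies in the interior of an edge of \(\mathrm{Trop}(W)\) (dual to a diagonal of the Newton polytope through the origin) rather than at a vertex.

```python
# A quick symbolic check (ours) of Remark 3.11 for W = a z1 + b/z1 + c z2 + d/z2.
import sympy as sp

a, b, c, d, z1, z2 = sp.symbols('a b c d z1 z2')
W = a*z1 + b/z1 + c*z2 + d/z2
crit = sp.solve([sp.diff(W, z1), sp.diff(W, z2)], [z1, z2], dict=True)
print(crit)   # the four points (±sqrt(b/a), ±sqrt(d/c))

# With a = T^{la}, b = T^{lb}, etc., every critical point has valuation
# ((lb - la)/2, (ld - lc)/2): all four sit over ONE point of Trop(W), which for
# generic la, ..., ld is interior to an edge dual to a diagonal of the Newton
# square through the origin -- so Proposition 3.9(c) does not apply here.
```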
A more straightforward approach in locating critical points would be to investigate the intersection \(\mathrm{Trop}(z_{1}\frac{\partial}{\partial z_{1}}W)\cap\mathrm{Trop}(z_{2} \frac{\partial}{\partial z_{2}}W)\) of the tropicalization of the log partials of \(W\), as in [10]. However, it poses some difficulties for our purpose, such as ensuring transversality of the intersection or determining how many (if any) critical points are supported by non-displaceable fibers and if they are non-degenerate.
## 4. Singularities of mirror Landau-Ginzburg potentials
Combining the existing works of [10] and [1], one has that for a toric surface \(X_{\Sigma}\), the bulk-deformed potential \(W_{\Sigma}^{\Phi}\) has as many non-degenerate critical points as the
rank of quantum cohomology of \(X_{\Sigma}\) (for generic parameters). The main objective of this section is to generalize this result to a larger class of algebraic surfaces, mainly non-toric blowups \(X\) of toric surfaces, and establish closed string mirror symmetry. While our main result pertains to bulk-deformed potentials (with restricted classes of bulk-insertions when \(X\) is non-Fano), we initially concentrate on potentials without bulk for sake of simplicity. Towards the end of the section, we extend our findings to encompass potentials with bulk \(\mathfrak{b}\in H^{4}(X;\Lambda_{+})\).
Our key observation is presented in Proposition 4.2, which essentially asserts that a holomorphic disk with "excessive energy", such as a disk with sphere bubbles, has little contribution to geometric critical points (Definition 2.4), or at least no contribution to their locations. This allows us to handle situations where the full potential may be unknown.
For instance, we will see (Proposition 4.2) that the "minimal energy terms of \(W\)" essentially determine the critical behavior of \(W\). These terms consist of the Hori-Vafa part of \(W\) together with contributions from what we will call basic broken disks (see Definition 4.1 below). We apply the combinatorial tools developed in Section 3 to this leading order part of \(W\).4 We remark that Gonzalez-Woodward [10] also used a similar method to deduce the analogous result for the Hori-Vafa mirror potentials of toric orbifolds.
Footnote 4: In the absence of non-degeneracy, however, one needs to take into account the second order terms as well, which requires a more delicate control on energies. See Proposition 4.14 below.
We first spell out our geometric setup in detail, mainly to fix notations and terminologies.
### Geometric setup
Let \((X_{\Sigma},D_{\Sigma})\) be a smooth compact toric surface (not necessarily Fano) from the complete fan \(\Sigma\) generated by \(v_{i}\), \(i=1,\cdots,N\). Denote by \(D_{\Sigma,1},\ldots,D_{\Sigma,N}\) their corresponding (irreducible) toric divisors. Then its mirror potential takes the form of
\[W_{\Sigma}=\underbrace{\sum_{i=1}^{N}a_{i}z^{v_{i}}}_{W_{\Sigma}^{HV}}+\sum_{ v}a_{v}z^{v}. \tag{4.1}\]
The first summand is the Hori-Vafa potential \(W_{\Sigma}^{HV}\), contributed by disks intersecting one of \(D_{\Sigma,i}\) (see Example 2.1), whereas the second summand consists of disks with sphere bubbles whose disk component possibly has a higher Maslov index. If we set its moment polytope to be
\[\Delta_{\Sigma}:=\left\{x:\langle x,v_{i}\rangle\geq-\lambda_{i}\quad i=1, \cdots,N\right\} \tag{4.2}\]
as in Example 2.1 (which amounts to fixing symplectic affine coordinates), then \(a_{i}\) is given by \(a_{i}=T^{\lambda_{i}}\). We will frequently use the map
\[val:(\Lambda^{\times})^{2}\to\mathbb{R}^{2},\]
and a point \(z\) will be called _geometric_ in the sense that it corresponds to a Lagrangian torus fiber (see (2.4)) if \(x=val(z)\) satisfies (4.2), i.e. \(x\) lies in \(\Delta_{\Sigma}\). Note that this is consistent with Definition 2.4.
A sequence of non-toric blowups on \(X_{\Sigma}\) gives rise to a surface \(X\), coupled with an anticanonical divisor \(D\) proper-transformed from \(D_{\Sigma}\). We require that the symplectic sizes of all exceptional divisors are _sufficiently small_, and that they are generic compared to the toric divisors. That is, we choose our symplectic form \(\omega\) on \(X\) in the class
\[[\omega]=[\pi^{*}\omega_{\Sigma}-\sum\epsilon_{i}E_{i}]\]
as in 2.3 where \(E_{i}\)'s are exceptional classes, and we assume \(\epsilon_{i}\)'s are generic and arbitrarily small. We have a Lagrangian torus fibration with nodal fibers (in one-to-one correspondence with \(\{E_{i}\}\)) which is special away from a small neighborhood of the branch-cut at each nodal fiber.
Consider a Maslov 2 disk \(u\) in \(X\) belonging to the class \(\widetilde{\beta_{v_{j}}}+\sum_{i}a_{i}\beta_{i}\), where \(\widetilde{\beta_{v_{j}}}\) represents the proper transform of the basic disk \(\beta_{v_{j}}\) that intersects \(D_{\Sigma,j}\) exactly once away from the blowup center, and \(\beta_{i}\) is the proper transform of the Maslov 2 disk passing through the blowup point associated with \(E_{i}\) exactly once (see 2.5). Then the area of \(u\) with respect to the above choice of symplectic form is
\[\omega_{\Sigma}(\beta_{v_{j}})+\sum_{i}a_{i}\omega_{\Sigma}(\beta_{i})-\sum_{i }a_{i}\epsilon_{i}. \tag{4.3}\]
Here, we used the same notation \(\beta_{i}\) to denote the disk class in \(X_{\Sigma}\) which is proper-transformed to \(\beta_{i}\). Notice that the symplectic area of \(u\) is strictly smaller than that of its projection to \(X_{\Sigma}\) (the sum of the first two terms in (4.3)), and the area difference tends to zero as the \(\epsilon_{i}\) go to zero. Under the holomorphic/tropical correspondence, such \(u\) corresponds to a broken line whose infinite edge is parallel to \(v_{j}\), and bends nontrivially at the wall induced by \(E_{i}\) if \(a_{i}\neq 0\). To distinguish from basic Maslov 2 disks lifted from \(X_{\Sigma}\), we will refer to such disks as _broken disks_ (of Maslov index 2).
Among Maslov 2 broken disks in \(X\), we will be particularly interested in those described as follows. Recall that the scattering diagram associated with a special Lagrangian fibration on \(X\backslash D\) has a special chamber \(R_{0}\) enclosed by groups of parallel initial rays (Lemma 2.5). Let \(p_{i;1},\cdots,p_{i;l}\) be the points of intersection of the blowup center and the toric divisor \(D_{\Sigma,i}\), each of which gives rise to a wall (a ray) parallel to \(v_{i}\). Due to our specific choice of locations of \(p_{i;j}\) (as outlined in 2.3, Figure 2a), these walls are close enough to the divisor \(D_{\Sigma,i-1}\) where we label generators \(v_{i}\) of \(\Sigma\) counterclockwise. Then the basic disks stemming from \(D_{\Sigma,i-1}\) are glued with Maslov 0 disks (proper transforms of basic disks in \(X_{\Sigma}\) hitting \(p_{i;j}\)'s), resulting in Maslov index 2 broken disks that could potentially enter the region \(R_{0}\). Tropically, these disks are represented by broken lines approaching infinity along the \(-v_{i-1}\)-direction and bending only when encountering the walls induced by the blowups at \(p_{i;j}\)'s.
**Definition 4.1**.: _Let \(p_{i;1},\cdots,p_{i;l}\) be the part of the blowup center lying in \(D_{\Sigma,i}\) for \(\pi:X\to X_{\Sigma}\). A_ **basic broken disk** _is defined to be the Maslov 2 holomorphic disk in \(X\) which is a proper transform of a disk in \(X_{\Sigma}\) that hits \(D_{\Sigma,i-1}\) exactly once and intersects \(D_{\Sigma,i}\) possibly multiple times but only at \(p_{i;j}\)'s. See Figure 5._
Observe that basic broken disks are in the class \(\beta_{v_{i-1}}+k\beta_{v_{i}}\) (more precisely its proper transform under \(\pi:X\to X_{\Sigma}\)) where \(k\) is the number of bends in their corresponding broken lines. These disks are responsible for the terms \(z^{\partial(\beta_{v_{i-1}}+k\beta_{v_{i}})}=z^{v_{i-1}+kv_{i}}\) in the potential. We will see that these basic broken disks have minimal energies among all Maslov 2 disks in \(X\) when the Lagrangian boundary is positioned close enough to the corner formed by \(D_{\Sigma,i-1}\) and \(D_{\Sigma,i}\).
In the chamber \(R_{0}\), the symplectic affine coordinates are pulled back from \(X_{\Sigma}\), providing the mirror coordinates \(z=(x,y)\). Once coordinates are fixed, one can write down the projection of \(z\in\tilde{Y}\) onto the base \(B_{reg}\) as a map from \(\tilde{Y}\) to \(\mathbb{R}^{2}\), which we still denote by \(val\). As discussed in 2.4, this is possible away from a small neighborhood of the branch-cut from each nodal fiber.
Throughout this section, we will work with the potential obtained by counting disks bounding Lagrangians sitting over this chamber. As discussed in 2.4, geometric interpretation, within the context of SYZ mirror symmetry, is present only when considering geometric critical points of the potential \(W\) that resides in the SYZ base. Namely, while \(W\) can a priori be defined on a bigger domain, we will only be interested in critical points whose valuations live in \(B_{reg}\approx\Delta_{\Sigma}\), the moment polytope of \(X_{\Sigma}\). It turns out that when \(R_{0}\) sufficiently covers the interior of the moment polytope (expressed in symplectic affine coordinates), it can be proven that every geometric critical point indeed lies within \(R_{0}\) (Lemma 4.17).
### Geometric critical points and their energy minimizing disks
As mentioned earlier, working with non-Fano surfaces poses several challenges, the obvious one being sphere bubbling phenomena due to divisors with negative Chern number. This results in (possibly infinite) correction terms being added to the Hori-Vafa potential, and it is difficult to write down explicit formulas for the full potential. While there are a few examples where such formulas have been obtained, they are often limited to specific examples or require indirect methods (see [1, 3.2] for an example).
Surprisingly, however, it turns out that the few terms with minimal energy determine the critical behavior of the potential in our situation. In fact, the precise expressions of higher energy terms in the potential are not required for our purpose, as far as the locations of critical points are concerned.
We will use (a) of Proposition 3.9 crucially, which says that, under \(val:X\backslash D\to B\) (defined away from branch-cuts), a critical point of \(W\) maps to a point where at least two (or three if \(\mathcal{S}_{W}\) is locally convenient) Maslov 2 disks attain the minimum energy simultaneously. More concretely, if \(z=\alpha\) is a critical point of \(W\), then there exist distinguished classes \(\beta_{1},\cdots,\beta_{m\geq 2}\in H_{2}(X,L_{val(\alpha)})\) supporting Maslov index 2 (stable) disks, such that at \(z=\alpha\),
\[\omega(\beta_{1})=\cdots=\omega(\beta_{m})\leq\omega(\beta), \tag{4.4}\]
for all \(\beta\) supporting Maslov index 2 disks. For obvious reasons, these disks of class \(\beta_{1},\ldots,\beta_{m}\) will be referred to as _energy minimizing disks_ at \(\alpha\). They give rise to monomials
Figure 5.
\(T^{\delta(\beta_{i})}z^{\partial\beta_{i}}\) for \(1\leq i\leq m\) in the potential, where
\[\delta(\beta_{i}):=\omega(\beta_{i})-\langle val(\alpha),\partial\beta_{i}\rangle. \tag{4.5}\]
For instance, we have \(\delta(\beta)=\lambda_{i}\) when \(\beta\) is a proper transform of a basic disk in class \(\beta_{v_{i}}\), but does not intersect any points in the blowup center. For simplicity, disks in such classes will still be referred to as basic disks when there is no danger of confusion. In general, \(\delta(\beta)\) for a class \(\beta\) supporting a stable holomorphic disk away from exceptional divisors is a linear combination of \(\lambda_{j}\) and \(\omega(D_{\Sigma,j})\) (\(j=1,\cdots,N\)) over nonnegative integers.
**Proposition 4.2**.: _Let \(\alpha\) be a geometric critical point of \(W\), and let \(\beta_{1},\cdots,\beta_{m}\) be energy minimizing disks given as in (4.4), which are responsible for the terms \(T^{\delta(\beta_{1})}z^{\partial\beta_{1}},\cdots,T^{\delta(\beta_{m})}z^{\partial\beta_{m}}\) with \(\delta(\beta_{i})\) given in (4.5). Then each \(\beta_{i}\) must be the class of a basic disk (from \(X_{\Sigma}\)) or of a basic broken disk (Definition 4.1). In particular, it cannot have any sphere bubbles._
Proof.: Since we are only interested in geometric critical points, the valuation \(val(\alpha)\) of \(\alpha\) must satisfy
\[\langle val(\alpha),\nu_{i}\rangle+\lambda_{i}\geq 0 \tag{4.6}\]
for all \(i\) so that it lies inside \(\Delta_{\Sigma}\) (4.2). On the other hand, applying (4.4) to the basic disks in \(X_{\Sigma}\) (or more precisely, their proper-transforms in \(X\)), we have
\[\langle val(\alpha),\partial\beta_{1}\rangle+\delta(\beta_{1})=\cdots=\langle val (\alpha),\partial\beta_{m}\rangle+\delta(\beta_{m})\leq\langle val(\alpha), \nu_{i}\rangle+\lambda_{i} \tag{4.7}\]
for \(i=1,\cdots,N\).
Since \(\nu_{1},\cdots,\nu_{N}\) generate the complete fan \(\Sigma\), \(\partial\beta_{i}:=(p,q)\) is contained in some cone of the fan, say the cone spanned by \(\nu_{1}\) and \(\nu_{2}\), denoted by
\[\operatorname{Cone}(\nu_{1},\nu_{2}):=\big{\{}a\nu_{1}+b\nu_{2}:a,b\in\mathbb{Z}_{\geq 0}\big{\}}.\]
After appropriate affine coordinate change, we may assume that \(\nu_{1}=(1,0)\), \(\nu_{2}=(0,1)\) and \(\lambda_{1}=\lambda_{2}=0\). In this case all other \(\lambda_{i}\)'s must be strictly positive, as they represent (the limits of) energies of basic disks \(\beta_{v_{i}}\) whose boundary lies in the fiber over the origin of the moment polytope, stemming from a divisor inside the first quadrant of \(\mathbb{R}^{2}\). Since \(\delta(\beta_{i})\) is a linear combination of \(\lambda_{j}\)'s and \(\omega(D_{\Sigma,j})\)'s with nonnegative coefficients, it follows that \(\delta(\beta_{i})\geq 0\).
Applying the inequalities (4.6) and (4.7) to \(\nu_{1}=(1,0)\) and \(\nu_{2}=(0,1)\), we see that \((x_{0},y_{0}):=val(\alpha)\) satisfies
\[x_{0}\geq 0,\quad y_{0}\geq 0,\quad px_{0}+qy_{0}+\delta(\beta_{i})\leq x_{0}, \quad px_{0}+qy_{0}+\delta(\beta_{i})\leq y_{0},\quad p,q\geq 0. \tag{4.8}\]
We claim that \(\alpha\) is a geometric critical point only if \(\delta(\beta_{i})=0\). We first assume that the sizes of the exceptional divisors arising from the non-toric blowup \(X\to X_{\Sigma}\) are zero, \(\epsilon_{j}=0\), which can be thought of as \(X\) equipped with a degenerate Kähler form.
_(Case I: \(p=0\))._ Note that since \((0,0)\notin\operatorname{supp}W\), we must have \(q\geq 1\).
\[px_{0}+qy_{0}+\delta(\beta_{i})\leq y_{0}\ \Leftrightarrow\ \underbrace{(q-1)y_{0}}_{\geq 0}+ \underbrace{\delta(\beta_{i})}_{\geq 0}\leq 0\ \Leftrightarrow\ (q-1)y_{0}=\delta(\beta_{i})=0.\]
_(Case II: \(p\geq 1\))._ Similarly,
\[px_{0}+qy_{0}+\delta(\beta_{i})\leq x_{0}\ \Leftrightarrow\ \underbrace{(p-1)x_{0}}_{\geq 0}+ \underbrace{qy_{0}}_{\geq 0}+\underbrace{\delta(\beta_{i})}_{\geq 0}\leq 0\ \Leftrightarrow\ (p-1)x_{0}=qy_{0}=\delta(\beta_{i})=0.\]
If the \(\epsilon_{j}\) are nonzero, then the left-hand sides of the inequalities above acquire additional \(-\epsilon_{j}\) terms. Still, they can be nonpositive only when \(\delta(\beta_{i})=0\), since any nonzero \(\delta(\beta_{i})\) dominates the \(\epsilon_{j}\), which are arbitrarily small.
Notice that \(\delta(\beta_{i})=0\) if and only if \(\pi_{*}(\beta_{i})(\in H_{2}(X_{\Sigma},\pi(L_{u})))\) is a linear combination of \(\beta_{v_{1}}\) and \(\beta_{v_{2}}\)'s. First of all, we see that \(\beta_{i}\) cannot involve sphere-bubble components, as otherwise it could only be attached with some exceptional classes (in \(\ker\pi_{*}\)) which have positive Chern numbers contradicting \(\mu(\beta_{i})=2\).
If \(\pi_{*}(\beta_{i})=\beta_{v_{1}}\) or \(\pi_{*}(\beta_{i})=\beta_{v_{2}}\), then it is a basic disk. Now suppose that \(\pi_{*}(\beta_{i})=a_{1}\beta_{v_{1}}+a_{2}\beta_{v_{2}}\) with \(a_{1},a_{2}>0\). In order for \(\beta\) to be of Maslov \(2\) in \(X\), the only possibility is that all of \(\beta_{v_{1}}\) and \(\beta_{v_{2}}\) involved are vanishing thimbles emanating from nodal fibers (initial rays) except one which intersects \(D_{\Sigma}\) once. Due to our special choice of the chamber \(R_{0}\), this happens only when \(a_{1}=1\) and \(\beta_{i}\) is a class of a basic broken disk. It is obvious that basic broken lines are the only broken lines bending only at walls normal to \(D_{v_{1}}\) and \(D_{v_{2}}\) still entering \(R_{0}\). In conclusion,
\[\pi_{*}(\beta_{i})=\beta_{v_{1}},\beta_{v_{2}},\beta_{v_{1}}+k\beta_{v_{2}} \tag{4.9}\]
where \(k\geq 1\) is at most the number of blowup points located in \(D_{\Sigma,2}\). Applying the above argument on each cone \(\operatorname{Cone}(v_{j},v_{j+1})\), our proof is complete.
We denote by \(W_{\min}\) the sum of contributions of basic disks and basic broken disks. In other words, this is the collection of all (low energy) terms of \(W\) that can possibly have the minimal energy at some geometric critical points. More concretely, \(W_{\min}\) can be expressed as follows. Let \(v_{1},\cdots,v_{N}\) in the fan \(\Sigma\) be counterclockwise ordered. We define \(\mathcal{A}_{j}\) as
\[\mathcal{A}_{j}:=\Big{\{}\,\beta_{v_{j}}+k\beta_{v_{j+1}}\ :\ 1\leq k\leq \big{|}\text{blowup points in }D_{\Sigma,j+1}\big{|}\,\Big{\}}\]
so that \(\partial\mathcal{A}_{j}\subset\operatorname{Cone}(v_{j},v_{j+1})\). In other words, \(\mathcal{A}_{j}\) is the set of all disk classes with boundaries in \(\operatorname{Cone}(v_{j},v_{j+1})\) that can be represented by basic broken lines (bending at walls perpendicular to \(D_{\Sigma,j+1}\)). Then we have
\[W_{\min}:=W_{\Sigma}^{HV}+\sum_{j=1}^{N}\sum_{\pi_{*}(\beta)\in\mathcal{A}_{j}}N_{\beta}T^{\delta(\beta)}z^{\partial\beta}\]
where, as before, \(N_{\beta}\) is the count of disks in class \(\beta\) passing through a generic point in \(L_{u}\). Observe that we can have \(z^{\partial\beta}=z^{\partial\beta^{\prime}}\) for different \(\beta\) and \(\beta^{\prime}\). This happens when both \(\beta\) and \(\beta^{\prime}\) are basic broken disks which project to the same class in \(X_{\Sigma}\) (i.e., \(\pi_{*}(\beta)=\pi_{*}(\beta^{\prime})\)), but pass through different points in the blowup center. The exponents appearing in \(W_{\min}\) are boundaries of the classes in (4.9). Note that the expression of the leading terms \(W_{\min}\) in \(W\) remains constant on \(R_{0}\) even when \(X\) is non-Fano, as mentioned in Remark 2.6.
On the other hand, by the duality in Proposition 3.6, a monomial of \(W\) is assigned to each chamber adjacent to a given point \(val(\alpha)\in\operatorname{Trop}(W)\) in such a way that it has a minimal valuation at \(\alpha\) among all monomials of \(W\). Proposition 4.2 implies that any such monomial should appear in \(W_{\min}\) if \(\alpha\) is a geometric critical point. Thus we have:
**Corollary 4.3**.: _For a geometric critical point \(\alpha\) of \(W\), \(val(\alpha)\) lies on \(\operatorname{Trop}(W_{\min})\)._
**Remark 4.4**.: _Indeed, the above discussion implies that \(\operatorname{Trop}(W_{\min})\) agrees with \(\operatorname{Trop}(W)\) on a compact region containing all the geometric critical points. For this reason, we will not distinguish these two when there is no danger of confusion._
According to Proposition 4.2, we can divide geometric critical points of \(W\) into two different types depending on whether their associated energy minimizing disks contain (basic) broken disks or not. Let us first consider the case where the energy minimizers include at least one basic broken disk. Such geometric critical points will be called _non-toric critical points_. We have the following lemma for non-toric critical points, by further examining the inequalities of _(Case I)_ and _(Case II)_ in the proof of Proposition 4.2.
**Lemma 4.5**.: _Let \(\alpha\) be a geometric critical point of \(W\). If one of energy minimizing disks at \(\alpha\) maps to a basic broken disk of class \(\beta_{\nu_{j}}+k\beta_{\nu_{j+1}}\) under \(\pi_{*}\), then \(\alpha\) lies arbitrarily close to a corner \(D_{\Sigma,j}\cap D_{\Sigma,j+1}\) of the moment polytope \(\Delta_{\Sigma}\)._
Proof.: We first prove that \(\mathit{val}(\alpha)=(x_{0},y_{0})\) sits exactly at the corner when the sizes of the exceptional divisors are \(0\). As before, we assume \(\partial\beta_{i}\in\mathrm{Cone}(\nu_{1},\nu_{2})\) and we set \(\nu_{1}=(1,0)\), \(\nu_{2}=(0,1)\) and \(\lambda_{1}=\lambda_{2}=0\) by a suitable affine coordinate change. If \(\beta_{i}\) represents a basic broken disk \(\pi_{*}(\beta_{i})=\beta_{\nu_{1}}+k\beta_{\nu_{2}}\), then the inequality (4.8) becomes:
\[\underbrace{(1-1)x_{0}}_{=0}+ky_{0}+\underbrace{\delta(\beta_{i})}_{=0}\leq 0,\quad x_{0}+\underbrace{(k-1)y_{0}}_{=0}+\underbrace{\delta(\beta_{i})}_{=0} \leq 0.\]
We have \(y_{0}=0\), \(x_{0}=0\) from the first and second inequality, respectively. When exceptional divisors have positive, but small sizes, then \(x_{0}\) and \(y_{0}\) become small positive numbers, since the left hand sides of the inequalities additionally have \(-\epsilon_{j}\)'s (which account for \(\omega(\beta_{i})-\pi^{*}\omega_{\Sigma}(\beta_{i})<0\)).
This leads to the following classification of possible energy minimizing disks for a non-toric critical point. In fact, having at least one basic broken disk in the minimizers significantly restricts types of other energy minimizing disks.
**Proposition 4.6**.: _Let \(\alpha\) be a critical point of \(W\) such that one of associated energy minimizing disks lies in \(\mathcal{A}_{j}=\Big{\{}\,\beta_{\nu_{j}}+k\beta_{\nu_{j+1}}\,:\,1\leq k\leq \big{|}\)blowup points in \(D_{\Sigma,j+1}\big{|}\,\Big{\}}\) after the projection \(\pi_{*}\). Then \(\alpha\) is geometric only if all the other minimizers have classes lying in \(\mathcal{A}_{j}\cup\{\beta_{\nu_{j}},\beta_{\nu_{j+1}}\}\)._
We will perform a detailed local model calculation for such non-toric critical points later in 4.4.
Proof.: Without loss of generality, let \(j=1\), i.e., there exists an energy minimizer at \(\alpha\) lying in \(\mathcal{A}_{1}\). As in the proof of Proposition 4.2, we may assume, after coordinate changes, that \(\nu_{1}=(1,0)\), \(\nu_{2}=(0,1)\), and \(\lambda_{1}=\lambda_{2}=0\). By Lemma 4.5, we have \(\mathit{val}(\alpha)=(0,0)\) when the sizes of the exceptional divisors tend to zero.
Now suppose that there exists a class \(\beta\) that has minimal energy at the critical point \(\alpha\). Then from (4.6) and (4.7), we have
\[\langle\mathit{val}(\alpha),\partial\beta\rangle+\delta(\beta)=\ \delta(\beta)\leq 0\ = \langle\mathit{val}(\alpha),(1,0)\rangle+\lambda_{1}\]
in the limit. This implies \(\delta(\beta)=0\), which is possible only when \(\pi_{*}(\beta)\in\mathcal{A}_{1}\cup\big{\{}\beta_{(1,0)},\beta_{(0,1)}\big{\}}\), precisely by the same argument as in the proof of Proposition 4.2. Therefore, in order for \(\alpha\) to be geometric, the \(\pi_{*}\)-image of every minimizer should lie in \(\mathcal{A}_{1}\cup\big{\{}\beta_{(1,0)},\beta_{(0,1)}\big{\}}\).
We next look into geometric critical points all of whose associated energy minimizing disks are basic disks. They will be referred to as _toric critical points_ making obvious distinction from non-toric critical points discussed previously.
From the valuation perspective, these toric critical points can be completely captured from critical points of \(W^{HV}_{\Sigma}\) (Corollary 4.3), but there is still a subtlety, especially when \(X_{\Sigma}\) is non-Fano. Namely, the number of critical points of \(W^{HV}_{\Sigma}\) calculated by Kushnirenko's theorem (Theorem 3.2) still exceeds the expected number of (geometric) toric critical points. Indeed, as long as the resulting surface is semi-Fano, a toric blowup changes the Newton polytope by attaching a unimodal 2-cell whose vertices correspond to the two divisors containing the blowup point and the exceptional divisor. The 2-cell has normalized volume 1 since
the exceptional divisor has self-intersection \(-1\). As a result, we obtain exactly one new non-degenerate critical point of \(W^{HV}_{\Sigma}\) after blowup in this case.
However, if \(X_{\Sigma}\) becomes non-Fano after a blowup, the convexity of the Newton polytope forces a region of bigger volume to be added to the Newton polytope, thereby increasing the number of the critical points of \(W^{HV}_{\Sigma}\) (as a function on \((\Lambda^{\times})^{2}\)) beyond the rank of \(QH^{*}(X_{\Sigma})\) (see Figure 6). We therefore show that no geometric critical points arise from such an extra region in the Newton polytope (of a non-Fano surface). More concretely, this region consists of triangles with vertices \(v_{i}\), \(v_{j}\) and \(v_{k}\) satisfying
\[\det\Bigl{|}\begin{matrix}v_{i}-v_{j}\\ v_{k}-v_{j}\end{matrix}\Bigr{|}>0, \tag{4.10}\]
(see Figure 7(a)), where \(v_{i}\), \(v_{j}\) and \(v_{k}\) appear in \(\Sigma\) in counterclockwise order.
**Remark 4.7**.: _The issue of the excess number of critical points of \(W^{HV}_{\Sigma}\) in the non-Fano situation has already been observed in [1] (in arbitrary dimension), where non-geometric critical points of \(W^{HV}_{\Sigma}\) were excluded implicitly through a similar consideration on energies of disks. We still work with the full potential \(W\), while we reduce the argument to \(W^{HV}_{\Sigma}\) by a refined estimate._
**Proposition 4.8**.: _Let \(\alpha\) be a critical point of \(W\) such that every energy minimizing disk is a basic disk. Then \(\alpha\) is geometric only if for any three classes \(\beta_{v_{i}}\), \(\beta_{v_{j}}\), and \(\beta_{v_{k}}\) among the minimizers, the inequality_
\[\det\Bigl{|}\begin{matrix}\partial\beta_{v_{i}}-\partial\beta_{v_{j}}\\ \partial\beta_{v_{k}}-\partial\beta_{v_{j}}\end{matrix}\Bigr{|}=\det\Bigl{|}\begin{matrix}v_{i}-v_{j}\\ v_{k}-v_{j}\end{matrix}\Bigr{|}\leqslant 0,\]
_holds, whenever \(v_{i}\), \(v_{j}\) and \(v_{k}\) are arranged in \(\Sigma\) in the counterclockwise order._
We emphasize that \(v_{i}\), \(v_{j}\), and \(v_{k}\) need not be adjacent.
Proof.: Let \(\alpha\) be a geometric critical point of \(W\) with \(val(\alpha)=(x_{0},y_{0})\). Choose any energy minimizing basic disks whose classes project to \(\beta_{v_{i}}\), \(\beta_{v_{j}}\) and \(\beta_{v_{k}}\) under \(\pi_{*}\), and suppose that their boundaries \(v_{i}\), \(v_{j}\) and \(v_{k}\) appear in counterclockwise order. After appropriate coordinate changes, one has \(v_{i}=(a,b)\), \(v_{j}=(0,1)\) and \(v_{k}=(c,d)\) with \(a>0\) and \(c<0\) (so that they are in counterclockwise order). Note that \(a\neq 0\), \(c\neq 0\). We may assume that the
Figure 6. Excessive region added when \(X_{\Sigma}\) becomes non-Fano.
edge of the moment polytope \(\Delta_{\Sigma}\) dual to \(\nu_{j}\) contains the origin in its interior by a suitable translation of symplectic affine coordinates, that is, \(\lambda_{j}=\delta(\beta_{j})=0\) and \(\lambda_{l}>0\) for all \(l\neq j\). In particular, \(\Delta_{\Sigma}\) lies in the upper half plane \(\{y\geq 0\}\). Figure 7 shows the setup.
Now suppose that on the contrary, \(\det\Bigl{|}\begin{matrix}\nu_{i}-\nu_{j}\\ \nu_{k}-\nu_{j}\end{matrix}\Bigr{|}=\det\Bigl{|}\begin{matrix}a&b-1\\ c&d-1\end{matrix}\Bigr{|}>0\). Since we begin with disk classes simultaneously attaining minimum energy, (4.7) reads in our case
\[y_{0}=ax_{0}+by_{0}+\lambda_{i}=cx_{0}+dy_{0}+\lambda_{k}\]
Solving this equation, we find
\[y_{0}=\frac{c\,\lambda_{i}-a\,\lambda_{k}}{a(d-1)-c(b-1)}<0\]
since \(\lambda_{i},\lambda_{k}>0\) due to our choice of symplectic affine coordinates, and hence \((x_{0},y_{0})\) lies outside the moment polytope. Thus, \(\alpha\) cannot be geometric.
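The linear algebra above is elementary but easy to get wrong by a sign; the following short symbolic computation (ours, assuming sympy is available) reproduces it.

```python
# Symbolic check (ours) of the linear solve in the proof of Proposition 4.8.
import sympy as sp

x0, y0, a, b, c, d, li, lk = sp.symbols('x0 y0 a b c d lambda_i lambda_k')
sol = sp.solve([sp.Eq(y0, a*x0 + b*y0 + li), sp.Eq(y0, c*x0 + d*y0 + lk)], [x0, y0])
# The y0-component simplifies to (c*lambda_i - a*lambda_k) / (a*(d-1) - c*(b-1)).
print(sp.simplify(sol[y0] - (c*li - a*lk)/(a*(d - 1) - c*(b - 1))))   # 0
# With a > 0, c < 0, lambda_i, lambda_k > 0 and a(d-1) - c(b-1) > 0, the
# numerator is negative and the denominator positive, so y0 < 0 as claimed.
```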
The part that has to be ruled out by Proposition 4.8 is shown in Figure 8a, whereas Figure 8b depicts the region ruled out by Proposition 4.6. After filtering out all non-geometric critical points, what remains is the star-shaped region obtained by joining heads of \(\nu_{i}\)'s.
It is worthwhile to mention that if we replace the ordinary convex hull (for the Newton polytope) by this star-shaped region, we can extract only the geometric critical points, removing any non-geometric ones. In fact, as we will see in the subsequent subsections, the (normalized) volume of this star-shaped region coincides with the expected number of geometric critical points. In other words, this star-shaped replacement of the Newton polytope precisely excludes the region where the classical Kushnirenko Theorem fails in our geometric context. From this perspective, our work so far can be seen as a modified version of the Kushnirenko theorem concerning the number of _geometric_ critical points.
In fact, the following Lemma 4.9 asserts that for generic parameters, the Newton subdivision of this star-shaped Newton polytope is always a locally convenient unimodal triangulation (apart from the central cell(s) containing the origin). Lemma 4.9 plays a crucial role in the proof of our main theorem (Theorem 4.22), where it guarantees that generically, the potential \(W_{\Sigma}\) of the toric model \((X_{\Sigma},D_{\Sigma})\) is not only Morse, but also has distinct critical values.
Figure 7. The moment polytope after coordinate change.
**Lemma 4.9**.: _For generic parameters, the tropicalization \(\operatorname{Trop}(W)\) (restricted to the compact region containing only geometric critical points) is trivalent at every vertex, all of which support distinct critical points of \(W\). In fact, for generic parameters, each vertex is of weight \(1\), with the possible exception of the single vertex dual to the convenient \(2\)-cell \(\sigma_{0}\) containing the origin in its interior, or the two vertices dual to the non-convenient \(2\)-cells adjacent to the edge containing the origin._
Recall that a tropical curve is called _smooth_ if all of its vertices are of weight \(1\), or equivalently, if its dual Newton subdivision consists of only unimodal cells.
Proof.: Note that by Corollary 4.3 and Remark 4.4, it suffices to work with \(W_{\min}\) instead of \(W\). Let \(\mathcal{S}_{W_{\min}}\) be the Newton subdivision of the leading term potential \(W_{\min}\). We denote by \(\sigma_{0}\in\mathcal{S}_{W_{\min}}\) the cell which contains the origin. Note that \(\sigma_{0}\) can be either a \(2\)-cell or a \(1\)-cell (i.e. an edge containing the origin).
Propositions 4.6 and 4.8 guarantee that a \(2\)-cell \(\sigma\in\mathcal{S}_{W_{\min}}\) is convenient if \(\sigma\neq\sigma_{0}\) when \(\sigma_{0}\) is a \(2\)-cell (or \(\sigma\cap\sigma_{0}=\emptyset\) when \(\sigma_{0}\) is a \(1\)-cell). Hence for any such \(2\)-cell \(\sigma\), there exist _at least three_ disk classes, say \(\beta_{v_{i_{1}}},\ldots,\beta_{v_{i_{k}}}\), which simultaneously attain minimum valuation at \(V=\sigma^{*}\). Let \(\alpha\) be a geometric critical point supported at this vertex \(V\). We have our usual energy minimizing equality
\[\langle val(\alpha),v_{i_{1}}\rangle+\lambda_{i_{1}}=\cdots=\langle val(\alpha),v_{i_{k}}\rangle+\lambda_{i_{k}}\leqslant\langle val(\alpha),v_{i}\rangle+\lambda_{i}\]
for \(i=1,\ldots,N\). But since the defining equalities of the moment polytope
\[\langle val(\alpha),v_{i}\rangle+\lambda_{i}\geqslant 0\]
are open conditions, \(\alpha\) would still be a geometric critical point even after we perturb the \(\lambda_{i}\)'s slightly. Hence for generic coefficients, there are exactly three disks which have minimum energy at any given geometric critical point. In other words, the Newton subdivision \(\mathcal{S}_{W_{\min}}\) is generically a triangulation of \(\Delta_{W_{\min}}\) which is locally convenient, except possibly at the unique edge containing the origin. In terms of the tropicalization, this is equivalent to saying that all vertices are trivalent.
We further claim that if the origin is not contained in (the closure of) the cell \(\sigma\in\mathcal{S}_{W_{\min}}\), then \(\sigma\) is unimodal. Suppose that a \(2\)-cell \(\sigma\) has \(2\mathrm{Vol}(\sigma)>1\), and denote by \(v_{i}\), \(v_{j}\), \(v_{k}\) its three vertices. Note that this implies that \(\sigma\) contains at least one extra lattice point,
Figure 8. Classification of geometric critical points.
say \(\nu_{i_{0}}=(p,q)\), other than \(\nu_{i}\), \(\nu_{j}\), and \(\nu_{k}\). But since any toric surface \(X_{\Sigma}\) is obtained via a sequence of blowups from \(\mathbb{F}_{k}\), it is clear that \(\mathcal{S}_{W_{\min}}\) can be obtained by continuously deforming an existing unimodal configuration (constructed later in Proposition 4.12 and Theorem 4.16), during which no lattice points are lost in order for the fan to remain simplicial. Hence we may assume \(\nu_{i_{0}}\in\operatorname{supp}W_{\min}\), which is to say that there exists a disk class \(\beta_{i_{0}}\in H_{2}(X_{\Sigma},L)\) with \(\partial\beta_{i_{0}}=\nu_{i_{0}}\). Let us denote its corresponding term in the potential by \(a_{i_{0}}z^{\partial\beta_{i_{0}}}\) where \(a_{i_{0}}\in\Lambda\) with \(\operatorname{\mathit{val}}(a_{i_{0}})=\lambda_{i_{0}}\). Note that it suffices to show that \(\nu_{i_{0}}\in\mathcal{S}_{W_{\min}}\) has at least one connected edge.
Suppose that \(\nu_{i_{0}}=(p,q)\) is not joined to any edge of \(\mathcal{S}_{W_{\min}}\). In tropical geometry terms, this implies that \(\operatorname{\mathit{val}}(a_{i_{0}}z^{\partial\beta_{i_{0}}})\) is never the minimum. Equivalently,
\[\langle(x,y),\,\nu_{i_{0}}\rangle+\lambda_{i_{0}}\geqslant\langle(x,y),\,\nu _{i}\rangle+\lambda_{i}\quad\text{for all $(x,y)\in\mathbb{R}^{2}$, $i=1,\ldots,N$,}\]
where equality holds only if \(i=i_{0}\). Notice that this is true only if \(\lambda_{i_{0}}=\infty\), that is, \(a_{i_{0}}=0\). Hence \(\nu_{i_{0}}=\partial\beta_{i_{0}}\notin\operatorname{supp}W_{\min}\) which is a contradiction.
**Remark 4.10**.: _Consider the case where there exist precisely three energy minimizing disks at a vertex \(\alpha\) of \(\operatorname{Trop}(W)\), and suppose that the corresponding triangle is convenient. It is easy to prove by direct calculation that there are as many distinct critical points over \(\alpha\) as the normalized volume of the triangle. We will see in Proposition 4.14 that an analogous statement is true for non-convenient cells._
### Toric blowups of \((X_{\Sigma},D_{\Sigma})\)
Let us first consider the toric blowup of a given toric surface, and study how the critical behavior of its mirror (Floer) potential changes. Here, the toric blowup refers to a blowup of a toric surface \(X_{\Sigma}\) at one of the nodal points, say \(D_{\Sigma,j}\cap D_{\Sigma,j+1}\), in the toric divisor \(D_{\Sigma}\). It modifies the fan \(\Sigma\) by adding an additional primitive generator \(\nu^{\prime}\) between \(\nu_{j}\) and \(\nu_{j+1}\). Note that \(\nu^{\prime}\) is uniquely determined by requiring that the resulting fan \(\tilde{\Sigma}\) is still smooth: \(\nu^{\prime}=\nu_{j}+\nu_{j+1}\).
When performing (a sequence of) toric blowups, there is a caveat concerning the choice of the sizes of the blowups. Namely, if \(X_{\tilde{\Sigma}}\) is obtained from a toric blowup on \(X_{\Sigma}\) and if the blowup size is not "small enough", the Newton subdivision \(\mathcal{S}_{W_{\Sigma}^{HV}}\) may not be preserved. In other words, \(\mathcal{S}_{W_{\tilde{\Sigma}}^{HV}}\) might not contain the cells from \(\mathcal{S}_{W_{\Sigma}^{HV}}\). This is problematic, since critical points existing prior to the blowup would then no longer extend in a valuation-preserving manner. We therefore require the exceptional divisor to be "small enough", a condition made precise in (4.12) during the proof of Proposition 4.12.
**Remark 4.11**.: _If \(X_{\Sigma}\) is obtained from a sequence of blowups on a Hirzebruch surface \(\mathbb{F}_{k}\), it would be convenient to have all blowup sizes sufficiently small. This ensures that the polyhedral decomposition \(\mathcal{S}_{W_{\Sigma}^{HV}}\) not only contains the cells in \(\mathcal{S}_{W_{\mathbb{F}_{k}}^{HV}}\) but also includes all cells that appear during the inductive sequence of blowups starting from \(\mathcal{S}_{W_{\mathbb{F}_{k}}^{HV}}\). However, we will make no such assumptions, and only assume that the potential \(W_{\Sigma}^{HV}\) is Morse. In fact, Lemma 4.9 and Proposition 4.14 tell us that \(W_{\min}\) (and hence \(W_{\Sigma}^{HV}\)) is indeed Morse for generic parameters, and that critical points of \(W_{\min}\) extend to those of its blowup even if they are not locally convenient._
**Proposition 4.12**.: _Let \((X_{\Sigma},D_{\Sigma})\) be a smooth toric surface and let \(X_{\tilde{\Sigma}}\) be a toric surface obtained by blowing up \(X_{\Sigma}\) at one of the nodal points \(D_{\Sigma,j}\cap D_{\Sigma,j+1}\) in \(D_{\Sigma}\). Then there exists \(r>0\) depending on \(\omega\) such that if the size of the exceptional divisor is less than \(r\), then \(W_{\tilde{\Sigma}}\) has precisely \(1\) new geometric critical point which is non-degenerate. Moreover, if \(W_{\Sigma}\) is Morse, then \(W_{\tilde{\Sigma}}\) is also Morse._
Here, 'new critical point' means that apart from this, there is a one-to-one correspondence between the geometric critical points of \(W_{\Sigma}\) and those of \(W_{\tilde{\Sigma}}\). We will see that the correspondence matches the lowest energy terms of the potential at the corresponding pair of critical points of \(W_{\Sigma}\) and \(W_{\tilde{\Sigma}}\).
Proof.: As before, it suffices to work with \(W_{\text{min}}\) (which in this case is just the Hori-Vafa part) instead of \(W_{\tilde{\Sigma}}\). Without loss of generality, let \(X_{\tilde{\Sigma}}\) be the blowup of \(X_{\Sigma}\) at the corner \(D_{\Sigma,1}\cap D_{\Sigma,2}\) and assume \(\nu_{1}=(1,0)\), \(\nu_{2}=(0,1)\) and \(\lambda_{1}=\lambda_{2}=0\). Then the new primitive generator of \(\tilde{\Sigma}\) has to be \(\nu^{\prime}=\nu_{1}+\nu_{2}=(1,1)\). Moreover, the Hori-Vafa potential for \(X_{\tilde{\Sigma}}\) with this choice of coordinates is by definition given as
\[W_{\tilde{\Sigma}}^{HV} =W_{\Sigma}^{HV}+T^{-\eta}z_{1}z_{2}\] \[=z_{1}+z_{2}+T^{-\eta}z_{1}z_{2}+\sum_{i=3}^{N}T^{\lambda_{i}}z^{v_{i}} \tag{4.11}\]
for some \(\eta>0\), where the terms \(z_{1}\) and \(z_{2}\) are contributed by the basic disks in the classes \(\beta_{\nu_{1}}\) and \(\beta_{\nu_{2}}\), and \(\eta\) is the size of the toric blowup.
The new critical point can be obtained by solving the critical point equation for the first three terms in (4.11) and applying Lemma 3.8, once there is a guarantee that the last summand \(\sum_{i=3}^{N}T^{\lambda_{i}}z^{v_{i}}\) in (4.11) has higher energy than these three. For this reason we restrict the size of the blowup so that every toric critical point \(\alpha_{j}\) with \(val(\alpha_{j})=(x_{j},y_{j})\) lies in either of the shaded regions in Figure 9. Specifically, we require that the size of the toric blowup \(\eta\) satisfies
\[0<\eta<r:=\min_{j}\max\{x_{j},y_{j}\}. \tag{4.12}\]
This ensures that \(\beta_{\nu^{\prime}}\) does not become an energy minimizing disk of any existing critical points of \(W_{\Sigma}\). Indeed, for any toric critical point \(\alpha_{j}\) with an energy minimizer \(\beta_{\nu_{k}}\) (\(1\leqslant k\leqslant N\)),
\[\big{\langle}(x_{j},y_{j}),\nu_{k}\big{\rangle}+\lambda_{k}\,\leqslant\,\min \{x_{j},y_{j}\}<x_{j}+y_{j}-\eta,\quad\text{for all}\,\,\,\,j \tag{4.13}\]
where the first inequality says that the area of \(\beta_{\nu_{k}}\) does not exceed those of \(\beta_{\nu_{1}}\) and \(\beta_{\nu_{2}}\).
Now suppose the blowup size \(\eta\) is less than \(r\).
_(i)_: We first show that every critical point of \(W_{\Sigma}\) extends to a critical point of \(W_{\tilde{\Sigma}}\) in a valuation-preserving manner.
1. If \(W_{\Sigma}^{HV}\) satisfies local convenience (Definition 3.10), then each critical point of \(W_{\Sigma}^{HV}\) has a corresponding critical point of \(W_{\tilde{\Sigma}}^{HV}\) which shares the same lowest energy term. This is because every vertex of \(\operatorname{Trop}(W_{\Sigma}^{HV})\) is also a vertex of \(\operatorname{Trop}(W_{\tilde{\Sigma}}^{HV})\) by the above discussion. On the other hand, higher energy terms in the full potential do not affect the valuations of geometric critical points by Proposition 4.2, as before.
2. If \(W_{\Sigma}^{HV}\) is not locally convenient, we will prove separately in Proposition 4.14 that \(W_{\Sigma}^{HV}\) fails to be locally convenient over a unique edge \(e\) (4.20) of its tropicalization, and that this edge supports four critical points of \(W_{\Sigma}\), each of which extends to a critical point of \(W_{\tilde{\Sigma}}\) in a valuation-preserving manner. For the other critical points, we still have local convenience, and hence can proceed as in (a).
We conclude that every critical point \(\alpha_{j}\) of \(W_{\Sigma}\) extends to a critical point of \(W_{\tilde{\Sigma}}\) keeping the same lowest energy terms.
_(ii)_ We next seek a new critical point. Observe that \(\beta_{\nu_{1}}\), \(\beta_{\nu_{2}}\), and \(\beta_{\nu^{\prime}}=\beta_{\nu_{1}}+\beta_{\nu_{2}}\) are the only energy minimizing disk classes at the critical point \(\alpha_{0}=(-T^{\eta},-T^{\eta})\) of \(z_{1}+z_{2}+T^{-\eta}z_{1}z_{2}\) (with their energies being \(\eta\)). This is because if the equality
\[\eta=\underbrace{\langle\mathit{val}(\alpha_{0}),\ \nu_{k}\rangle+\lambda_{k}}_{\omega_{\Sigma}(\beta_{\nu_{k}})}\]
holds for some \(k\neq 1,2\), then \(\beta_{\nu^{\prime}}\) would be a minimizer at a critical point of \(W_{\Sigma}^{HV}\) determined by \(\beta_{\nu_{1}}\), \(\beta_{\nu_{2}}\) and \(\beta_{\nu_{k}}\), contradicting (4.13).
Note that \(\nu_{1}\), \(\nu_{2}\), and \(\nu_{1}+\nu_{2}\) form a convenient unimodal 2-cell, implying that \(\alpha_{0}\) is indeed non-degenerate. (In particular, the Hori-Vafa potential \(W_{\tilde{\Sigma}}^{HV}\) is locally convenient at the new vertex \(\mathit{val}(\alpha_{0})\).) By Lemma 3.8, \(\alpha_{0}\) uniquely extends to a critical point \(\alpha\) of \(W_{\tilde{\Sigma}}\).
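For a quick independent check of this leading-order computation, the following small sympy sketch (an illustration only; the Novikov parameter \(T\) is treated as a formal positive symbol `t`, and the blowup size \(\eta\) is set to a hypothetical rational value) solves the critical point equation of \(z_{1}+z_{2}+T^{-\eta}z_{1}z_{2}\) and confirms that the Hessian there is non-degenerate.

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
t = sp.symbols('t', positive=True)
eta = sp.Rational(1, 3)  # hypothetical blowup size, standing in for a small eta > 0

# leading-order potential z1 + z2 + T^(-eta) z1 z2 from the proof above
W = z1 + z2 + t**(-eta) * z1 * z2

crit = sp.solve([sp.diff(W, z1), sp.diff(W, z2)], [z1, z2], dict=True)
print(crit)  # expect the single solution z1 = z2 = -t**eta

# Hessian determinant is -t^(-2*eta) != 0, so the critical point is non-degenerate
hess = sp.hessian(W, (z1, z2))
print(sp.simplify(hess.det()))
```

Of course, this only checks the lowest energy part; extending it to an honest critical point of \(W_{\tilde{\Sigma}}\) still relies on Lemma 3.8 as in the text.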
It remains to prove that there exist no other geometric critical points of \(W_{\tilde{\Sigma}}^{HV}\) having \(\beta_{\nu^{\prime}}\) as an energy minimizing disk. Suppose that such a critical point exists, say \(\alpha^{\prime}\) with \(\mathit{val}(\alpha^{\prime})=(x_{0},y_{0})\), assuming \(x_{0}\leq y_{0}\) without loss of generality. Let \(\beta_{\nu_{k}}\) be another energy minimizing disk class at \(\alpha^{\prime}\). We have \(\partial\beta_{\nu_{k}}=\nu_{k}=(p,q)\notin\mathrm{Cone}(\nu_{1},\nu_{2})\). From (4.7), we have
\[\underbrace{x_{0}+y_{0}-\eta}_{\omega_{\Sigma}(\beta_{\nu^{\prime}})}= \underbrace{px_{0}+qy_{0}+\lambda_{k}}_{\omega_{\Sigma}(\beta_{\nu_{k}})}\leq \underbrace{x_{0}}_{\omega_{\Sigma}(\beta_{\nu_{1}})},\underbrace{y_{0}}_{ \omega_{\Sigma}(\beta_{\nu_{2}})}. \tag{4.14}\]
The first part of the inequality implies \(x_{0}\), \(y_{0}\leq\eta\), while the second part gives us
\[0\geq(p-1)x_{0}+qy_{0}+\lambda_{k}\geq(p+q-1)x_{0}+\lambda_{k}.\]
Note that \(\lambda_{k}>0\) since \(\nu_{k}\notin\mathrm{Cone}(\nu_{1},\nu_{2})\). Hence we have \(p+q-1<0\).
On the other hand, since \(\beta_{\nu_{k}}\) is not energy minimizing at the critical point \(\alpha\) by our hypothesis,
\[\langle\mathit{val}(\alpha),\nu_{1}+\nu_{2}\rangle-\eta<\langle\mathit{val}( \alpha),(p,q)\rangle+\lambda_{k}.\]
Plugging in \(\mathit{val}(\alpha)=(\eta,\eta)\), we get
\[(1-p-q)\eta<\lambda_{k}.\]
Figure 9. Restriction of sizes of toric blowups
Thus,
\[\omega_{\widetilde{\Sigma}}(\beta_{v_{k}})=px_{0}+qy_{0}+\lambda_{k} >px_{0}+qy_{0}+(1-p-q)\eta\] \[=p(x_{0}-\eta)+q(y_{0}-\eta)+\eta\] \[\geqslant(p+q)(x_{0}-\eta)+\eta\] \[=(p+q-1)(x_{0}-\eta)+x_{0}\qquad\quad\because p+q-1<0,\,x_{0}\leqslant\eta\] \[\geqslant x_{0},\]
contradicting (4.14). Hence no such \(\alpha^{\prime}\) can exist, and we conclude that \(\alpha\) is the only new geometric critical point.
By inductively blowing up corners while imposing similar restrictions as in condition (4.12), we have:
**Corollary 4.13** (_Theorem II_).: _Let \(X_{\widetilde{\Sigma}}\) be a toric surface obtained by taking a sequence of toric blowups of another toric surface \(X_{\Sigma}\). There exists \(r>0\) depending on \(\omega\) such that if the sizes of the exceptional divisors are smaller than \(r\), then \(W_{\widetilde{\Sigma}}\) has as many new non-degenerate critical points as the number of exceptional divisors for generic parameters. Moreover, every critical point of \(W_{\Sigma}\) extends to a critical point of \(W_{\widetilde{\Sigma}}\) in a valuation-preserving manner. If \(W_{\Sigma}\) is Morse, then \(W_{\widetilde{\Sigma}}\) is also Morse._
Finally, we demonstrate how critical points of \(W_{\Sigma}\) extend to those of its blowup when \(W_{\Sigma}\) is not locally convenient. (Note that we do not distinguish toric/non-toric blowups here.) If \(W_{\Sigma}\) fails to be locally convenient, then it does so over (the closure of) the edge \(e\) dual to the unique edge in the Newton subdivision that passes through the origin. Under a basis change, we may assume that this edge in the Newton subdivision joins \((0,1)\) and \((0,-1)\). Lemma 4.9 tells us that generically, it is contained in two triangles in the Newton subdivision. Let us write \((i,j)\) and \((k,l)\) for the vertices of the triangles other than \((0,\pm 1)\). See Figure 10.
In accordance with energies of disks measured at a point in \(e\), \(W\) is given as
\[W=z_{2}+\frac{T^{b}}{z_{2}}+T^{\lambda}z_{1}^{i}z_{2}^{j}+T^{\eta}z_{1}^{k}z_{ 2}^{l}+\text{h.o.t.},\]
where the higher order terms contain contributions from disks with sphere bubbles, or from broken disks intersecting exceptional divisors. Without loss of generality, we may assume \(i>0\), \(k<0\) and \(\lambda,\eta\geq 0\).

Figure 10. When the Newton subdivision is not locally convenient.

Let \(\alpha\) be a geometric critical point of \(W\) with \(val(\alpha)\) lying on \(e\). Clearly, \(e\) is a part of \(\{val(z_{2})=b/2\}\), so we may set \(z_{2}=T^{b/2}\underline{z}_{2}\) with \(\underline{z}_{2}\in\Lambda_{U}\) to obtain
\[W=T^{b/2}\left(\underline{z}_{2}+\frac{1}{\underline{z}_{2}}\right)+T^{\lambda+\frac{jb}{2}}z_{1}^{i}\underline{z}_{2}^{j}+T^{\eta+\frac{lb}{2}}z_{1}^{k}\underline{z}_{2}^{l}+\text{h.o.t.}\,. \tag{4.15}\]
By (a) of Lemma 3.8, we conclude that we must have \(\underline{z}_{2}=\pm 1+z_{2}^{+}\) for some nonzero \(z_{2}^{+}\in\Lambda_{+}\) (since \(\underline{z}_{2}=\pm 1\) is obviously not a critical point).
Note that direct calculation shows that the \(x\)-coordinates of the end points of the edge \(e\) are
\[\frac{1}{i}\left(\frac{b}{2}-\lambda-\frac{jb}{2}\right)<\frac{1}{k}\left( \frac{b}{2}-\eta-\frac{lb}{2}\right) \tag{4.16}\]
from left to right, and within this range, the first two terms in (4.15) are of minimal energy. Notice that the above inequality is a necessary condition for the aforementioned two triangles to be part of the Newton subdivision. Thus we have an expansion of \(W\) in \(z_{2}^{+}\)
\[W=T^{b/2}\left(\left(z_{2}^{+}\right)^{2}-\left(z_{2}^{+}\right)^{3}+\cdots\right)+T^{\lambda+\frac{jb}{2}}z_{1}^{i}(1+z_{2}^{+})^{j}+T^{\eta+\frac{lb}{2}}z_{1}^{k}(1+z_{2}^{+})^{l}+\text{h.o.t.} \tag{4.17}\]
modulo constant. Note that the higher order terms of (4.15) are still of higher order in (4.17), since the leading order term of \(T^{\lambda^{\prime}}z_{1}^{p}(1+z_{2}^{+})^{q}\) is still \(T^{\lambda^{\prime}}z_{1}^{p}\) due to the fact that \(z_{2}^{+}\in\Lambda_{+}\).
In the expansion (4.17), the only possible minimal energy (nonconstant) terms over \(e\) are \(T^{b/2}(z_{2}^{+})^{2}\), \(T^{\lambda+\frac{jb}{2}}z_{1}^{i}\), and \(T^{\eta+\frac{lb}{2}}z_{1}^{k}\). Now if (4.17) admits a critical point, then again it must solve the critical equation of the lowest order part, and in this case, since \(z_{2}^{+}\in\Lambda_{+}\), we should have the last two terms
\[T^{\lambda+\frac{jb}{2}}z_{1}^{i}+T^{\eta+\frac{lb}{2}}z_{1}^{k} \tag{4.18}\]
sitting in the lowest degree in order for the potential to admit critical points over \(e\). We will see shortly that this is indeed the case.
Critical points of (4.18) occur when \(val(z_{1})=\frac{2(\eta-\lambda)+(l-j)b}{2(i-k)}\), so we may set
\[z_{1}=T^{\frac{2(\eta-\lambda)+(l-j)b}{2(i-k)}}\underline{z}_{1}.\]
Expanding \(\underline{z}_{1}\) at the critical points of (4.18) leads to
\[\underline{z}_{1}=\rho+z_{1}^{+},\quad z_{1}^{+}\in\Lambda_{+}\]
where \(\rho\) is one of \((i-k)\)-th roots of \(-\frac{k}{i}\). Note that there are \((i-k)\)-many choices for \(\rho\). Hence we can conclude
\[val(\alpha)=\left(\frac{2(\eta-\lambda)+(l-j)b}{2(i-k)},\frac{b}{2}\right)\]
if \(\alpha\) does exist.
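As a quick symbolic check of this valuation, the sketch below (illustrative only) equates the valuations of the two terms of (4.18), viewed as affine functions of \(v=val(z_{1})\), and solves for \(v\).

```python
import sympy as sp

# equate the valuations of T^(lam + j*b/2) z1^i and T^(eta + l*b/2) z1^k
# as affine functions of v = val(z1), and solve for v
v, lam, eta, b, i, j, k, l = sp.symbols('v lam eta b i j k l')

sol = sp.solve(sp.Eq(lam + j*b/2 + i*v, eta + l*b/2 + k*v), v)
# the result agrees, up to rearrangement, with (2*(eta - lam) + (l - j)*b) / (2*(i - k))
print(sp.together(sol[0]))
```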
Substituting \(z_{1}=T^{\frac{2(\eta-\lambda)+(l-j)b}{2(i-k)}}(\rho+z_{1}^{+})\), the (leading terms of the) second order expansion (over \(e\)) reads
\[W=T^{b/2}\left(\left(z_{2}^{+}\right)^{2}-\left(z_{2}^{+}\right)^{3}+\cdots \right)+T^{\delta}(\rho+z_{1}^{+})^{i}(1+z_{2}^{+})^{j}+T^{\delta}(\rho+z_{1}^ {+})^{k}(1+z_{2}^{+})^{l}\]
modulo constants, where
\[\delta=\lambda+\frac{jb}{2}+\frac{i(2(\eta-\lambda)+(l-j)b)}{2(i-k)}=\eta+\frac{ lb}{2}+\frac{k(2(\eta-\lambda)+(l-j)b)}{2(i-k)}.\]
(In a more symmetric form, this equals \(\frac{1}{2(i-k)}(-2k\lambda+2i\eta+(il-kj)b)\).) Notice that \(\rho\) obtained from the critical point equation should make two linear terms in \(z_{1}^{+}\) cancel. Hence the possible minimal energy terms are
\[T^{b/2}(z_{2}^{+})^{2},\ T^{\delta}(z_{1}^{+})^{2},\ T^{\delta}z_{2}^{+},\]
followed by \(T^{\delta}z_{1}^{+}z_{2}^{+}\). Taking partial derivatives, we have
\[\partial_{1}W =iT^{\delta}(\rho+z_{1}^{+})^{i-1}(1+z_{2}^{+})^{j}+kT^{\delta}( \rho+z_{1}^{+})^{k-1}(1+z_{2}^{+})^{l}\] \[=C_{1}+\left(a_{1}T^{\delta}+\text{h.o.t.}\right)z_{1}^{+}+ \left(a_{2}T^{\delta}+\text{h.o.t.}\right)z_{2}^{+}+\sum\lambda_{pq}(z_{1}^{+ })^{p}(z_{2}^{+})^{q},\] \[\partial_{2}W =\left(2T^{\frac{b}{2}}+\text{h.o.t.}\right)z_{2}^{+}+jT^{\delta }(\rho+z_{1}^{+})^{i}(1+z_{2}^{+})^{j-1}+lT^{\delta}(\rho+z_{1}^{+})^{k}(1+z_{ 2}^{+})^{l-1}\] \[=C_{2}+\left(b_{1}T^{\delta}+\text{h.o.t.}\right)z_{1}^{+}+ \left(b_{2}T^{\frac{b}{2}}+\text{h.o.t.}\right)z_{2}^{+}+\sum\eta_{pq}(z_{1}^{ +})^{p}(z_{2}^{+})^{q}\]
with \(\mathit{val}(C_{1})>\delta\) and \(\mathit{val}(C_{2})\geqslant\delta\). Note that the two summations above are higher order terms, in the sense that \(\mathit{val}(\lambda_{pq})\geqslant\delta\), \(\mathit{val}(\eta_{pq})\geqslant\frac{b}{2}\), and \(\mathit{val}(\eta_{pq})\geqslant\delta\) for \(p\geqslant 1\).
The crucial observation here is that \(b/2<\delta\). This is in fact equivalent to (4.16), which originated from our geometric assumption (on the Newton subdivision). Therefore, by applying Lemma A.1, we see that the system \((\partial_{1}W-C_{1},\partial_{2}W-C_{2})=(0,0)\) obtained by substituting
\[(z_{1},z_{2})=\left(T^{\frac{2(\eta-\lambda)+(l-j)b}{2(i-k)}}(\rho+z_{1}^{+}),T^{b/2}(\pm 1+z_{2}^{+})\right) \tag{4.19}\]
has a unique solution, as the left hand side is a small perturbation of the identity map. Moreover, the leading term calculation tells us that \(\mathit{val}(z_{2}^{+})=\delta-\frac{b}{2}\). In this case, \(T^{b/2}(z_{2}^{+})^{2}\) can be checked to have a higher valuation than the other two among
\[T^{b/2}(z_{2}^{+})^{2},\ T^{\lambda+\frac{jb}{2}}z_{1}^{i},\ T^{\eta+\frac{lb}{2}}z_{1}^{k}\]
where \(\mathit{val}(z_{1})=\frac{2(\eta-\lambda)+(l-j)b}{2(i-k)}\), which justifies our earlier assumption. Consequently, we obtain \(2(i-k)\) distinct critical points, all supported at \(\left(\frac{2(\eta-\lambda)+(l-j)b}{2(i-k)},\frac{b}{2}\right)\); this number in fact coincides with the normalized volume of the Newton polytope. We have thus proven:
**Proposition 4.14**.: _Let \((X,D)\) be obtained from \((X_{\Sigma},D_{\Sigma})\) via a sequence of toric/non-toric blowups, and suppose that its Newton subdivision contains a non-convenient edge whose adjacent \(2\)-cells, say \(\sigma_{1}\), \(\sigma_{2}\), consist of Hori-Vafa terms, as in Figure 10. Then \(W_{\Sigma}\) has \(2(\mathrm{V}(\sigma_{1})+\mathrm{V}(\sigma_{2}))\)-many non-degenerate geometric critical points on the dual edge \(e\in\mathrm{Trop}(W_{\Sigma})\), all of which extend to critical points of \(W\)._
**Example 4.15**.: _Notice that the Hirzebruch surface \(\mathbf{F}_{k}\) is the case when \((i,j)=(1,0)\) and \((k,l)=(-1,-k)\), and it follows that \(\mathbf{F}_{k}\) has \(2(1-(-1))=4\) non-degenerate geometric critical points. To give a more concrete description of the location of these critical points, let us consider the following scenario where the moment polytope of \(\mathbf{F}_{k}\) is given by_
\[x\geqslant 0,\quad y\geqslant 0,\quad b-y\geqslant 0,\quad a-x-ky\geqslant 0.\]
_The edge \(e\) where local convenience fails is given by_
\[e:=\big{\{}(x,y)\in\mathbb{R}^{2}:b/2\leqslant x\leqslant a-(b(k+1)/2),\quad y =b/2\big{\}}. \tag{4.20}\]
_(When \(k=0\), we assume \(a>b\) without loss of generality.) From Proposition 4.14, it follows that \(\mathbf{F}_{k}\) has 4 non-degenerate geometric critical points for \(k\geqslant 1\), all of which are of valuation \((\frac{1}{2}a-\frac{k}{4}b,\frac{1}{2}b)\in e\)._
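For concreteness, the stated location can be read off directly from the valuation formula in the proof of Proposition 4.14: substituting the data fixed in this example, namely \((i,j)=(1,0)\), \((k,l)=(-1,-k)\), \(\lambda=0\), \(\eta=a\) (here the second \(k\) denotes the Hirzebruch index, as in the example), gives
\[\left(\frac{2(\eta-\lambda)+(l-j)b}{2(i-k)},\ \frac{b}{2}\right)=\left(\frac{2a-kb}{4},\ \frac{b}{2}\right)=\left(\tfrac{1}{2}a-\tfrac{k}{4}b,\ \tfrac{1}{2}b\right).\]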
### Non-toric blowups \((X,D)\) of a toric surface \((X_{\Sigma},D_{\Sigma})\)
We now proceed to examine the changes in the number of critical points during the process of non-toric blowups on a given toric surface \((X_{\Sigma},D_{\Sigma})\). We will explicitly locate every non-toric critical point of \(W\), and show that they are Morse critical points. The main result is the following.
**Theorem 4.16** (_Theorem III_).: _Let \((X_{\Sigma},D_{\Sigma})\) be a toric surface, and let \((X,D)\) be the surface obtained after a sequence of non-toric blowups equipped with a symplectic form described in 2.3 with generic parameters \(\epsilon_{i}\). Then_
1. _each (non-toric) blowup at a point in the interior of a toric divisor is responsible for a unique new geometric critical point of_ \(W\) _which is non-degenerate. If the blowup point lies in_ \(D_{\Sigma,j+1}\)_, the corresponding new critical point is located near the corner (the nodal point)_ \(D_{\Sigma,j}\cap D_{\Sigma,j+1}\)_._
2. _if_ \(W_{\Sigma}\) _is Morse, then_ \(W\) _is also Morse. Moreover, every critical point of_ \(W_{\Sigma}\) _extends to that of_ \(W\) _in a valuation-preserving manner._
Proof.: To each blowup point in the toric divisor \(D_{\Sigma,j+1}\), we will associate a non-toric critical point one of whose energy minimizing disks projects to the class \(\beta_{\nu_{j}}+k\beta_{\nu_{j+1}}\) under \(\pi_{*}\) for some \(k\). If such a critical point exists, then it is located near the corner \(D_{\Sigma,j}\cap D_{\Sigma,j+1}\) by Proposition 4.6, and all the other energy minimizers should also lie in \(\mathcal{A}_{j}\cup\{\beta_{\nu_{j}},\beta_{\nu_{j+1}}\}\) after \(\pi_{*}\) by Lemma 4.5.
For this purpose, it is natural to begin with a local expansion of \(W\) near the corner \(D_{\Sigma,j}\cap D_{\Sigma,j+1}\). Obviously, if the point is close enough to this corner, then the images of low energy contributions to \(W\) under \(\pi_{*}\) belong to classes in \(\mathcal{A}_{j}\cup\{\beta_{\nu_{j}},\beta_{\nu_{j+1}}\}\). Indeed, if a Lagrangian boundary is close enough to \(D_{\Sigma,j}\cap D_{\Sigma,j+1}\), then the only Maslov 2 disks in its small neighborhood are basic broken disks.
Thus \(W\) can be decomposed as \(W=\overline{W}_{j}+W_{+}\) with
\[\begin{array}{rcl}\overline{W}_{j}&=&T^{\lambda_{j}}z^{\partial\beta_{\nu_{j}}}+T^{\lambda_{j+1}}z^{\partial\beta_{\nu_{j+1}}}+\sum_{\beta\in\mathcal{A}_{j}}N_{\beta}T^{\delta(\beta)}z^{\partial\beta}\\ &=&T^{\lambda_{j}}z^{\partial\beta_{\nu_{j}}}+T^{\lambda_{j+1}}z^{\partial\beta_{\nu_{j+1}}}+\sum_{k=1}^{l}\binom{l}{k}T^{\lambda_{j}+k\lambda_{j+1}}z^{\nu_{j}+k\nu_{j+1}}\end{array} \tag{4.21}\]
when we set the sizes of exceptional divisors to be zero. Here, \(l\) is the number of blowup points in the interior of \(D_{\Sigma,j+1}\), so \(\mathcal{A}_{j}=\big{\{}\,\beta_{\nu_{j}}+k\beta_{\nu_{j+1}}:1\leq k\leq l\,\big{\}}\). From the above discussion, the valuation of each monomial in \(W_{+}\) is bigger than any of those in \(\overline{W}_{j}\).
In what follows, we will work with \(j=1\) for notational simplicity, where we further assume \(\nu_{1}=(1,0)\), \(\nu_{2}=(0,1)\), and \(\lambda_{1}=\lambda_{2}=0\) by some affine coordinate change as before. However we set the sizes of the exceptional divisors to be generic from now on. Thus, using the energy formula (4.3), we have
\[\begin{array}{rcl}\overline{W}_{1}=z_{1}+z_{2}&+&(T^{-\epsilon_{1}}+T^{- \epsilon_{2}}+\cdots+T^{-\epsilon_{l}})z_{1}z_{2}\\ &+&(T^{-\epsilon_{1}-\epsilon_{2}}+T^{-\epsilon_{1}-\epsilon_{3}}+\cdots+T^{- \epsilon_{l-1}-\epsilon_{l}})z_{1}z_{2}^{2}\\ &+&\cdots+T^{-\epsilon_{1}-\epsilon_{2}-\cdots-\epsilon_{l}}z_{1}z_{2}^{l} \end{array} \tag{4.22}\]
which goes back to (4.21) when \(\epsilon_{j}=0\) for all \(j\).
Observe that
\[\begin{array}{rcl}\frac{\partial}{\partial z_{1}}\overline{W}_{1}&=&1+(T^{-\epsilon_{1}}+\cdots+T^{-\epsilon_{l}})z_{2}+(T^{-\epsilon_{1}-\epsilon_{2}}+\cdots+T^{-\epsilon_{l-1}-\epsilon_{l}})z_{2}^{2}+\cdots+T^{-\epsilon_{1}-\cdots-\epsilon_{l}}z_{2}^{l}\\ &=&(1+T^{-\epsilon_{1}}z_{2})(1+T^{-\epsilon_{2}}z_{2})\cdots(1+T^{-\epsilon_{l}}z_{2}).\end{array}\]
Solving \(\frac{\partial}{\partial z_{1}}\overline{W}_{1}=0\), we obtain
\[z_{2}=-T^{\epsilon_{i}}\quad\text{for}\ \ 1\leq i\leq l.\]
Therefore, as long as the \(\epsilon_{j}\)'s are generic, \(\overline{W}_{1}\) has \(l\)-many distinct critical points. In this case, one can check that \(z_{1}=(-1)^{l}T^{\epsilon_{i}+\sum_{j<i}(\epsilon_{j}-\epsilon_{i})}\) modulo higher order terms. See 5.2 for the detailed calculation and some related discussion. Note that these points are all geometric. For example, when \(l=3\), we have three solutions with valuations
\[val(\alpha_{1})=(\epsilon_{1},\epsilon_{1}),\quad val(\alpha_{2})=(\epsilon_{ 1},\epsilon_{2}),\quad val(\alpha_{3})=(\epsilon_{1}+\epsilon_{2}-\epsilon_{3},\epsilon_{3}),\]
where \(\epsilon_{1}>\epsilon_{2}>\epsilon_{3}>0\). In fact, the valuation of the critical point \((z_{1},z_{2})\) lies inside \(R_{0}\) (Lemma 2.5) if all blowup points in \(D_{\Sigma,i}\) are located close enough to the corner \(D_{\Sigma,i}\cap D_{\Sigma,i-1}\) for each \(i\) (observe that the valuation does not depend on the location of the blowup center).
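To make the factorization and the resulting critical points concrete, here is a small sympy sketch for \(l=3\) (purely illustrative; we write \(c_{i}\) for \(T^{-\epsilon_{i}}\) and only check the algebra, not the Novikov-field analysis).

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
c1, c2, c3 = sp.symbols('c1 c2 c3', positive=True)  # c_i stands for T^(-eps_i)

# the leading terms (4.22) for l = 3, in the product form also used in Section 5.2
W = z2 + z1*(1 + c1*z2)*(1 + c2*z2)*(1 + c3*z2)

# dW/dz1 factorizes, so its zeros are z2 = -1/c_i = -T^(eps_i)
print(sp.factor(sp.diff(W, z1)))

# solving dW/dz2 = 0 at z2 = -1/c1 recovers
# z1 = -T^(eps_1) / prod_{j != 1} (1 - T^(eps_1 - eps_j)), up to simplification
sol = sp.solve(sp.diff(W, z2).subs(z2, -1/c1), z1)
print(sp.simplify(sol[0]))
```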
On the other hand, applying Kushnirenko's theorem (Theorem 3.2), we find that \(|\mathrm{Crit}(\overline{W}_{1})|=2V(\Delta_{\overline{W}_{1}})=l\). This implies that each of the above \(l\) distinct points has multiplicity \(1\), i.e., they are non-degenerate. Every such critical point \(\alpha_{j}\) of \(\overline{W}_{1}\) can be successfully extended to a geometric critical point of \(W\) using Lemma 3.8 thanks to non-degeneracy of its associated minimal energy terms.
Hence for each boundary divisor of \(X\) containing \(l\) points of the blowup center, we have identified \(l\) distinct Morse (non-toric) critical points of \(W\), all located near the left-hand corner of the respective divisor. Since we have worked with all possible minimal energy terms near each corner of the moment polytope, Lemma 3.8 guarantees that no new geometric (non-toric) critical points arise when patching up local models. This proves (a).
We next show that the non-toric critical points arising from newly added first order terms do not interfere with any of the previous toric critical points from the Hori-Vafa part, the essential reason being that non-toric blowups are done at a relatively small scale. To see this, observe that geometric critical points of \(W\) other than the newly added ones after blowup are all toric critical points by Theorem 4.2, i.e., their energy minimizers are basic disks (we have exhausted all possible non-toric critical points of \(W\) above).
We need to show that these toric critical points are in one-to-one correspondence with the (geometric) critical points of \(W_{\Sigma}\). The argument is completely parallel to the part _(i)_ in the proof of Proposition 4.12. Namely,
1. when \(W_{\Sigma}^{HV}\) is locally convenient, the valuation of a toric critical point of \(W\) sits at a vertex of \(\operatorname{Trop}(W_{\Sigma}^{HV})\) which establishes one-to-one correspondence between geometric critical points of \(W_{\Sigma}\) and toric critical points of \(W\) (since they are both determined by inductively solving the same leading order terms at each vertex of \(\operatorname{Trop}(W_{\Sigma}^{HV})\)).
2. If \(W_{\Sigma}^{HV}\) is not locally convenient, it fails to be so on the unique edge \(e\) (4.20) over which we pick up four non-degenerate critical points of \(W\) by Proposition 4.14. The other critical points can be handled in the same way as in (a).
So far, we have worked with the fixed expression of \(W\) that is valid on \(R_{0}\) (given in Lemma 2.5). As the analytic continuation does not completely cover the complement of \(R_{0}\) (it misses some codimension \(1\) region, see Remark 2.3), we should additionally prove that no critical points occur outside \(R_{0}\). This will be done in Lemma 4.17 below, which finishes the proof.
**Lemma 4.17**.: _If all the blowup points on \(D_{\Sigma,i}\) are close enough to the corner \(D_{\Sigma,i}\cap D_{\Sigma,i-1}\) for every \(i\), then the glued LG potential (regardless of the choice of a chamber) does not have geometric critical points outside \(R_{0}\)._
Proof.: From the discussion in 4.3, we see that the valuations of toric critical points of \(W\) only depend on \(W_{\Sigma}^{HV}\), and in particular there are only finitely many. Since \(R_{0}\) exhausts the whole moment polytope as the points in the blowup centers approach the nearby corners, it contains all toric critical points once the blowup centers are close enough to the corners.
On the other hand, Theorem 4.5 justifies that the local calculation in the proof of Theorem 4.16 (see (4.22)) is still valid on any chamber, except that we need to remove terms containing \(\epsilon_{i}\) as we pass to the left side of the corresponding wall (speaking in the setting of the proof of Theorem 4.16). In particular, if we approach close enough to the boundary of the moment polytope, we are only left with \(z_{1}+z_{2}\) in the local expression of \(W\) which does not admit any critical point.
The following corollary is straightforward.
**Corollary 4.18**.: _In the situation of (b) Theorem 4.16, the number of critical points of \(W\) agrees with the rank of the cohomology of \(X\)._
Proof.: Corollary 4.13 tells us that \(W_{\Sigma}\) has as many geometric critical points as the rank of \(H^{*}(X_{\Sigma})\). This is based on the induction on the number of (toric) blowups, where the base case of the induction is covered by Lemma 4.9 and Proposition 4.14. Then Theorem 4.16 implies that the number of geometric critical points of \(W\) equals the rank of the cohomology of \(X\), since they are both increased by the number of (non-toric) blowups from \(X_{\Sigma}\) to \(X\).
We conclude that performing non-toric blowups (with generic \(\epsilon_{i}\)) results in the attachment of unimodal 2-cells to the Newton polytope of \(W_{\Sigma}\), which translates to a "splitting" of an unbounded branch of the tropicalization \(\operatorname{Trop}(W_{\Sigma})\).
In particular, performing non-toric blowups leaves \(\operatorname{Trop}(W_{\Sigma})\) unchanged except for one branch, specifically the branch representing the corner near the blowup center. The unique new geometric critical point is supported at the point where the branch splits, whereas the non-geometric critical points that have been removed precisely correspond to the vertices of \(\operatorname{Trop}(W_{\min})\) arising from the intersection between the resulting split branches and other branches.
**Example 4.19**.: _Let \((X_{\Sigma},D_{\Sigma}=\Sigma_{i=1}^{5}D_{\Sigma,i})\) be the semi-Fano surface obtained by blowing up \(\mathbb{P}^{2}\) twice as illustrated with the dotted lines in Figure 13a. The Hori-Vafa part of the potential is given by_
\[W_{\Sigma}^{HV}=z_{1}+z_{2}+T^{a}\frac{1}{z_{1}z_{2}}+T^{b}\frac{1}{z_{2}}+T^{ c}\frac{z_{1}}{z_{2}}.\]
_The gray region in Figure 13b represents the Newton subdivision \(\mathcal{S}_{W_{\Sigma}^{HV}}\) of \(X_{\Sigma}\), which is locally convenient. Now consider the log Calabi-Yau surface \((X,D)\) obtained by taking three non-toric blowups on \(D_{\Sigma,2}\) and one non-toric blowup on \(D_{\Sigma,4}\) (indicated by crosses). Then four unimodal 2-cells, namely \(\sigma_{1}\), \(\sigma_{2}\), \(\sigma_{3}\in\operatorname{Cone}(\nu_{1},\nu_{2})\) and \(\sigma_{4}\in\operatorname{Cone}(\nu_{3},\nu_{4})\), are attached to \(\mathcal{S}_{W_{\Sigma}^{HV}}\). Note that the resulting surface \(X\) is no longer Fano. The blue line in Figure 13a depicts the tropicalization \(\operatorname{Trop}(W_{\min})\) of the minimal energy terms, and the blue dotted lines indicate branches of \(\operatorname{Trop}(W_{\Sigma}^{HV})\) before splitting. The critical point information of \(W\) can be read off of the Newton subdivision \(\mathcal{S}_{W_{\min}}\) and the tropicalization \(\operatorname{Trop}(W_{\min})\) as follows: **1.** The three vertices of \(\operatorname{Trop}(W_{\min})\) lying away from the boundary divisors correspond to the three 2-cells of \(\mathcal{S}_{W_{\Sigma}^{HV}}\), which support \(1\), \(1\), and \(3\) toric critical points, respectively. **2.** The three vertices of \(\operatorname{Trop}(W_{\min})\) near the origin, as well as the vertex near the corner \(D_{\Sigma,3}\cap D_{\Sigma,4}\), are dual to the unimodal 2-cells \(\sigma_{1}\), \(\sigma_{2}\), \(\sigma_{3}\), and \(\sigma_{4}\), respectively. Each of these vertices supports a single non-toric geometric critical point. **3.** On the other hand, the red vertex above \(D_{\Sigma,4}\) lying outside the moment polytope is dual to the 2-cell excluded by Proposition 4.6, running across different cones. The excluded 2-cell is represented by a dashed line in Figure 13b._
Figure 12. Branching of \(\operatorname{Trop}(W_{\Sigma})\) locally.
### Bulk-deformed potential (with toric bulk insertions only in non-Fano)
We finally return to potential functions with bulk, in particular the bulk-deformation \(W^{\mathfrak{b}}\) of \(W\) associated with a bulk parameter \(\mathfrak{b}\in H^{even}(X;\Lambda_{+})\) which is pulled back from some torus-invariant cycles in \(X_{\Sigma}\). This restriction is to guarantee weak unobstructedness (Lemma 2.9), and we can consider more general bulk-insertions in the semi-Fano case where weak unobstructedness is automatic for the obvious degree reason.
Following [10, 11], \(W^{\mathfrak{b}}\) is defined as
\[W^{\mathfrak{b}}_{u}(\bar{z})=\sum_{k=0}^{\infty}\,\sum_{\beta}\frac{1}{k!}\, \mathfrak{q}_{k}(\beta;\mathfrak{b},\dots,\mathfrak{b})\,T^{\omega(\beta)}\, \bar{z}^{\partial\beta}\]
where the sum is taken over \(\beta\in\pi_{2}(X,L_{u})\). Here, \(\mathfrak{q}_{k}(\beta;\mathfrak{b},\dots,\mathfrak{b})\) is roughly the count of holomorphic disks with \(k\) interior marked points incident to the ambient cycle \(\mathfrak{b}\) and passing through a generic point of the fiber \(L_{u}\). We refer readers to [10] or [10] for more details.
Note that the Maslov index of the class \(\beta\) can be strictly bigger than \(2\) when it admits the insertion from the point class (\(\in H^{4}(X)\)), and hence its contribution to \(W^{\mathfrak{b}}\) has relatively high energy in general. This is also the case for degree-\(2\) bulk insertion \(\mathfrak{b}\), as \(\mathfrak{b}\) itself carries a positive energy in its \(\Lambda_{+}\)-coefficient. In fact, one can prove that such extra terms of \(W^{\mathfrak{b}}\) arising from nontrivial \(\mathfrak{b}\) do not affect the leading term potential at any geometric critical points. To see this, let us denote by \(W^{\mathfrak{b}}_{\min}\) the collection of terms in \(W^{\mathfrak{b}}\) induced from the disks that can possibly have minimal energy at some geometric critical point of \(W^{\mathfrak{b}}\).
**Lemma 4.20**.: _For generic parameters, \(W^{\mathfrak{b}}_{\min}=W_{\min}\)._
Proof.: Let \(\beta\) be any holomorphic disk with \(\mathfrak{q}_{k}(\beta;\mathfrak{b},\dots,\mathfrak{b})\neq 0\), and suppose that \(\partial\beta\) is contained in \(\operatorname{Cone}(\nu_{1},\nu_{2})\), where we assume \(\nu_{1}=(1,0)\), \(\nu_{2}=(0,1)\) and \(\lambda_{1}=\lambda_{2}=0\) without loss of generality. Our argument in the proof of Theorem 4.2 remains valid, showing that disks in the class \(\beta\) can be energy minimizing at a geometric critical point only if \(\delta(\beta)=0\) (see (4.5)). As before, this forces \(\pi_{*}(\beta)\) to be a linear combination of \(\beta_{\nu_{1}}\) and \(\beta_{\nu_{2}}\). Previously, we concluded at this point that \(\beta\) is either a basic disk or a basic broken disk due to the Maslov 2 constraint. Since we now allow bulk-deformations via the point cycle \(D_{\Sigma,1}\cap D_{\Sigma,2}\), there is a third possibility: \(\beta\) may be the proper transform of a higher Maslov disk in \(X_{\Sigma}\), hitting the corner \(D_{\Sigma,1}\cap D_{\Sigma,2}\) (included in \(\mathfrak{b}\)) multiple times. However, these disks have strictly bigger energies than the proper transforms of \(\beta_{\nu_{1}}\) and \(\beta_{\nu_{2}}\) at any \(u\in B\), and hence cannot serve as an energy minimizing disk anywhere.

Figure 13. Example 4.19.
Thus we have an analogue of Corollary 4.18 for bulk-deformed potentials.
**Corollary 4.21**.: _Let \((X_{\Sigma},D_{\Sigma})\) be a toric surface, and let \((X,D)\) be the surface obtained after a sequence of non-toric blowups. Generically, the number of critical points of \(W^{\mathfrak{b}}\) agrees with the rank of the cohomology of \(X\). Here, \(\mathfrak{b}\in H^{even}(X;\Lambda_{+})\) is a bulk parameter (which is required to be the lift of a torus-invariant cycle in \(X_{\Sigma}\) when \(D\) contains a sphere with a negative Chern number)._
We summarize our discussion so far in the frame of closed-string mirror symmetry for algebraic surfaces \((X,D)\) obtained by taking (a sequence of) non-toric blowups on a toric surface \((X_{\Sigma},D_{\Sigma})\). We consider the quantum cohomology ring \(QH^{*}_{\mathfrak{b}}(X)=H^{*}(X;\Lambda)\) equipped with the deformed cup product \(\star_{\mathfrak{b}}\). The coefficient of \(z\) in \(x\star_{\mathfrak{b}}y\) is given by the Gromov-Witten invariants \(\sum_{n}\sum_{A\in H_{2}(X)}GW^{A}_{0,n+3}(x,y,z,\mathfrak{b},\mathfrak{b},\cdots,\mathfrak{b})\), which roughly count the number of holomorphic spheres passing through \(x,y,z\) and arbitrarily many \(\mathfrak{b}\)'s. For more details, see, e.g., [12, Chapter 11].
On the mirror side, we take the Jacobian ideal ring of the potential function \(W\). Lemma 4.17 allows us to regard \(W^{\mathfrak{b}}\) as a function defined on \(val^{-1}(R_{0})\), where \(R_{0}\) is sufficiently close to the entire base \(B\) (which approximates the moment polytope of \(X_{\Sigma}\)). Then the Jacobian ideal ring \(\operatorname{Jac}(W^{\mathfrak{b}})\) of \(W^{\mathfrak{b}}\) can be defined as in [10, Definition 1.3.10], but with the moment polytope replaced by \(R_{0}\). Roughly speaking, the Jacobian ideal ring is the quotient of the convergent power series ring \(\Lambda\ll z_{1}^{\pm 1},z_{2}^{\pm 1}\gg\), consisting of power series that converge with respect to the valuation \(z^{\partial\beta_{v_{j}}}\mapsto\lambda_{j}+\langle u,\partial\beta_{v_{j}}\rangle\) for all \(j\) and \(u\in R_{0}\) or its associated norm.5 (Recall \(z_{1}=z^{\partial\beta_{v_{1}}}\) and \(z_{2}=z^{\partial\beta_{v_{2}}}\).) Notice that the valuation is nothing but the areas of basic disks. The ideal in the quotient process is generated by partial derivatives of \(W^{\mathfrak{b}}\), but we may need to take its closure with respect to \(T\)-adic topology. We refer readers to [10] for more details.
Footnote 5: Our \(\lambda_{j}\) corresponds to \(-\lambda_{j}\) in [10].
We are now ready to state our main theorem:
**Theorem 4.22** (_Theorem I_).: _In the situation of Corollary 4.21, we have_
\[\operatorname{Jac}(W^{\mathfrak{b}})=QH^{*}_{\mathfrak{b}}(X).\]
Proof.: Recall from Lemma 4.9 that for generic parameters, the Newton subdivision is a locally convenient triangulation, apart from the cell(s) containing the origin (in their closure). For each convenient triangle \(\sigma\), direct computation on the leading terms yields the desired number (\(=\operatorname{Vol}(\sigma)\)) of non-degenerate critical points (Remark 4.10), whereas Proposition 4.14 covers the case when there exists a non-convenient edge. That is, for generic parameters, the bulk-deformed potential \(W^{\mathfrak{b}}\) is Morse. Hence, by [10, Proposition 1.3.16], the left hand side is isomorphic to the semi-simple ring \(\prod_{\alpha\in crit(W^{\mathfrak{b}}_{R_{0}})}\Lambda\), where \(\alpha\) runs over the set of critical points of \(W^{\mathfrak{b}}\) over \(R_{0}\), which equals the set of all geometric critical points of \(W^{\mathfrak{b}}\) due to Lemma 4.17. On the other hand, the right hand side
is also known to be semi-simple by [1]. We see that both sides are semi-simple and have the same rank by Corollary 4.21, which completes the proof.
### Continuum of critical points
Even though we can pin down the precise location of all (geometric) critical points of \(W^{\mathsf{b}}\) for generic \(\omega\), for some special \(\omega\) the bulk deformed potential might have a continuum of critical points, as observed in [1]. Interestingly, this phenomenon occurs when \(X\) with a non-convenient Newton polytope is equipped with a "locally monotone" symplectic form. The following example analyzes the critical behavior of \(\mathbb{F}_{k}\) blown up once at the origin.
Let \(X\) be the blowup of \(\mathbb{F}_{k}\) at the origin (Figure 14(a)). The Hori-Vafa part of the potential is given by
\[W^{HV}_{\Sigma}=z_{1}+z_{2}+\frac{T^{b}}{z_{2}}+\frac{T^{a}}{z_{1}z_{2}^{k}}+T ^{-c}z_{1}z_{2}.\]
Figure 14.
First note that if we keep our assumption \(c<\frac{b}{2}\) from (4.12), then the three terms \(z_{1}\), \(z_{2}\), and \(T^{-c}z_{1}z_{2}\) give rise to a new toric critical point \(\alpha\) with \(val(\alpha)=(c,c)\). We observe the change in the critical behavior of \(W_{\Sigma}\) and of the bulk deformed potential \(W^{\mathfrak{b}}\) when \(c=\frac{b}{2}\).
Let \(c=\frac{b}{2}\). We follow the steps of the proof of Proposition 4.14 analogously. Normalizing with respect to \(z_{2}=T^{b/2}\underline{z}_{2}\), we have
\[W_{\Sigma}^{HV}=z_{1}+T^{b/2}\Big(\underline{z}_{2}+\frac{1}{\underline{z}_{2}}\Big)+T^{a-\frac{kb}{2}}\frac{1}{z_{1}\underline{z}_{2}^{k}}+T^{-c+\frac{b}{2}}z_{1}\underline{z}_{2},\]
and substituting \(\underline{z}_{2}=-1+z_{2}^{+}\) gives
\[W_{\Sigma}^{HV} =z_{1}+T^{\frac{b}{2}}\Big(-(z_{2}^{+})^{2}-(z_{2}^{+})^{3}-\cdots\Big)\pm\frac{T^{a-\frac{kb}{2}}}{z_{1}}\Big(1+z_{2}^{+}+(z_{2}^{+})^{2}+\cdots\Big)^{k}+z_{1}(-1+z_{2}^{+})\] \[=-T^{\frac{b}{2}}(z_{2}^{+})^{2}\pm\frac{T^{a-\frac{kb}{2}}}{z_{1}}+z_{1}z_{2}^{+}+\cdots.\]
where the two terms \(z_{1}\) and \(-T^{-c+\frac{b}{2}}z_{1}\) have canceled each other out (recall \(c=\frac{b}{2}\)). The leading terms become convenient; hence by Proposition 3.9 (after normalizing), we conclude that there are three critical points located at \(Q=(\frac{1}{3}(a-\frac{k}{2}b+\frac{b}{2}),\frac{b}{2})\in e\), and two critical points at \(P=(\frac{1}{2}a-\frac{k}{4}b,\frac{1}{2}b)\). See Figure 14(b).
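The expansion above can be checked mechanically; the following sketch (illustrative only) expands \(\underline{z}_{2}+1/\underline{z}_{2}\) at \(\underline{z}_{2}=-1\) and confirms that the two \(z_{1}\)-terms combine into \(z_{1}z_{2}^{+}\) when \(c=b/2\).

```python
import sympy as sp

w, z1 = sp.symbols('w z1')  # w plays the role of z_2^+

# zbar2 + 1/zbar2 with zbar2 = -1 + w: expect -2 - w**2 - w**3 - ... (constant aside)
print(sp.expand(sp.series((-1 + w) + 1/(-1 + w), w, 0, 5).removeO()))

# with c = b/2, the two z1-terms combine: z1 + z1*(-1 + w) = z1*w
print(sp.simplify(z1 + z1*(-1 + w)))
```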
Now consider (the \(z_{2}^{+}\) expansion of) the bulk-deformed potential:
\[W^{\mathfrak{b}}=-T^{\frac{b}{2}}(z_{2}^{+})^{2}\pm\frac{T^{a-\frac{k}{2}b}}{ z_{1}}-z_{1}z_{2}^{+}+T^{\epsilon}z_{1}+\cdots\]
Notice that \(T^{\epsilon}z_{1}\) can now become an energy minimizer. If \(0<\epsilon<\frac{1}{3}(a-\frac{k}{2}b-b)\), one can show by following the steps of the proof of Proposition 4.14, that \(T^{\epsilon}z_{1}\) indeed serves as an energy minimizer6. We obtain two critical points that are located at
Footnote 6: To be precise, one needs to expand the potential to the third order so that the coefficients satisfies the conditions in Lemma A.1.
\[\Big{(}\frac{1}{2}(a-\frac{k}{2}b-\epsilon),\,\frac{1}{4}(a-\frac{k}{2}b-b+ \epsilon)\Big{)},\]
and a single critical point at
\[(\epsilon+\frac{b}{2},\epsilon)\]
as shown in Figure 14(c). The change in position of the former two critical points (as \(\epsilon\) varies) is depicted by a blue arrow, whereas that of the latter is depicted by a red arrow. We omit the details.
## 5. Homological Mirror Symmetry
We now discuss open-string (homological) mirror symmetry within our geometric context. For a surface \(X\) obtained by taking a non-toric blowup of a toric surface \(X_{\Sigma}\), let us consider the Fukaya category \(\mathcal{F}_{0}(X)\) of \(X\) consisting of the fibers of the SYZ fibration on \(X\). More precisely, these Lagrangian torus fibers are equipped with \(\Lambda_{U}\) flat connections, and their Floer cohomologies are well-defined as long as their curvature \(m_{0}(1)\)'s coincide (recall they are all unobstructed by Lemma 2.9). We allow \(\mathcal{F}_{0}\) to contain nodal fibers with
weak-bounding cochains from immersed generators if \(X\) is semi-Fano, in which case we can guarantee their weakly-unobstructedness (see Remark 2.10).
Nontrivial objects in \(\mathcal{F}_{0}\) can be detected from singularity information of \(W\). Recall from 2.1 that each point in the domain \(\tilde{Y}\) of the mirror LG model \(W:\tilde{Y}\to\Lambda\) represents a Lagrangian brane supported over a SYZ fiber. An object of \(\mathcal{F}_{0}(X)\) corresponding to a point \(p\in\tilde{Y}\) is nontrivial if and only if \(p\) is a critical point of \(W\) (see 5.1 below). On the other hand, we consider the category of singularities for \(W\) on the mirror side. Assuming generic parameters, all of its (geometric) critical points are non-degenerate, and hence the category decomposes into skyscraper sheaves supported at critical points of \(W\), where the morphism space between any two different skyscraper sheaves is trivial.
**Remark 5.1**.: _Although we focus on \(W\) to keep the exposition simple, the entire argument in this section has a straightforward generalization to the bulk-deformed potential \(W^{\mathfrak{b}}\)._
We begin by finding nontrivial objects (generators) of \(\mathcal{F}_{0}(X)\) associated with critical points of \(W\).
### Critical points of \(W\) and the Floer cohomology of the associated Lagrangian
Suppose \(p=(z_{1},z_{2})\) is a critical point of \(W\). By taking the (unique) factorization \(p=(T^{\text{val}(z_{1})}\underline{z}_{1},T^{\text{val}(z_{2})}\underline{z}_{2})\) for \((\underline{z}_{1},\underline{z}_{2})\in\Lambda^{2}_{U}\), we can assign to \(p\) the Lagrangian torus fiber \(L_{u}\) sitting over \(u:=(\text{val}(z_{1}),\text{val}(z_{2}))\), equipped with the bounding cochain \(b=x_{1}d\theta_{1}+x_{2}d\theta_{2}\in H^{1}(L_{u};\Lambda_{+})\cong(\Lambda_{+})^{2}\) such that \(e^{b}=(\underline{z}_{1},\underline{z}_{2})\). In view of (2.5), \(p\) being a critical point is equivalent to \(b\) being a critical point of \(W_{u}(b)\), where
\[W_{u}(b)\cdot[L_{u}]=m_{0}(1)+m_{1}(b)+m_{2}(b,b)+\cdots. \tag{5.1}\]
Differentiating (5.1), we see that the differential \(m_{1}^{b,b}\) on \(\hom((L_{u},b),(L_{u},b))\cong H^{*}(L_{u},\Lambda)\) vanishes on \(H^{1}(L_{u},\Lambda)\) as it acts trivially on the generators \(d\theta_{1},d\theta_{2}\). Moreover, differentiating (5.1) once more, we see that \(m_{2}^{b,b,b}\) on these generators satisfies
\[m_{2}^{b,b,b}(d\theta_{1},d\theta_{2})+m_{2}^{b,b,b}(d\theta_{2},d\theta_{1}) =\partial_{12}W\cdot[L_{u}].\]
Notice that \(d\theta_{1}\wedge d\theta_{2}\) cannot be an image of \(m_{1}^{b,b}\) which vanishes on \(d\theta_{1}\) and \(d\theta_{2}\), and \(m_{2}^{b,b,b}(d\theta_{1},d\theta_{2})=d\theta_{1}\wedge d\theta_{2}+c\cdot[L_ {u}]\) for some \(c\in\Lambda\) where \([L_{u}]\) serves as the unit. Consequently, the Lagrangian brane \((L_{u},b)\), considered as an object of \(\mathcal{F}_{0}(X)\), has its endomorphism algebra quasi-isomorphic to the Clifford algebra with respect to the quadratic form \(\text{Hess}_{b}(W_{u})\) (the Hessian of \(W\) at \(b\)), which equals \(\text{Hess}_{p}(W)\). Observe that this calculation is still valid even if there exist nontrivial contributions of stable disks with negative Maslov indices, and we use crucially the fact that \(\dim L_{u}\) is 2. Thus we have the following:
**Proposition 5.2**.: _For an object \(L(p):=(L_{(\underline{z}_{1},\underline{z}_{2})},b=(x_{1},x_{2}))\) corresponding to the critical point \(p=(\underline{z}_{1}e^{x_{1}},\underline{z}_{2}e^{x_{2}})\) in \(X\), its endomorphism space in \(\mathcal{F}_{0}(X)\) is quasi-isomorphic to the Clifford algebra with respect to \(\text{Hess}_{p}(W)\)._
_Moreover, for two distinct critical points \(p_{1}\) and \(p_{2}\) of \(W\), \(\hom_{\mathcal{F}_{0}(X)}(L(p_{1}),L(p_{2}))\) is trivial._
Proof.: It only remains to prove the last assertion, so consider \(p_{1}\neq p_{2}\) with \(\text{val}(p_{1})=\text{val}(p_{2})\) and \(W(p_{1})=W(p_{2})\) (as otherwise, \(\hom_{\mathcal{F}_{0}(X)}(L(p_{1}),L(p_{2}))\) would be automatically trivial, or not even defined). The situation boils down to computing the cohomology of
\[m_{1}^{b_{1},b_{2}}:CF((L_{u},b_{1}),(L_{u},b_{2}))\to CF((L_{u},b_{1}),(L_{u}, b_{2}))\]
for two different weak bounding cochains \(b_{1}\) and \(b_{2}\) in \(H^{1}(L_{u};\Lambda_{+})\) such that \(W_{u}(b_{1})=W_{u}(b_{2})\). We use the Morse-Bott model for \(CF((L_{u},b_{1}),(L_{u},b_{2}))\), in which the complex is still generated by \([L_{u}]\) and \(d\theta_{1}\wedge d\theta_{2}\) in even degree, and \(d\theta_{1},d\theta_{2}\) in odd degree.
Observe first that \(m_{1}^{b_{1},b_{2}}([L_{u}])=b_{1}-b_{2}\neq 0\). Therefore, modulo the image, the odd degree part of \(HF((L_{u},b_{1}),(L_{u},b_{2}))\) is generated by one of the \(d\theta_{i}\), say \(d\theta_{1}\), or it vanishes. However, due to the classical part (the de Rham differential) of \(m_{1}^{b_{1},b_{2}}\),
\[m_{1}^{b_{1},b_{2}}(d\theta_{1})=Cd\theta_{1}\wedge d\theta_{2}+E[L_{u}]\neq 0 \tag{5.2}\]
for some constants \(C\) and \(E\) with \(C\neq 0\) (\(C\) would cancel out if \(b_{1}=b_{2}\)), which gives a contradiction.
On the other hand, (5.2) implies that the even degree part of \(HF((L_{u},b_{1}),(L_{u},b_{2}))\) should be generated by \([L_{u}]\) unless it is trivial. Since \([L_{u}]\) is not a cycle, the even degree component should also vanish.
### Nontrivial objects in \(\mathcal{F}_{0}(X)\) from non-toric critical points
We look into the non-trivial objects of \(\mathcal{F}_{0}(X)\) arising from the blowup process to see how the category changes under the blowup. Recall that each blowup (at a non-torus-fixed point) introduces a critical point of \(W\) which we call a non-toric critical point. We are interested in the associated Lagrangian torus fiber (coupled with a suitable bounding cochain), especially in its behavior when the position of a blowup point changes.
We first find the precise location of such a critical point on the SYZ base (i.e., the valuation of a non-toric critical point). We also show that the critical values at non-toric critical points of \(W\) are mutually distinct. Clearly, only the leading order terms \(\overline{W}\) of \(W\) matter for our purpose.
As before, we look at the local expression of \(W\) around exceptional divisors arising from blowing up at points in \(D_{\Sigma,2}\) (and hence, near \(D_{\Sigma,1}\cap D_{\Sigma,2}\) by our earlier choice) in \(X_{\Sigma}\). Without loss of generality, we assume \(\nu_{1}=(1,0)\), \(\nu_{2}=(0,1)\), and \(\lambda_{1}=\lambda_{2}=0\). In the moment polytope \(\Delta_{\Sigma}\), critical points of our interest lie on the \(x\)-axis, close to the origin. We stick to this setting throughout 5.2 and 5.3.
Suppose that there are \(k\) distinct such blowup points in \(D_{\Sigma,2}\), and that the sizes of the exceptional divisors are \(\epsilon_{1},\cdots,\epsilon_{k}\) in decreasing order. The leading terms \(\overline{W}\) of \(W\) locally near the origin of \(\Delta_{\Sigma}\) are given as in (4.22). We have seen in 4.4 that there are \(k\)-many critical points of \(\overline{W}\), all of which extend to critical points of \(W\) via energy induction. Let \(\alpha_{1},\cdots,\alpha_{k}\) denote these critical points (of \(W\)), where \(\alpha_{i}\) corresponds to the exceptional divisor of size \(\epsilon_{i}\).
Alternatively, we can express \(\overline{W}\) as
\[\overline{W}=z_{2}+z_{1}\prod_{i=1}^{k}\big{(}1+T^{-\epsilon_{i}}z_{2}\big{)}.\]
Taking the partial derivative with respect to \(z_{2}\), we obtain
\[\frac{\partial}{\partial z_{2}}\overline{W}=1+z_{1}\sum_{i=1}^{k}T^{-\epsilon_ {i}}\prod_{j\neq i}\big{(}1+T^{-\epsilon_{j}}z_{2}\big{)}.\]
Substituting \(z_{2}=-T^{\epsilon_{i}}\) and solving for \(z_{1}\), we have
\[z_{1}=-T^{\epsilon_{i}}\prod_{j\neq i}\frac{1}{1-T^{\epsilon_{i}-\epsilon_{j}}}.\]
We can expand each term using Taylor series. If \(j<i\), then \(\epsilon_{i}-\epsilon_{j}<0\), and we have
\[\frac{1}{1-T^{\epsilon_{i}-\epsilon_{j}}}=-\frac{T^{\epsilon_{j}-\epsilon_{i}}}{ 1-T^{\epsilon_{j}-\epsilon_{i}}}=-T^{\epsilon_{j}-\epsilon_{i}}\left(1+T^{ \epsilon_{j}-\epsilon_{i}}+T^{2(\epsilon_{j}-\epsilon_{i})}+\cdots\right).\]
Otherwise, \(\frac{1}{1-T^{\epsilon_{i}-\epsilon_{j}}}\) is of valuation \(0\). Therefore,
\[z_{1}=(-1)^{k}T^{\epsilon_{i}+\sum_{j<i}(\epsilon_{j}-\epsilon_{i})}+\text{h.o.t.}\]
and we thus have
\[val(\alpha_{i})=\left(\epsilon_{i}+\sum_{j=1}^{i-1}(\epsilon_{j}-\epsilon_{i} ),\ \epsilon_{i}\right).\]
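A quick numerical illustration of these valuations, assuming hypothetical sizes \(\epsilon_{1}>\epsilon_{2}>\epsilon_{3}>0\) and approximating \(val(x)\) by \(\log|x|/\log T\) for a small numerical \(T\), is given below (illustrative only).

```python
import math

T = 1e-30
eps = [0.5, 0.3, 0.2]  # hypothetical epsilon_1 > epsilon_2 > epsilon_3 > 0

def val(x):
    # crude numerical stand-in for the Novikov valuation
    return math.log(abs(x)) / math.log(T)

for i in range(3):
    z2 = -T**eps[i]
    z1 = -T**eps[i]
    for j in range(3):
        if j != i:
            z1 /= 1 - T**(eps[i] - eps[j])
    predicted = eps[i] + sum(eps[j] - eps[i] for j in range(i))
    # observed val(z1) matches the predicted value, and val(z2) = eps_i
    print(i + 1, round(val(z1), 3), round(predicted, 3), round(val(z2), 3))
```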
Due to our choice of \(\lambda_{1}=\lambda_{2}=0\), the moment polytope \(\Delta_{\Sigma}\) is located in the 1st quadrant of \(\mathbb{R}^{2}\), and our local calculation is performed near the origin. In particular, \(\alpha_{i}\) sits in the interior of \(\Delta_{\Sigma}\). Moreover, by choosing the locations of the blowup points sufficiently close to \(D_{\Sigma,1}\cap D_{\Sigma,2}\) (which are irrelevant to \(\epsilon_{i}\)), we can ensure that all \(\alpha_{1},\cdots,\alpha_{k}\) lie within \(R_{0}\) where the local expansion \(W=\overline{W}+T^{h.o.t.}\) is valid (see Figure 15). Therefore we obtain nontrivial objects \(L(\alpha_{1}),\cdots,L(\alpha_{k})\) of \(\mathcal{F}_{0}(X)\) supported over SYZ fibers at \(val(\alpha_{1}),\cdots,val(\alpha_{k})\).
Now let us proceed to calculate the valuation of the critical value of \(W\) at \(\alpha_{i}\):
\[val(W|_{\alpha_{i}}) =val\left(T^{\epsilon_{i}}+\left(T^{\epsilon_{i}+\sum_{j=1}^{i-1}(\epsilon_{j}-\epsilon_{i})}\right)\prod_{j=1}^{k}\left(1+T^{-\epsilon_{j}}\cdot(-T^{\epsilon_{i}})\right)\right)\] \[=\min\{\epsilon_{i},\,\epsilon_{i}+\sum_{j=1}^{i-1}(\epsilon_{j}-\epsilon_{i})\}=\epsilon_{i}.\]
Therefore, as long as \(\epsilon_{i}\)'s are generic, the critical values (of non-toric critical points) are all distinct.
Moreover, Lemma 4.9 tells us that generically, toric critical points also have distinct eigenvalues. For the case where the central cell is non-convenient, one can check by direct computation that the critical values are indeed distinct for each critical point (4.19) found in the proof of 4.14. The following is a direct consequence of the discussion so far combined with Proposition 5.2.
**Proposition 5.3**.: _Let \(X\) be obtained after a sequence of toric/non-toric blowups on \(X_{\Sigma}\), where we assume that the non-toric blowup centers are close enough to the corner of the moment polytope.7 Then each toric/non-toric blowup gives rise to a Floer-nontrivial SYZ fiber in \(X\), and hence a nontrivial object in \(\mathcal{F}_{0}(X)\). Its endomorphism algebra is isomorphic to the Clifford algebra with respect to the Hessian of \(W\) at its corresponding critical point._
Footnote 7: i.e. blowup points in the interior of the toric divisor \(D_{j}\) are close enough to \(D_{\Sigma,j}\cap D_{\Sigma,j-1}\) for each \(j\) according to our earlier convention.
_For generic parameters, all such Floer-nontrivial SYZ fibers live in mutually distinct components of \(\mathcal{F}_{0}(X)\) when decomposed according to potential values, and hence they do not interact with each other._
When \(k=1\), the unique critical point created by this blowup is located at \((\epsilon,\epsilon)\) (for \(\epsilon=\epsilon_{1}\) in the notation above). This precisely coincides with the effect of the toric blowup at \(D_{\Sigma,1}\cap D_{\Sigma,2}\) with the size of the exceptional divisor \(\epsilon\). Indeed, the two (the toric and non-toric ones) are equivalent in view of the symplectomorphism constructed in the proof of Lemma 2.9. Apart from technical details (in achieving transversality for Floer
theory), the associated Lagrangian should coincide with what is called the exceptional brane in [133] (which was constructed in the local model of the toric blowup).
### Geometric meaning of critical points lying outside \(R_{0}\)
Finally, we can remove the assumption on the location of the blowup center in Proposition 5.3 by including nodal fibers under the semi-Fano assumption on \(X\). It is natural to expect that the singularity information of its LG mirror is independent of the positions of the blowup center in any case, but the only reason we impose this assumption is to ensure the weak unobstructedness of nodal fibers (the argument in the proof of Lemma 2.9 is obviously not valid for these fibers). We speculate that nodal fibers are weakly unobstructed for all bounding cochains formed by immersed generators beyond the semi-Fano case.
For the sake of simplicity, we assume \(k=1\); the argument easily generalizes for \(k>1\). We work with the same setting as above, but now we allow the blowup point to be projected onto \((\delta,0)\) in the moment polytope, where \(\delta\) is greater than the blowup size \(\epsilon\). The situation is described in Figure 15(b), whereas previously we worked with the setting in Figure 15(c) (cf. Proposition 5.3).
Notice that for this choice of the blowup point, the critical point \((\epsilon,\epsilon)\) falls in the chamber \(R_{1}\) on the left side of \(R_{0}\), sharing a single vertical wall (with respect to our choice of coordinates). On the other hand, the leading terms for \(W_{R_{1}}\) near the origin are given as
\[\overline{W}=z_{1}+z_{2},\]
which does not admit any critical points (while being convenient). This seemingly contradictory phenomenon occurs because the analytic continuation of \(W_{R_{0}}\) over \(R_{1}\) cannot cover a certain region in \(R_{1}\) due to the singularity of the coordinate change (a cluster transformation) between \(R_{0}\) and \(R_{1}\) (see Remark 2.3).
More concretely, let \((z_{1}^{\prime},z_{2}^{\prime})\) and \((z_{1},z_{2})\) denote mirror affine coordinates over \(R_{1}\) and \(R_{0}\), respectively, induced by the disk classes \(\nu_{1}\) and \(\nu_{2}\). Recall from (2.12) that they are related by
\[\begin{array}{l}z_{1}=z_{1}^{\prime}(1+T^{-\epsilon}z_{2}^{\prime})\\ z_{2}=z_{2}^{\prime}\end{array}\]
Figure 15. Local picture of the relative position of the blowup center with respect to \(\epsilon\) and corresponding locations of the non-toric critical point.
in an infinitesimal neighborhood of the wall between \(R_{0}\) and \(R_{1}\). The map gives a diffeomorphism away from \(\{z_{2}^{\prime}=-T^{\epsilon}\}\). Besides, if \(z_{2}^{\prime}=-T^{\epsilon}+\text{h.o.t.}\), then the transition can change the valuations of points dramatically, and hence the continuation of \(W_{R_{0}}\) in 2.2.1 cannot cover the region over \(R_{1}\) given by \(\{z_{2}^{\prime}\in T^{\epsilon}(-1+\Lambda_{+})\}\). Notice that the critical point \(\alpha_{0}=(-T^{\epsilon},-T^{\epsilon})\) of \(\overline{W}\), and hence \(\alpha=(-T^{\epsilon}+\text{h.o.t.},-T^{\epsilon}+\text{h.o.t.})\) of \(W\), precisely lies in this region. For this reason, it is impossible to directly identify the geometric object associated with \(\alpha\).
To remedy this, we bring in an additional mirror chart induced by the (Floer) deformation space of the nodal fiber in \(X\) responsible for the wall between \(R_{0}\) and \(R_{1}\). Such a technique of extending the mirror space first appeared in [11]. See [1, 7.1] for its usage in a similar context. Recall that near the exceptional divisor, the SYZ fibration on \(X\) is locally modeled on the one given in [1]. The nodal fiber \(\mathbb{L}\) in our situation is an immersion of a sphere with a single transversal self-intersection. It produces two degree \(1\) immersed generators, say \(U\) and \(V\). One can deform the Fukaya algebra \(CF(\mathbb{L},\mathbb{L})\) by the weak bounding cochain \(b=uU+vV\) for \((u,v)\in(\Lambda_{+})^{2}\).
The coordinate transitions between this new \((u,v)\)-chart and \(R_{i}\) are given as follows. Consider regular fibers \(L_{0}\) and \(L_{1}\) over \(R_{0}\) and \(R_{1}\), respectively, and assume that they are close enough to \(\mathbb{L}\). Their projections to the divisor \(D_{2}\) are depicted in Figure 16, where the origin represents \(D_{1}\cap D_{2}\). It will be convenient to introduce a local model for a neighborhood of the fiber \(\mathbb{L}\), which is \(\mathbb{C}^{2}\backslash\{xy=1\}\) equipped with the Lagrangian torus fibration given by \((|x|^{2}-|y|^{2},|xy-1|)\). It models a neighborhood of \(D_{2}\backslash(D_{1}\cap D_{2})\) where (a part of) \(D_{1}\) is given as \(\{xy=1\}\). Notice that the conic fibration \((x,y)\mapsto xy-1=z\in\mathbb{C}^{\times}\) can be interpreted as the reduction of \(\mathbb{C}^{2}\backslash\{xy=1\}\) by the circle action \(\rho\mapsto(\rho x,\rho^{-1}y)\). It is nothing but the pull-back of the toric action generated by \(\nu_{2}\in\Sigma\).
The projections of \(L_{i}\) and \(\mathbb{L}\) to the divisor \(D_{2}\) are depicted in Figure 16. See [1, 4.2] for more details (Figure 7 therein will be helpful to understand the picture). In fact, all fibers are sitting over the concentric circles in the \(z:=xy-1\)-plane centered at \(0\), and \(\mathbb{L}\) corresponds to the fiber \(\{|x|^{2}-|y|^{2}=0,|xy-1|=1\}\) projecting to \(|z|=1\). We may choose \(L_{i}\) to be \(L_{0}:=\{|x|^{2}-|y|^{2}=0,|z|=1+\epsilon\}\) and \(L_{1}:=\{|x|^{2}-|y|^{2}=0,|z|=1-\epsilon\}\) for some small \(\epsilon\).
Let \(\mathbb{L}^{s}\) be a perturbation of \(\mathbb{L}\) obtained by slightly modifying the base circle \(|z|=1\) as in Figure 16. We want to compare the two Lagrangians \((\mathbb{L},uU+vV)\) and \((L_{1},\nabla^{(\underline{z}_{1},\underline{z}_{2})})\), each coupled with a suitable bounding cochain. More precisely, the latter is equipped with
Figure 16. The projections of SYZ fibers near a nodal fiber
the flat connection \(\nabla^{(\underline{z}_{1},\underline{z}_{2})}\) for \((\underline{z}_{1},\underline{z}_{2})\in\Lambda^{2}_{U}\). In our picture, the holonomy along the direction of the circle action is set to be \(\underline{z}_{2}\), and that around a fixed choice of a horizontal lift of \(\{|z|=1-\epsilon\}\) is \(\underline{z}_{1}\). It was shown in [10] that \((\mathbb{L},uU+vV)\) and \((L_{1},\nabla^{(\underline{z}_{1},\underline{z}_{2})})\) are isomorphic to each other (as objects of the Fukaya category) if and only if
\[T^{\delta(s)}\underline{z}_{1}=v,\quad\underline{z}_{2}=uv-1 \tag{5.3}\]
where \(\delta(s)>0\) is the area difference between two shaded strips in Figure 16. It is positive due to our particular choice of perturbation \(\mathbb{L}^{s}\). We have some flexibility on \(\delta\) by adjusting the size of the base circle for \(\mathbb{L}^{s}\).
For the critical point \(\alpha=(z_{1},z_{2})\), we have \(\underline{z}_{2}=-1+\text{h.o.t.}\), and (5.3) can be solved to get \((u,v)\in\Lambda^{2}_{+}\). Since \(\alpha_{0}\) is certainly not a critical point of the global potential \(W\), \(\alpha\) carries nontrivial higher order terms, and \(u,v\neq 0\) for such \(\alpha\). Thus \((u,v)\) lies in the region where the coordinate change (5.3) is valid. The Floer potential for \(\mathbb{L}\) can be obtained by applying this coordinate change to \(W\) over \(R_{0}\), and from the discussion so far, it admits a critical point \((u,v)\) which transfers to \(\alpha\) under (5.3). This gives us a nontrivial object in \(\mathcal{F}_{0}(X)\) whose underlying Lagrangian is \(\mathbb{L}\), although \(\alpha\) itself does not directly represent a geometric object.
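To spell out the last step (using only the relations (5.3) and the factorization \(\underline{z}_{2}=-1+w\) with \(0\neq w\in\Lambda_{+}\)): the second relation gives \(uv=1+\underline{z}_{2}=w\), while the first gives \(v=T^{\delta(s)}\underline{z}_{1}\), so
\[val(v)=\delta(s)>0,\qquad u=T^{-\delta(s)}\,w\,\underline{z}_{1}^{-1},\qquad val(u)=val(w)-\delta(s),\]
which indeed lies in \((\Lambda_{+})^{2}\) once \(\delta(s)\) is chosen smaller than \(val(w)\); this is one place where the flexibility on \(\delta\) noted above is used.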
Finally we remark that in the earlier situation where \(R_{0}\) contains all the critical points (Figure 15(a)), the above process does not create any new critical points. In fact, one can check by a direct order-by-order calculation similar to the proof of Proposition 4.14 that the Floer potential (for \(\mathbb{L}\)) does not admit a critical point over its associated chart \((\Lambda_{+})^{2}\) if the location of \(\mathbb{L}\) is too close to the corner.
### Equivalence of categories
From the discussion so far, \(\mathcal{F}_{0}(X)\) is generated by SYZ fibers \(L(p)\) equipped with suitable bounding cochains corresponding to the critical points \(p\) of \(W\). For a regular point \(p\) of \(W\), \(L(p)\) gives a trivial object since the unit class in \(\hom(L(p),L(p))\) is an image of the differential, and hence is zero in the cohomology. Thus the triangulated category \(D^{b}\mathcal{F}_{0}(X)\) is essentially the derived category of modules over the direct sum of as many Clifford algebras as the number of critical points of \(W\) (associated with the Hessian of \(W\) at those critical points).
**Remark 5.4**.: _By [11], the computation of critical values of \(W\) (for non-toric critical points) tells us that the \(\epsilon_{i}\)-eigenspace \(QH(X)_{\epsilon_{i}}\) of \(QH(X)\) with respect to \(c_{1}(X)\star-\) is \(1\)-dimensional. Assuming the extension of [12, Corollary 1.12] to the non-monotone setting, this would imply that the nontrivial object \(L(\alpha_{i})\) generates the corresponding component of the genuine Fukaya category._
On the mirror side, we have a decomposition of the singularity category of \(W\) into critical values of \(W\), that is \(\oplus_{\lambda}D^{b}_{sing}(W^{-1}(\lambda))\). Equivalently, one may consider the matrix factorizations of \(W-\lambda\). Clearly each factor \(D^{b}_{sing}(W^{-1}(\lambda))\) is generated by skyscraper sheaves at critical points of \(W\) whose corresponding critical values are \(\lambda\). Let us consider one such skyscraper sheaf \(\Bbbk(p)\) in \(\mathcal{D}^{b}_{sing}(W^{-1}(\lambda))\). (More precisely, it is the image of \(\mathcal{O}_{p}\) in the quotient category \(\mathcal{D}^{b}_{sing}(W^{-1}(\lambda_{p}))\) by perfect complexes.) It is well-known that the endomorphism algebra of \(\Bbbk(p)\) is quasi-isomorphic to the Clifford algebra associated with the Hessian of \(W\) at the corresponding point. See e.g. [13] or [11]. Thus we proved:
**Theorem 5.5**.: _There is an equivalence between \(D^{b}\mathcal{F}_{0}(X)\) and \(\oplus_{\lambda}D^{b}_{sing}(W^{-1}(\lambda))(\cong MF(W))\). Both categories admit orthogonal decomposition with respect to critical (potential) values \(\lambda\), and each summand in the decomposition is generated by the skyscraper sheaf at the unique critical point whose value is \(\lambda\) or its corresponding SYZ fiber._
## Appendix A Estimates for the second-order expansion of \(\mathbf{W}\)
We prove that \((\partial_{x_{+}}W,\partial_{y_{+}}W)=0\) has a unique solution by considering it in a more abstract setup, as follows. \(F\) below generalizes \((\partial_{x_{+}}W,\partial_{y_{+}}W)\) minus the constant terms. Readers are warned that \(a,b,c,d\) in the statement represent some general coefficients, and are unrelated to the symplectic sizes of toric divisors in \(\mathbb{F}_{k}\) in Proposition 4.14.
**Lemma A.1**.: _Let \(F:(\Lambda_{+})^{2}\to(\Lambda_{+})^{2}\) be given in the form of_
\[F(z_{1},z_{2})=\left(az_{1}+bz_{2}+\sum_{v\neq\epsilon_{1},\epsilon_{2}} \lambda_{v}z^{v},cz_{1}+dz_{2}+\sum_{v\neq\epsilon_{1},\epsilon_{2}}\eta_{v}z^ {v}\right)\]
_for generic coefficients \((a,b,c,d)\in\Lambda_{+}^{4}\), where the terms \(\lambda_{v}z^{v}\), \(\eta_{v}z^{v}\) are higher order terms. We assume the following :_
1. \(ad-bc\neq 0\)_,_
2. \(val(a)\leq val(b),val(c)\) _and_ \(val(d)\leq val(b),val(c)\) _(a and_ \(d\) _are not assumed to be comparable),_
3. \(val(a)+val(d)<val(b)+val(c)\)_,_
4. \[val(\lambda_{(i,j)})\geq val(d)\text{ for }j\geq 1,\quad\text{and }val(\eta_{(i,j)})\geq val(a)\text{ for }i\geq 1.\] (A.1)
_Then \(F=(C_{1},C_{2})\) has a unique solution for any \(C_{1}\) and \(C_{2}\) satisfying \(val(C_{1})>val(a)+\epsilon\) and \(val(C_{2})>val(d)+\epsilon\)._
Proof.: Since \(\lambda_{v}z^{v}\), \(\eta_{v}z^{v}\) are higher order terms, we have
\[val(\lambda_{v}z^{v})>\min\{val(az_{1}),val(bz_{2})\},\quad val(\eta_{v}z^{v} )>\min\{val(cz_{1}),val(dz_{2})\},\]
which implies
\[val(\lambda_{v})\geq\min\{val(a),val(b)\}=val(a)>0,\quad val(\eta_{v})\geq\min \{val(c),val(d)\}=val(d)>0.\] (A.2)
If we consider a restriction of \(F\) to \((T^{\epsilon}\Lambda_{0})^{2}\), then under the given conditions, this restriction maps to \(T^{a+\epsilon}\Lambda_{0}\times T^{d+\epsilon}\Lambda_{0}\).
We claim that \(F(z_{1},z_{2})=(C_{1},C_{2})\) has a unique solution for any \(C_{1}\) and \(C_{2}\) satisfying \(val(C_{1})>val(a)+\epsilon\) and \(val(C_{2})>val(d)+\epsilon\). To see this, let us first make the following linear coordinate change
\[w_{1}:=az_{1}+bz_{2},\quad w_{2}:=cz_{1}+dz_{2}\]
so that
\[val(w_{1})\geq\min\{val(z_{1})+val(a),val(z_{2})+val(b)\}\geq val(a)+\epsilon\]
and likewise,
\[val(w_{2})\geq val(d)+\epsilon.\]
Therefore \((w_{1}(z),w_{2}(z))\) defines an isomorphism between \((T^{\epsilon}\Lambda_{0})^{2}\) and \(T^{a+\epsilon}\Lambda_{0}\times T^{d+\epsilon}\Lambda_{0}\). In fact, the inverse coordinate change is obviously given as
\[z_{1}=\frac{dw_{1}-bw_{2}}{ad-bc},\quad z_{2}=\frac{-cw_{1}+aw_{2}}{ad-bc}\]
and one can easily check that
\[val\left(\frac{dw_{1}}{ad-bc}\right),\ val\left(\frac{bw_{2}}{ad-bc}\right),\ val \left(\frac{cw_{1}}{ad-bc}\right),\ val\left(\frac{aw_{2}}{ad-bc}\right)\geq \epsilon,\] (A.3)
and hence \(F(w_{1},w_{2})\) (in new coordinates) gives a map from \(T^{a+\epsilon}\Lambda_{0}\times T^{d+\epsilon}\Lambda_{0}\) to itself. Notice that non-degeneracy of the leading order of \(F\) is crucially used here in this coordinate change.
We want to find a solution for \(F(w_{1},w_{2})=(C_{1},C_{2})\in T^{a+\epsilon}\Lambda_{0}\times T^{d+\epsilon} \Lambda_{0}\). To this end, we introduce another function \(G:T^{a+\epsilon}\Lambda_{0}\times T^{d+\epsilon}\Lambda_{0}\to T^{a+ \epsilon}\Lambda_{0}\times T^{d+\epsilon}\Lambda_{0}\) defined by
\[G(w_{1},w_{2}) :=F(-w_{1},-w_{2})-(C_{1},C_{2})+(w_{1},w_{2})\] \[=-(C_{1},C_{2})+\left(\sum_{v\neq(1,0),(0,1)}\bar{\lambda}_{v}w^ {v},\sum_{v\neq(1,0),(0,1)}\bar{\eta}_{v}w^{v}\right).\]
We show that \(G\) is a contraction. Since the constant terms play no role, we will assume \((C_{1},C_{2})=(0,0)\) from now on. We will only deal with the first component; the second component can be dealt with similarly. A typical term appearing in the expansion \(\lambda_{v}z^{v}\) in \((w_{1},w_{2})\) is of the form
\[m(w_{1},w_{2}):=\lambda_{v}\left(\frac{dw_{1}}{ad-bc}\right)^{i}\left(\frac{ bw_{2}}{ad-bc}\right)^{j}\left(\frac{cw_{1}}{ad-bc}\right)^{k}\left(\frac{aw_{2}}{ ad-bc}\right)^{l}\]
with \(i+j+k+l\geq 2\), omitting the binomial coefficients in \(\mathbb{Z}\).
For simplicity, we put \(\tilde{a}:=\frac{a}{ad-bc}\), \(\tilde{b}:=\frac{b}{ad-bc}\), \(\tilde{c}:=\frac{c}{ad-bc}\) and \(\tilde{d}:=\frac{d}{ad-bc}\). Then (A.3) translates to
\[val(\tilde{c}w_{1}),\ val(\tilde{d}w_{1}),\ val(\tilde{a}w_{2}),\ val(\tilde{ b}w_{2})\geq\epsilon>0.\] (A.4)
Now, for two elements \((w_{1},w_{2})\) and \((w^{\prime}_{1},w^{\prime}_{2})\) in \(T^{a+\epsilon}\Lambda_{0}\times T^{d+\epsilon}\Lambda_{0}\), we have
\[m(w_{1},w_{2})-m(w^{\prime}_{1},w^{\prime}_{2})= \lambda_{v}(\tilde{d}w_{1})^{i}(\tilde{c}w_{1})^{k}(\tilde{b}w_{2})^{j}(\tilde{a}w_{2})^{l}-\lambda_{v}(\tilde{d}w^{\prime}_{1})^{i}(\tilde{c}w^{\prime}_{1})^{k}(\tilde{b}w^{\prime}_{2})^{j}(\tilde{a}w^{\prime}_{2})^{l}\] \[= (w_{2}-w^{\prime}_{2})\sum_{\beta}\lambda_{v}(\tilde{d}w_{1})^{i}(\tilde{c}w_{1})^{k}\cdot\tilde{b}^{j}\tilde{a}^{l}w^{\beta}_{2}(w^{\prime}_{2})^{j+l-1-\beta}\] \[+(w_{1}-w^{\prime}_{1})\sum_{\alpha}\lambda_{v}\tilde{d}^{i}\tilde{c}^{k}w^{\alpha}_{1}(w^{\prime}_{1})^{i+k-1-\alpha}\cdot(\tilde{b}w^{\prime}_{2})^{j}(\tilde{a}w^{\prime}_{2})^{l}\]
Let us estimate the valuation of each of summands in the last equation. We first find a lower bound for the valuation \(\nu_{1}\) of the first summand. If \(j=l=0\), then it is obvious from (A.4) that
\[\nu_{1}\geq val(w_{2}-w^{\prime}_{2})+\epsilon.\] (A.5)
Suppose now that \(j\geq 1\), while \(l\) can possibly be zero. Observe that \(val(\tilde{b}^{i_{1}}\tilde{a}^{i_{2}}w^{j_{1}}_{2}(w^{\prime}_{2})^{j_{2}})\geq\epsilon\) as long as \(i_{1}+i_{2}\leq j_{1}+j_{2}\), due to (A.4). On the other hand, using the given condition on \(b\) and (A.2), we have
\[val(\lambda_{v}\tilde{b})\geq val(a)+val(\tilde{b})\geq val(a)+val(b)-val(a)- val(d)\geq 0.\]
Hence rewriting the first summand as
\[(w_{2}-w^{\prime}_{2})\left(\lambda_{v}\tilde{b}\right)(\tilde{d}w_{1})^{i}( \tilde{c}w_{1})^{k}\cdot\tilde{b}^{j-1}\tilde{a}^{l}w^{\beta}_{2}(w^{\prime}_ {2})^{j+l-1-\beta},\]
we see that the estimate (A.5) holds for \(j\geq 1\).
Finally, if \(j=0\) but \(l\geq 1\), then \(m(w_{1},w_{2})\) should come from \(\lambda_{v}z^{v}\) with \(z^{v}\) divisible by \(z_{2}\). Thus (A.1) tells us that \(val(\lambda_{v})\geq val(d)\) in this case. A similar argument as in the previous
paragraph, using \(val(\lambda_{v}\tilde{a})\geq 0\), leads to the same inequality (A.5) for \(\nu_{1}\). The valuation \(\nu_{2}\) of the second summand can be estimated in the same way (using \(val(\lambda_{v}\tilde{d})\geq 0\) in this case), resulting in:
\[\nu_{2}\geq val(w_{1}-w_{1}^{\prime})+\epsilon.\]
In summary, we have
\[val(m(w_{1},w_{2})-m(w_{1}^{\prime},w_{2}^{\prime}))\geq\min(\nu_{1},\nu_{2}) \geq\min(val(w_{1}-w_{1}^{\prime}),val(w_{2}-w_{2}^{\prime}))+\epsilon,\]
and we conclude that
\[||G(w_{1},w_{2})-G(w_{1}^{\prime},w_{2}^{\prime})|| =e^{-val(G(w_{1},w_{2})-G(w_{1}^{\prime},w_{2}^{\prime}))}\] \[\leq e^{-\min_{m}\{val(m(w_{1},w_{2})-m(w_{1}^{\prime},w_{2}^{ \prime}))\}}\] \[\leq e^{-\min(val(w_{1}-w_{1}^{\prime}),val(w_{2}-w_{2}^{\prime}) )-\epsilon}\] \[=e^{-\epsilon}||(w_{1},w_{2})-(w_{1}^{\prime},w_{2}^{\prime})||\]
which is as desired.
|
2309.07104 | Polygon Intersection-over-Union Loss for Viewpoint-Agnostic Monocular 3D
Vehicle Detection | Monocular 3D object detection is a challenging task because depth information
is difficult to obtain from 2D images. A subset of viewpoint-agnostic monocular
3D detection methods also do not explicitly leverage scene homography or
geometry during training, meaning that a model trained thusly can detect
objects in images from arbitrary viewpoints. Such works predict the projections
of the 3D bounding boxes on the image plane to estimate the location of the 3D
boxes, but these projections are not rectangular so the calculation of IoU
between these projected polygons is not straightforward. This work proposes an
efficient, fully differentiable algorithm for the calculation of IoU between
two convex polygons, which can be utilized to compute the IoU between two 3D
bounding box footprints viewed from an arbitrary angle. We test the performance
of the proposed polygon IoU loss (PIoU loss) on three state-of-the-art
viewpoint-agnostic 3D detection models. Experiments demonstrate that the
proposed PIoU loss converges faster than L1 loss and that in 3D detection
models, a combination of PIoU loss and L1 loss gives better results than L1
loss alone (+1.64% AP70 for MonoCon on cars, +0.18% AP70 for RTM3D on cars, and
+0.83%/+2.46% AP50/AP25 for MonoRCNN on cyclists). | Derek Gloudemans, Xinxuan Lu, Shepard Xia, Daniel B. Work | 2023-09-13T17:25:06Z | http://arxiv.org/abs/2309.07104v1 | # Polygon Intersection-over-Union Loss for Viewpoint-Agnostic Monocular 3D Vehicle Detection
###### Abstract
Monocular 3D object detection is a challenging task because depth information is difficult to obtain from 2D images. A subset of viewpoint-agnostic monocular 3D detection methods also do not explicitly leverage scene homography or geometry during training, meaning that a model trained thus can detect objects in images from arbitrary viewpoints. Such works predict the projections of the 3D bounding boxes on the image plane to estimate the location of the 3D boxes, but these projections are not rectangular so the calculation of IoU between these projected polygons is not straightforward. This work proposes an efficient, fully differentiable algorithm for the calculation of IoU between two convex polygons, which can be utilized to compute the IoU between two 3D bounding box footprints viewed from an arbitrary angle. We test the performance of the proposed polygon IoU loss (PIoU loss) on three state-of-the-art viewpoint-agnostic 3D detection models. Experiments demonstrate that the proposed PIoU loss converges faster than L1 loss and that in 3D detection models, a combination of PIoU loss and L1 loss gives better results than L1 loss alone (+1.64% \(AP_{70}\) for MonoCon on cars, +0.18% \(AP_{70}\) for RTM3D on cars, and +0.83%/+2.46% \(AP_{50}/AP_{25}\) for MonoRCNN on cyclists).
## 1 Introduction
Autonomous driving is a primary domain that propels research in 3D object detection. Precise detection and localization of vehicles and pedestrians within a driving scenario are paramount to autonomous vehicles functioning safely and effectively. To this end, densely annotated ego-vehicle driving datasets produced with carefully calibrated and heavily instrumented test vehicles, such as KITTI [7], NuScenes [2], and Waymo OpenDrive [28], have enabled a large body of work on 3D object detection and tracking tasks. As a result, extremely accurate 3D vehicle, cyclist, and pedestrian detection models have been proposed that leverage the full suite of available sensors, including LIDAR, stereo images, and depth images [12][16][19][25][24][29]. State-of-the-art 3D detection methods on KITTI frequently score above 90% \(AP_{70}\) (average precision) [7].
The dense sensor set provided in these datasets comes at a price. Methods developed using these datasets are not generalizable to other vehicles with different or less capable sensing. Recognizing this shortcoming, _monocular 3D object detection_ methods have been proposed to predict object positions using a single camera and no additional sensors [9, 23, 10]. Posing the 3D object detection problem in this manner introduces the challenge of recovering depth information from an image, which is inherently depth-ambiguous. Monocular methods take a step in the direction of generality, since only a single camera is required for detection; yet these methods incorporate information from the 3D scene explicitly (e.g., into the model anchor box generation architecture) or implicitly (by training to regress object positions directly in 3D space). Thus, a model trained for one camera in one vehicle cannot be easily applied to another vehicle and camera, and training data is only available for a very small subset of instrumented vehicles.
To detect 3D objects from a single, arbitrary camera, such as a cell-phone camera or dashcam from an arbitrary vehicle, a more general method is required. Recently, a subset of monocular 3D detection methods have attempted to detect 3D bounding boxes for vehicles without utilizing scene information in the trained model. Instead, these models predict positions natively in 2D image space and incorporate scene _homography_ only after training and after inference [15, 27, 14, 26]. By posing 3D detection in this way, a trained model may be able to generalize to an unseen camera view simply by changing the post-inference scene
homography. In other words, these methods are _viewpoint-agnostic_. The performance of these models, while steadily improving, still trails other monocular 3D detection methods and 3D detection methods more generally.
This work seeks to leverage a key trend in object detection works: across a variety of domains and detection problem formulations, _intersection-over-union_ (IoU) based methods have been shown to outperform L1 and L2 norm-based methods for loss calculation during training. Intersection over Union (IoU) is commonly used for measuring the similarity between (generally) two rectangular, axis-aligned bounding boxes. Many previous works on 2D object detection tasks demonstrate that IoU can be used as a regression loss for axis-aligned 2D bounding boxes. In [22], [32], and [33], it is shown that incorporating IoU loss into 2D object detection models can improve their performance. In a similar vein, [34] and [31] show that IoU for two rotated rectangles can be used as a loss function to improve the performance of 3D object detection models. Figure 1 summarizes these approaches. Unfortunately, these methods are not directly applicable to viewpoint-agnostic monocular methods because the projection of a 3D bounding box into an image does not result in rectangular planes; rather, the six surrounding planes of vehicles occupy arbitrary quadrilaterals in pixel-space. Thus, most existing methods use L1 loss to regress the eight corner points of the 3D box on 2D image planes.
The core contribution of this work is to present a new and efficient way of calculating the IoU between two convex polygons which we refer to as _polygon IoU_ and implement it as a loss function (PIoU loss). We show both in simulation and in 3 state-of-the-art viewpoint-agnostic 3D detection models that the loss function converges faster than L1 loss. We implement a batched version of the IoU function between two polygons to enable fast training of models with the method. We utilize models trained with PIoU loss on the KITTI 3D detection benchmark and show that, in most cases, the new loss formulation increases model accuracy, particularly for higher requisite IoU thresholds.
The rest of this paper is laid out as follows: Section 2 reviews related works, Section 3 describes the PIoU method in more detail. Section 4 describes experiments comparing L1 loss and PIoU loss with simulated polygons, Section 5 details experiments on the KITTI benchmark and describes implementation details for incorporating PIoU loss into 3 detection models. Section 6 describes the results.
## 2 Related Work
### Monocular 3D object detection
Monocular 3D detection methods seek to generate a set of 3D bounding boxes in 3D space based on a single camera image. One early work is Mono3D [4], which generates rich 3D proposals with the assumption that vehicles lie on the ground plane and then scores the boxes with contextual information and size, location, and shape priors. Likewise, [23] generates 3D proposals and ensures that feature map computations are orthographic such that objects further away and occupying fewer pixels do not occupy less of the final feature map space. In [10], detection and object tracking are accomplished by directly regressing 3D coordinates, but anchor boxes are generated in 2D image-space (thus the scene homography is implicitly learned during training).
### Viewpoint-Agnostic Monocular 3D Detection
Viewpoint agnostic monocular methods can roughly be divided into two categories: i.) methods that regress 2D bounding boxes or segmentations along with augmenting outputs and utilize homography constraints to subsequently predict 3D outputs, and ii.) methods that regress 3D projections of keypoints or bounding box corner points.
In the first category, Deep3DBox [17] predicts a 2D bounding box, the observation angle, 3D object size, and
Figure 1: Example of IoU-based losses and relative improvements versus L1-loss in a.) image coordinates, where bounding boxes are axis-aligned, [22] b.) 3D viewpoint-based detection, in which the scene homography can be used to precisely compute object rectangular footprints in a _bird’s-eye view_[34] and c.) 3D viewpoint-agnostic detection (this work), using 3D box projections into 2D space.
object 3D center position (in the image) from the features enclosed by the 2D bounding boxes, so that the 3D bounding box can subsequently be fit using the constraint that its 2D projection falls within the 2D bounding box. Shift R-CNN [18] and Cascade Geometric Constraint [6] leverage the fact that 4 vertices of the 3D bounding box must lie on the 2D bounding box. The main drawback of these models is that they rely on accurate predictions of 2D bounding boxes. Errors in 2D bounding boxes compound in the 3D prediction. Likewise, [3] utilizes a 2D bounding box and manipulates a simplified 3D vehicle model to optimize the 3D object position within the bounding box. 3D-RCNN [11] takes additional segmentation inputs and generates a compact 3D representation of the scenes. It exploits class-specific shape priors by learning a low-dimensional shape-space from collections of CAD models.
Most recent methods fall in the latter category, representing vehicles as polyhedrons or 3D bounding boxes. Mono3D++ [9] represents a vehicle as 14 keypoints and learns the 2D keypoints using EM-Gaussian method. MonoRCNN [26] is built upon Faster R-CNN and adds 3D attribute and distance heads to recover 3D bounding boxes. The heatmap concepts proposed by CenterNet [35] inspired many monocular 3D detection models because this model's structure is well-suited to keypoint regression. RTM3D [14] uses CenterNet-based structures to regress nine keypoints of the 3D bounding box corresponding to the eight corners and the center of the 3D cuboids. RTM3D also regresses distance, 3D box dimension, and orientation of vehicles, then solves an optimization for the best-fitting bounding box in 3D space for each object. Likewise, Monocon [15] and Monoflex [30] are built upon CenterNet. Monoflex directly predicts 3D dimensions and orientations and incorporates multiple depth estimation methods to increase accuracy. In [27], no scene information is ever used, and instead the vanishing points and, thus, scene homography are directly computed from output 3D bounding boxes (albeit in a traffic monitoring context).
### IoU Loss in Object Detection
L1 and L2 losses are widely used in object detection models but ignore the shape of bounding boxes and are easily influenced by the scales of boxes. Conversely, IoU encodes the shape properties of objects and is invariant to the scale. Thus, IoU-based loss formulations have achieved good performance in object detection. In [22], Generalized Intersection over Union (GIoU) loss is proposed to provide better convergence for non-overlapping bounding boxes. The authors incorporate GIoU loss into YOLO v3 [20], Faster R-CNN [21], and Mask R-CNN [8] and show a consistent improvement in their performance on popular object detection benchmarks such as PASCAL VOC and MS COCO. [32] similarly incorporates the distance between bounding boxes to aid convergence in non-overlapping cases. In [33], the authors introduce Complete-IoU (CIoU) loss to consider three geometric factors: overlap area, normalized central point distance, and aspect ratio. CIoU loss is used in YOLOv4 [1] and leads to notable gains in average precision (AP) and average recall (AR). In [34] and [31], IoU loss is defined for two rotated bounding boxes and incorporated into several 3D object detection frameworks, which leads to consistent improvements for both bird's-eye-view 2D detection and point cloud 3D detection on the public KITTI benchmark [7].
While promising, these IoU loss variants are not suitable for the keypoints of 3D bounding boxes projected to the image plane. No work yet analyzes the performance of incorporating IoU loss into viewpoint-agnostic monocular 3D detection frameworks.
## 3 Polygon IoU Loss
The _Polygon IoU_ (PIoU) method proposed calculates the intersection-over-union metric for any two convex polygons in 2D coordinates. The inputs are two sets \(\mathcal{A}\) and \(\mathcal{B}\) consisting of the \((x,y)\) corner coordinates of each polygon, and the algorithm output falls in the range \([0,1]\). This output can then be utilized as a loss function \(Loss=1-PIoU\).
### Overview
Polygon IoU loss calculation consists of:
1. Order the points of \(\mathcal{A}\) and \(\mathcal{B}\) clockwise.
2. Compute \(\mathcal{C}\), the set of all points of intersection of any two edges of the polygons.
3. Find \(\mathcal{A}_{B}\), the set of all points in \(\mathcal{A}\) that lie in the interior of \(\mathcal{B}\), and vice versa for \(\mathcal{B}_{A}\).
4. Compute the area of the convex polygon defined by the overlapping set of points \(\mathcal{I}=\mathcal{A}_{B}\cup\mathcal{B}_{A}\cup\mathcal{C}\)
5. Compute the areas of \(\mathcal{A}\), \(\mathcal{B}\) and \(\mathcal{I}\).
6. Compute PIoU according to: \[\frac{Area_{\mathcal{I}}}{Area_{\mathcal{A}}+Area_{\mathcal{B}}-Area_{ \mathcal{I}}}\] (1)
We describe each step in more detail below.
### Clockwise ordering
A set of points is ordered in a clockwise manner by computing the geometric center of the polygon. Then, angles are calculated between an arbitrary first point (defined to be at 0°) and each other point, relative to the geometric center. Points are then sorted in order of decreasing angle relative to the center. Note that the clockwise ordering of \(\mathcal{A}\) and \(\mathcal{B}\) is necessary for subsequent computational steps which assume a clockwise, geometrically adjacent ordering of points.
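For concreteness, a minimal NumPy sketch of this ordering step could look as follows (the function and variable names are ours, not taken from the authors' released code):

```python
import numpy as np

def order_clockwise(points: np.ndarray) -> np.ndarray:
    """Sort the (N, 2) corners of a convex polygon clockwise around their
    geometric center."""
    center = points.mean(axis=0)
    rel = points - center
    # Angle of each corner around the center; the choice of the 0-degree
    # reference direction is arbitrary, so we simply use the +x axis.
    angles = np.arctan2(rel[:, 1], rel[:, 0])
    # Decreasing angle corresponds to clockwise order (in x-right / y-up axes).
    return points[np.argsort(-angles)]
```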
### Finding intersections \(\mathcal{C}\)
For a line that passes through points \((x_{1},y_{1}),(x_{2},y_{2})\), and a line that passes through points \((x_{3},y_{3}),(x_{4},y_{4})\), the intersections \((I_{x},I_{y})\) are calculated as:
\[I_{x}=\frac{(x_{1}y_{2}-y_{1}x_{2})(x_{3}-x_{4})-(x_{3}y_{4}-y_{3}x_{4})(x_{1}-x_ {2})}{D},\]
\[I_{y}=\frac{(x_{1}y_{2}-y_{1}x_{2})(y_{3}-y_{4})-(x_{3}y_{4}-y_{3}x_{4})(y_{1}- y_{2})}{D} \tag{2}\]
\[D=(x_{1}-x_{2})(y_{3}-y_{4})-(y_{1}-y_{2})(x_{3}-x_{4}) \tag{3}\]
Utilizing this formula, the intersection point between every line defined by consecutive points in \(\mathcal{A}\) and \(\mathcal{B}\) is computed. Some of these intersections do not lie on the polygons \(\mathcal{A}\) and \(\mathcal{B}\). We filter out invalid points and keep in \(\mathcal{C}\) only the intersections whose x coordinates lie within the range of both pairs of endpoints; that is, \(I_{x}\) must satisfy \(x_{1}\leq I_{x}\leq x_{2}\) and \(x_{3}\leq I_{x}\leq x_{4}\). (Restrictions on the y coordinates are automatically satisfied if the x coordinates are within the correct range.)
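A possible (non-batched) sketch of this step, following (2)-(3) and the range filter above, is given below; the tolerance `eps` and the min/max form of the range check are our own robustness choices:

```python
import numpy as np

def segment_intersection(p1, p2, p3, p4, eps=1e-9):
    """Intersection of segment (p1, p2) with segment (p3, p4), following
    equations (2)-(3); returns None for (near-)parallel segments or when
    the intersection falls outside either segment's x-range."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    D = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(D) < eps:  # parallel edges are treated separately (see Edge Cases)
        return None
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    ix = (a * (x3 - x4) - b * (x1 - x2)) / D
    iy = (a * (y3 - y4) - b * (y1 - y2)) / D
    # Keep only intersections whose x coordinate lies on both segments
    # (min/max instead of x1 <= Ix <= x2 so endpoint order does not matter).
    if (min(x1, x2) - eps <= ix <= max(x1, x2) + eps
            and min(x3, x4) - eps <= ix <= max(x3, x4) + eps):
        return np.array([ix, iy])
    return None
```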
### Finding points \(\mathcal{A}\) inside \(\mathcal{B}\)
For a convex polygon, each edge is assigned a direction in clockwise order, as in Figure 2. Then if and only if a point \((x,y)\) lies on the same side of all the edges of the polygon, it lies inside the polygon. Let the endpoints of the line segment be \((x_{1},y_{1})\) and \((x_{2},y_{2})\). We compute:
\[(y-y_{1})(x_{2}-x_{1})-(x-x_{1})(y_{2}-y_{1}) \tag{4}\]
where a positive result means that the point lies on the left of the line, a negative result means that the point lies on the right of the line, and zero means that the point lies on the line. Each point in \(\mathcal{A}\) is checked against the line segments defined by \(\mathcal{B}\) to determine \(\mathcal{A}_{B}\), the set of points in \(\mathcal{A}\) lie within \(\mathcal{B}\), and the opposite is done to determine \(\mathcal{B}_{A}\). The full set of points defining the intersection of \(\mathcal{A}\) and \(\mathcal{B}\) is then defined by \(\mathcal{I}=\mathcal{A}_{B}\cup\mathcal{B}_{A}\cup\mathcal{C}\).
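The same-side test of equation (4) could be sketched as follows; rather than fixing a sign convention, this version simply requires all edge tests to agree, which is equivalent for convex polygons and independent of the image-axis orientation:

```python
import numpy as np

def points_inside(poly_a: np.ndarray, poly_b: np.ndarray) -> np.ndarray:
    """Return the corners of poly_a (shape (N, 2)) that lie inside, or on the
    boundary of, the convex polygon poly_b (shape (M, 2), corner-ordered)."""
    kept = []
    m = len(poly_b)
    for x, y in poly_a:
        crosses = []
        for i in range(m):
            x1, y1 = poly_b[i]
            x2, y2 = poly_b[(i + 1) % m]
            # Equation (4): the sign tells on which side of the directed edge
            # the point (x, y) lies.
            crosses.append((y - y1) * (x2 - x1) - (x - x1) * (y2 - y1))
        crosses = np.array(crosses)
        # Inside a convex polygon <=> on the same side of every edge.
        if np.all(crosses >= 0) or np.all(crosses <= 0):
            kept.append((x, y))
    return np.array(kept).reshape(-1, 2)
```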
### Calculating the area of a polygon
Let \((x_{i},y_{i})\) represent the coordinates of the i-th point of a polygon. Let
\[\begin{split}\boldsymbol{x}&=[x_{1},x_{2},...,x_{ n}]^{T},\boldsymbol{x^{*}}=[x_{2},x_{3},...,x_{n},x_{1}]^{T},\\ \boldsymbol{y}&=[y_{1},y_{2},...,y_{n}]^{T}, \boldsymbol{y^{*}}=[y_{2},y_{3},...,y_{n},y_{1}]^{T}\end{split} \tag{5}\]
The area of a polygon is computed by:
\[Area=\frac{1}{2}(\boldsymbol{x}^{T}\boldsymbol{y^{*}}-\boldsymbol{y}^{T} \boldsymbol{x^{*}}) \tag{6}\]
The areas of \(\mathcal{I}\), \(\mathcal{A}\), and \(\mathcal{B}\) are thus computed, and equation 1 is used to determine the PIoU.
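Putting the pieces together, a plain (non-batched) reference implementation of equations (1) and (6) might look like the sketch below; it reuses the `order_clockwise`, `segment_intersection`, and `points_inside` helpers sketched above, and the absolute value in the area makes the result independent of the ordering convention:

```python
import numpy as np

def polygon_area(poly: np.ndarray) -> float:
    """Shoelace formula of equation (6); the absolute value makes the result
    independent of the (clockwise vs. counterclockwise) ordering convention."""
    x, y = poly[:, 0], poly[:, 1]
    x_s, y_s = np.roll(x, -1), np.roll(y, -1)
    return 0.5 * abs(np.dot(x, y_s) - np.dot(y, x_s))

def polygon_iou(poly_a: np.ndarray, poly_b: np.ndarray) -> float:
    """PIoU of equation (1) for two corner-ordered convex polygons."""
    pts = []
    pts += list(points_inside(poly_a, poly_b))   # corners of A inside B
    pts += list(points_inside(poly_b, poly_a))   # corners of B inside A
    na, nb = len(poly_a), len(poly_b)
    for i in range(na):                          # edge-edge intersections
        for j in range(nb):
            p = segment_intersection(poly_a[i], poly_a[(i + 1) % na],
                                     poly_b[j], poly_b[(j + 1) % nb])
            if p is not None:
                pts.append(p)
    if len(pts) < 3:                             # no overlap region
        return 0.0
    overlap = order_clockwise(np.array(pts))
    area_i = polygon_area(overlap)
    area_a, area_b = polygon_area(poly_a), polygon_area(poly_b)
    return area_i / (area_a + area_b - area_i)

# loss = 1 - polygon_iou(pred_corners, gt_corners)
```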
### Batched implementation
Polygon IoU loss is applicable to convex polygons with any number of corners. However, a variable number of points among polygons impedes calculation in a batched, vectorized implementation. Thus, for batched implementation, we restrict inputs to a fixed number of points per batch (in practice, for 3D bounding box calculation, all polygons will be four-sided). For 4-sided polygons, there are at most 8 points in the set \(\mathcal{C}\), at most 4 interior points in each set \(\mathcal{A}_{B}\) and \(\mathcal{B}_{A}\), and at most 8 points in \(\mathcal{I}\). So, the size of the vector that represents the intersection region is set to 8. If the actual number of points in the set \(\mathcal{I}\) is less than 8, the set is padded with repeated points which do not alter the result of (6).
For polygons with \(P\) points, the maximum number of corner points in \(\mathcal{I}\) is set to be \(2P\). The batched implementation empirically computes forward and backward training passes significantly faster than a non-batched, loop-based implementation of PIoU. Pseudo-code for a batched, vectorized implementation of PIoU with batch size \(B\) is given below. The shape of the output tensor at each step is given in square brackets.
**Algorithm: Batched Tensor PIoU**
Inputs: \(polygonA\) and \(polygonB\). Each [\(B\),\(P\),2].
\(\triangleright\)\(\mathcal{C}\) = intersections of all line segments in \(polygonA\) and \(polygonB\) using (0, 0) to fill the empty. [\(B\),\(2P\),2].
\(\triangleright\)\(\mathcal{A}_{B}\) = all points of \(polygonA\) that are inside \(polygonB\), using (0, 0) to fill the empties. [\(B\),\(P\),2].
\(\triangleright\)\(\mathcal{B}_{A}\) = all points of \(polygonB\) that are inside \(polygonA\) using (0, 0) to fill the empties. Tensor shape: [\(B\),\(P\),2].
\(\triangleright\)\(overlap\) = union of [\(\mathcal{C}\), \(\mathcal{A}_{B}\), \(\mathcal{B}_{A}\)]. [\(B\),\(4P\),2].
\(\triangleright\) sort \(overlap\) in decreasing order with respect to the distance from (0, 0)
\(\triangleright\) keep the first 8 points in \(overlap\). [\(B\),\(2P\),2].
\(\triangleright\)\(placeholder\) = the points farthest from (0, 0) in \(overlap\).
[\(B\),1,2].
\(\triangleright\) replace (0, 0) in \(overlap\) with \(placeholder\)
\(\triangleright\)\(areaO\),\(areaA\),\(areaB\) = areas of \(overlap\), \(polygonA\), and \(polygonB\). Each [\(B\)].
\(\triangleright\) PIoU = \(areaO/(areaA+areaB-areaO)\). [\(B\)].
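As one concrete, differentiable building block of this pseudo-code, the batched shoelace area (the \(areaO\), \(areaA\), \(areaB\) step) could be written in PyTorch as below; the function name is ours, and the comment spells out why padding with a repeated vertex is harmless:

```python
import torch

def batched_polygon_area(poly: torch.Tensor) -> torch.Tensor:
    """Differentiable shoelace area for a batch of corner-ordered polygons of
    shape [B, K, 2].  Padding a polygon with copies of one of its vertices
    (placed next to each other in the ordering) only adds zero terms to the
    sum, which is why the fixed-size [B, 2P, 2] overlap tensor works."""
    x, y = poly[..., 0], poly[..., 1]
    x_s, y_s = torch.roll(x, -1, dims=-1), torch.roll(y, -1, dims=-1)
    return 0.5 * torch.abs((x * y_s - y * x_s).sum(dim=-1))

# piou = area_o / (area_a + area_b - area_o); loss = 1 - piou  (all [B]-shaped)
```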
### Edge Cases
When an edge from polygon A and an edge from polygon B are parallel, the intersections between the two edges
Figure 2: A polygon with edges marked with directions in clockwise order. Relative to each line segment, the point \((x,y)\) lies on the right side, while \((\bar{x},\bar{y})\) lies on the left of one line segment and on the right of three line segments.
are ill-defined because the denominator in (2) approaches zero. This occurs when: i.) two edges coincide with the same line. ii.) Two edges are parallel but not coincident with the same line. In case i.), the points of \(\mathcal{A}\) and \(\mathcal{B}\) already suitably define the intersection points, so including points of \(\mathcal{A}\) exactly on an edge of \(\mathcal{B}\) in \(\mathcal{A}_{B}\) covers this case. In case ii.), the two edges have no intersection. Thus, we can remove these intersections from \(\mathcal{C}\) for numerical stability.
## 4 Experiments on simulated polygons
### Four-sided convex polygons
We generate two sets of quadrilaterals (4-sided polygons), with one set as the initial polygons and one set as the ground truth. The polygons are generated as centers and offsets of four points to ensure they are convex. We use the Adam optimizer to regress predicted polygons with the goal of approximating the ground truth polygons. The polygons are generated in a batch of 32 for the training. We compare the result of L1 loss, PIoU loss, and a combination of L1 and PIoU loss. The IoU with respect to iterations is plotted on the left of Figure 3. The results take the average of 5 independent trials. The PIoU loss converges the fastest in the beginning. However, the PIoU loss does not achieve a high IoU when it converges. PIoU+L1 loss has the fastest convergence speed and accuracy after around 2000 iterations.
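A schematic version of this fitting experiment is sketched below; `batched_piou` stands for a differentiable [\(B\)]-shaped PIoU of two [\(B\),4,2] corner tensors such as the one outlined in the batched implementation section, and the learning rate, iteration count, and polygon scales are illustrative choices rather than the exact values used for Figure 3:

```python
import torch

torch.manual_seed(0)
B = 32
signs = torch.tensor([[1., 1.], [1., -1.], [-1., -1.], [-1., 1.]])

def random_convex_quads(batch):
    """Centers plus per-corner positive offsets, one corner per quadrant,
    which yields convex quadrilaterals in clockwise order."""
    centers = torch.rand(batch, 1, 2) * 10.0
    offsets = torch.rand(batch, 4, 2) + 0.5
    return centers + offsets * signs

gt = random_convex_quads(B)                      # ground-truth polygons
pred = random_convex_quads(B).requires_grad_(True)

opt = torch.optim.Adam([pred], lr=0.01)
for step in range(5000):
    opt.zero_grad()
    piou = batched_piou(pred, gt)                # hypothetical helper, [B]
    l1 = (pred - gt).abs().mean(dim=(1, 2))      # [B]
    loss = (1.0 - piou + l1).mean()              # the PIoU+L1 variant
    loss.backward()
    opt.step()
```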
### Eight-sided unrestricted polygons
We repeat this experiment with 8-sided polygons, this time not restricting the predicted polygons to be convex. IoU versus optimization iteration is plotted on the right of Figure 3. The results take the average of 5 independent trials. The results are similar to the 4-sided polygon case. The PIoU loss converges the fastest initially, while PIoU+L1 loss converges faster than L1 loss alone and additionally reaches the highest overall IoU score. Non-convex polygons produce slightly more noise in loss curves, visible in the PIoU and PIoU+L1 curves of Figure 3 (right). This experiment on simulated 8-sided polygons shows that our PIoU loss has good performance even when the polygons are not convex and have more than four sides.
### Computation speed
We compare the computation speed of our batched implementation of PIoU loss relative to a pixel-wise IoU loss (as used for object segmentation tasks). For a batch size of 1, our PIoU loss is 4.0x faster than the pixel-wise implementation. PIoU loss is 56.0x faster for a batch size of 16 and 281.6x faster for a batch size of 128.
## 5 Experiments on KITTI 3D
PIoU loss supports polygons with any number of points. However, there are not existing detection problems well suited to evaluating polygons with more than 4 points. We test it on 3D detection problems where the projection of 3D bounding boxes to the image plane can be separated into two quadrilaterals, the front 4 and back 4 corner coordinates of the 3D bounding box. We incorporate PIoU loss into RTM3D [14], MonoCon [15], and MonoRCNN [26] and test each on the KITTI 3D benchmark [7]. In all three models, we compare the performance of using PIoU+L1 loss and only using L1 loss. The rest of this section describes the models, modifications, and experimental parameters for each experiment. The results are listed in Section 6.
### Dataset
We use KITTI 3D object detection benchmark as our training and evaluation dataset. As KITTI does not allow more than three submissions and the labels for the testing set are not released, we follow [15] to divide the official training set of KITTI 3D into 3712 training images and 3769 evaluation images. (We use _train/val_ to represent
Figure 3: PIoU score versus iteration for (left) 4-sided simulated polygons and (right) 8-sided unrestricted simulated polygons.
this split.) Three classes of objects, cars, pedestrians, and cyclists, are used for training. During training, only left-camera images and ground-truth labels are used (calibration matrices are used only to produce pixel-space 3D keypoint projections.)
### Evaluation Metrics
The objects in KITTI are categorized into easy, moderate, and hard according to their height, truncation level, and occlusion ratio. We evaluate the results for each category using the same evaluation guidelines as KITTI. As suggested by KITTI in 2019, we use 40 recall positions to calculate the average precision (AP) of results. KITTI sets different 3D bounding box overlap thresholds for cars (70%) and cyclists (50%). We evaluate cars at 70%, 50% and 30% \(AP_{3D}\) and cyclists at 50% and 25% \(AP_{3D}\).
### Ensuring convexity
As our polygon IoU loss is accurate when the two polygons are convex, we encourage the predicted keypoints to form convex polygons during training. The ground truth keypoints are projected from 3D cuboids, so it is guaranteed that they form two convex quadrilaterals. We ensure that the initial predictions of the model are convex by adding a small offset (4 corner points of a square centered at the origin) to the predicted keypoints at initialization. During our training and testing, we find that a convex initial prediction is sufficient to make the PIoU loss converge.
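A minimal sketch of this initialization trick is shown below; the half-width of the square offset is our own illustrative value:

```python
import torch

# Corners of a small axis-aligned square centered at the origin; the 2-pixel
# half-width is an illustrative choice, not taken from the paper.
SQUARE = 2.0 * torch.tensor([[1., 1.], [1., -1.], [-1., -1.], [-1., 1.]])

def convex_init(raw_kpts: torch.Tensor) -> torch.Tensor:
    """raw_kpts: [B, 4, 2] predicted keypoints for one face of the projected
    3D box; adding the square offsets makes the initial predictions convex."""
    return raw_kpts + SQUARE
```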
### RTM3D
#### 5.4.1 Experiment Settings
We modify an unofficial implementation [5] of RTM3D to do our experiments. Unlike RTM3D, we do not assume a Gaussian kernel for keypoints. We solve a least-squares problem to obtain the best-fitting 3D bounding boxes from the predicted keypoints, 3D dimension, and orientation. For comparison, we add our PIoU loss to regress the 2D projected keypoints of 3D bounding boxes. We use the same training settings for both the baseline and modified model. We use an Adam optimizer with an initial learning rate of 0.0002 which decreases by a factor of 0.1 at epochs 150 and 180. The weight decay is 1e-6. We train for a total of 200 epochs with a batch size of 16. The experiments run on Ubuntu 20.04 and RTX A5000. It took 14.3 hours to train the model with PIoU loss and 11.5 hours to train the model with L1 loss. Adding PIoU loss computation to the RTM3D model only slightly increases the training time.
#### 5.4.2 The Least-Squares Problem
We define the least-squares problem for finding the best-fitting 3D bounding boxes similarly to the definition in RTM3D. For each predicted object, \(\hat{P}\) represents the eight corner points of the 3D bounding box on the 2D image plane. \(\hat{\Theta},\hat{D}=[\hat{h},\hat{w},\hat{l}]^{T},\hat{d}\) represent the predicted orientation, dimensions, and depth of the 3D bounding box. \(T=[x,z,d]^{T}\) represents the position of the 3D bounding box, where \(d\) represents the horizontal depth. \(f(D,T,\Theta)\) maps the 3D bounding box to the 8 corner points in the image plane. We set \(\alpha_{P}=0.05,\alpha_{T}=1,\alpha_{D}=1,\alpha_{d}=1\) in
Figure 4: Predicted (red) and ground truth (green) 3D bounding boxes from the train/val split of KITTI dataset from MonoCon object detection model trained with L1 loss (left) and PIoU+L1 loss (right). Top images show the predicted bounding boxes in 3D space, and bottom images show the corresponding predicted footprints in a birds-eye view.
our experiments. The least-squares problem is defined as
\[\min_{D,T,\Theta} \alpha_{P}\|f(D,T,\Theta)-\hat{P}\|^{2}+\alpha_{T}\|\Theta-\hat{ \Theta}\|^{2} \tag{7}\] \[+ \alpha_{D}\|D-\hat{D}\|^{2}+\alpha_{d}\|d-\hat{d}\|^{2}\]
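One possible way to set up this refinement with an off-the-shelf solver is sketched below. The box parametrization, corner ordering, and camera convention (identifying the horizontal depth \(d\) with the camera-frame \(z\) coordinate) are illustrative assumptions rather than the exact formulation used in our code:

```python
import numpy as np
from scipy.optimize import least_squares

def project_box(dims, center, yaw, K):
    """Project the 8 corners of a 3D box (camera frame, y pointing down) to
    pixels.  dims = (h, w, l), center = bottom-face center, K = 3x3 intrinsics."""
    h, w, l = dims
    xs = np.array([ l,  l, -l, -l,  l,  l, -l, -l]) / 2.0
    ys = np.array([0., 0., 0., 0., -h, -h, -h, -h])
    zs = np.array([ w, -w, -w,  w,  w, -w, -w,  w]) / 2.0
    R = np.array([[ np.cos(yaw), 0., np.sin(yaw)],
                  [ 0.,          1., 0.         ],
                  [-np.sin(yaw), 0., np.cos(yaw)]])
    pts = R @ np.vstack([xs, ys, zs]) + np.asarray(center).reshape(3, 1)
    uv = K @ pts
    return (uv[:2] / uv[2]).T                      # (8, 2) pixel coordinates

def residuals(params, P_hat, D_hat, theta_hat, d_hat, K,
              a_P=0.05, a_T=1.0, a_D=1.0, a_d=1.0):
    """Weighted residual vector whose squared norm is the objective (7)."""
    h, w, l, x, y, z, yaw = params
    proj = project_box((h, w, l), (x, y, z), yaw, K)
    return np.concatenate([
        np.sqrt(a_P) * (proj - P_hat).ravel(),     # keypoint term
        np.sqrt(a_T) * np.atleast_1d(yaw - theta_hat),
        np.sqrt(a_D) * (np.array([h, w, l]) - D_hat),
        np.sqrt(a_d) * np.atleast_1d(z - d_hat),   # depth term
    ])

# x0 comes from the network heads, e.g.
# fit = least_squares(residuals, x0, args=(P_hat, D_hat, theta_hat, d_hat, K))
```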
### MonoCon
#### 5.5.1 Experiment Settings
We use an unofficial implementation [13] of MonoCon. For comparison, we add our polygon IoU loss to regress the 2D projected keypoints of 3D bounding boxes. We use the same training settings for both the baseline and modified model. The batch size is 8, and the total epoch number is 240. Following the original paper, we use an AdamW optimizer with a weight decay of 0.00001. We use a cyclic learning rate scheduler with an initial learning rate of 0.000225 which first gradually increases to 0.00225 with a step ratio of 0.4 and then gradually drops to \(2.25\times 10^{-8}\). The experiments run on Ubuntu 20.04 and RTX A5000. Figure 4 shows predicted outputs from the implemented model.
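One way to reproduce such a schedule with a stock PyTorch scheduler is sketched below; the mapping of the quoted numbers onto `OneCycleLR` is our own and is not necessarily the scheduler used in the MonoCon codebase:

```python
import torch

model = torch.nn.Linear(10, 10)                 # stand-in for the detector
opt = torch.optim.AdamW(model.parameters(), lr=2.25e-4, weight_decay=1e-5)
sched = torch.optim.lr_scheduler.OneCycleLR(
    opt,
    max_lr=2.25e-3,            # peak learning rate
    total_steps=240 * 464,     # 240 epochs x (3712 images / batch size 8)
    pct_start=0.4,             # fraction of steps ramping up ("step ratio" 0.4)
    div_factor=10,             # initial lr = max_lr / 10 = 2.25e-4
    final_div_factor=1e4,      # final lr = initial lr / 1e4 = 2.25e-8
)
```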
### MonoRCNN
#### 5.6.1 Experiment Settings
We modify the official code of MonoRCNN to incorporate our polygon IoU loss and use the original code to train the baseline. For comparison, we add PIoU loss to regress the 2D projected keypoints of 3D bounding boxes. We use the same training settings for both models. We train for 60000 iterations with a batch size of 8. The initial learning rate is 0.01 and is reduced by 0.1 after 30k, 40k, and 50k iterations. The experiments run on Ubuntu 20.04 and RTX A5000.
## 6 Results

### Results on KITTI Cyclists
Table 2 compares the AP scores of incorporating PIoU loss versus L1 loss alone on KITTI 3D cyclists. For RTM3D, the accuracy is very low, and therefore the change in prediction accuracy between the loss functions is negligible when the IoU threshold is 0.5, but PIoU+L1 loss significantly improves performance over L1 loss alone for an IoU threshold of 0.25 (+1.65% \(AP_{25}\) averaged across all difficulties.)
MonoCon performance is less notable on cyclists than on cars. Still, the performance of a model trained with PIoU+L1 loss is better than L1 alone at the more stringent IoU threshold of 0.5 (+0.56% \(AP_{50}\).)
For MonoRCNN, adding PIoU loss strictly improves cyclist detection performance. The AP scores in all IoU thresholds and difficulty levels increase by around 1-3%. Here again, the largest improvements in prediction accuracy occur for the easy subsets of the data (+1.26% \(AP_{50}\) and +3.07% \(AP_{25}\)).
### Model Performance at Different Epochs
Lastly, we test the convergence speed of PIoU + L1 loss versus the baseline L1 loss. Table 3 shows the AP scores for RTM3D on KITTI cars at different epochs. When using PIoU loss, the AP scores early in training are significantly better than the baseline model (e.g. +2.75% \(AP_{50}\) on easy subset). (A similar trend is visible for \(AP_{70}\) but this table is omitted for brevity). Table 4 similarly shows PIoU+L1 loss results in faster model convergence than the L1 baseline at an IoU threshold of 0.7 for KITTI cars. After 80 epochs, PIoU results in at least +5% \(AP_{70}\) for each difficulty, with the largest performance improvement on the easy subset (+11.37%). The results indicate strongly that PIoU+L1 loss converges faster than L1 loss alone.
## 7 Conclusions and Future Work
In this work, we propose an efficient way to calculate IoU between convex polygons with irregular shapes. The batched implementation of the proposed PIoU loss in PyTorch is differentiable and can be used as the loss function for large object detection models. We show that PIoU loss can speed up training convergence, both for simulated polygons and on the KITTI 3D detection dataset. We also show that using PIoU+L1 loss can increase the AP scores over L1 loss alone. Improvements vary when we incorporate PIoU loss into different 3D object detection models, with the CenterNet-based models benefiting more than the R-CNN-based model and a best result of +1.64% \(AP_{70}\) for MonoCon on KITTI cars. The most notable performance gains occur for highly visible vehicles when a strict IoU metric is required, meaning PIoU is especially helpful in transforming "good" predictions to "great" predictions.
This work tests PIoU loss on 3 different 3D object detection models for a benchmark dataset with relatively constrained (4-sided) polygons. In future work, we would like to incorporate the loss function into more difficult cases where rectangular bounding boxes are not sufficiently expressive and a detection formulation is preferred to a segmentation-based model formulation, providing an "in-between" for expressive detections of middling complexity. The results of PIoU in this work are quite promising, and we hope that future work can test the loss formulation on a larger sampling of methods and datasets, and in combination with other loss formulations.
## Acknowledgements
This work is supported by the National Science Foundation (NSF) under Grant No. 2135579, the NSF Graduate Research Fellowship Grant No. DGE-1937963 and the USDOT Dwight D. Eisenhower Fellowship program under Grant No. 693JJ32245006 (Gloudemans) and No. 693JJ322NF5201 (Wang). This material is based upon work supported by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy (ERE) award number CID DE-EE0008872. This material is based upon work supported by the CMAQ award number TN20210003. The views expressed herein do not necessarily represent the views of the U.S. Department of Energy, or the United States Government.
|
2309.05713 | GA-NIFS: Early-stage feedback in a heavily obscured AGN at $z=4.76$ | Dust-obscured galaxies are thought to represent an early evolutionary phase
of massive galaxies in which the active galactic nucleus (AGN) is still deeply
buried in significant amounts of dusty material and its emission is strongly
suppressed. The unprecedented sensitivity of the James Webb Space Telescope
enables us for the first time to detect the rest-frame optical emission of
heavily obscured AGN and unveil the properties of the hidden accreting
super-massive black holes (BHs). In this work, we present the JWST/NIRSpec IFS
data of ALESS073.1, a massive, dusty, star-forming galaxy at $z = 4.76$ hosting
an AGN at its center. The detection of a very broad $H_\alpha$ emission
associated with the Broad Line Region (BLR) confirms the presence of a BH
($\log(M_{BH}/M_\odot)>8.7$) accreting at less than 15\% of its Eddington limit
and classifies the target as a Type 1 AGN. The rest-frame optical emission
lines also reveal a fast ionized gas outflow marginally resolved in the galaxy
center. The high sensitivity of NIRSpec allows us to perform the kinematic
analysis of the narrow H$\alpha$ component which indicates that the warm
ionized gas velocity field is consistent with disk rotation. We also find that,
in the innermost nuclear regions ($< 1.5$ kpc), the intrinsic velocity
dispersion of the disk reaches $\sim 150$ km/s, $\sim 2-3$ times higher than
the velocity dispersion inferred from the [CII] 158$\mu$m line tracing mostly
cold gas. Since, at large radii, the velocity dispersion of the warm and cold
gas are comparable, we conclude that the outflows are injecting turbulence in
the warm ionized gas in the central region, but they are not sufficiently
powerful to disrupt the dense gas and quench star formation. These findings
support the scenario that dust-obscured galaxies represent the evolutionary
stage preceding the unobscured quasar when all gas and dust are removed from
the host. | Eleonora Parlanti, Stefano Carniani, Hannah Übler, Giacomo Venturi, Chiara Circosta, Francesco D'Eugenio, Santiago Arribas, Andrew J. Bunker, Stéphane Charlot, Nora Lützgendorf, Roberto Maiolino, Michele Perna, Bruno Rodríguez Del Pino, Chris J. Willott, Torsten Böker, Alex J. Cameron, Jacopo Chevallard, Giovanni Cresci, Gareth C. Jones, Nimisha Kumari, Isabella Lamperti, Jan Scholtz | 2023-09-11T18:00:04Z | http://arxiv.org/abs/2309.05713v2 | # GA-NIFS: Early-stage feedback in a heavily obscured AGN at \(z=4.76\)
###### Abstract
Context:
Dust-obscured galaxies are thought to represent an early evolutionary phase of massive galaxies in which the active galactic nucleus (AGN) is still deeply buried in significant amounts of dusty material and its emission is strongly suppressed. The unprecedented sensitivity of the _James Webb Space Telescope_ (JWST) enables us for the first time to detect the rest-frame optical emission of heavily obscured AGN and unveil the properties of the hidden accreting super-massive black holes (BHs). In this work, we present the JWST/NIRSpec Integral Field Spectroscopy (IFS) data of ALESSO73.1, a massive (log(\(M_{\star}/M_{\odot}\))\(=10.7\)) dusty, star-forming galaxy at \(z=4.755\) hosting an AGN at its center. The detection of a very broad (\(>9000\) km s\({}^{-1}\)) H\(\alpha\) emission associated with the Broad Line Region (BLR) confirms the presence of a BH (log(\(M_{\rm BH}/M_{\odot}\)) \(>8.7\)) accreting at less than 15% of its Eddington limit. The identification of the BLR classifies the target as a Type 1 AGN despite the observed high column density of \(N_{\rm H}\sim 10^{24}\) cm\({}^{-2}\). The rest-frame optical emission lines also reveal a fast (\(\sim 1700\) km s\({}^{-1}\)) ionized gas outflow marginally resolved in the galaxy center. The high sensitivity of NIRSpec allows us to perform the kinematic analysis of the narrow H\(\alpha\) component which indicates that the warm ionized gas velocity field is consistent with disk rotation. Interestingly, we find that in the innermost nuclear regions (\(<1.5\) kpc), the intrinsic velocity dispersion of the disk reaches \(\sim 150\) km s\({}^{-1}\), \(\sim 2-3\) times higher than the velocity dispersion inferred from the [Cii]158\(\mu\)m line tracing mostly cold gas. Since, at large radii, the velocity dispersion of the warm and cold gas are comparable, we conclude that the outflows are injecting turbulence in the warm ionized gas in the central region, but they are not sufficiently powerful to disrupt the dense gas and quench star formation. These findings support the scenario that dust-obscured galaxies represent the evolutionary stage preceding the unobscured quasar when all gas and dust are removed from the host.
## 1 Introduction
Supermassive black holes (BHs) are thought to reside in the center of the majority of local massive galaxies (Hopkins et al., 2008). A symbiotic connection between the growth of BHs and their hosts is suggested by the observed relations between the BH masses and the galaxy properties (e.g. Kormendy & Ho, 2013; Heckman & Best, 2014). Specifically, observations of local galaxies reveal tight relations between the BH mass and the stellar velocity dispersion and the mass and luminosity of the galactic bulge. These relations hold up throughout several orders of magnitude in black hole masses and galaxy properties and up to high redshift with different normalization (Carraro et al., 2020; Suh et al., 2020).
During the accretion phase, BHs are revealed as active galactic nuclei (AGN) due to the radiation emitted from radio to X-ray wavelengths by the accretion disk (Padovani et al., 2017). X-ray surveys show that the most luminous and massive AGN were most numerous at \(z>1\) and, in particular, observations find the cosmic black hole accretion rate density peaks at \(z\sim 2\), like the cosmic star formation rate density (Shankar et al., 2009; Delvecchio et al., 2014; Madau & Dickinson, 2014). Both cosmic star formation rate and BH accretion rate are driven by the availability of cold gas in the system (Hopkins et al., 2008). The tight relations between the BH and host galaxy properties and the similar evolution of activity with redshift suggests that BHs and the galaxies they inhabit have undergone a common evolutionary process (Kormendy & Ho, 2013).
The origin and the mechanisms that regulate the co-evolution of BHs with their host galaxy are still unclear. Massive galactic outflows driven by the radiation emitted by the most luminous AGN are considered a fundamental physical process in the evolution of galaxies. They are believed to regulate star formation (e.g., Fabian, 2012; Zubovas & King, 2014; Muratov et al., 2015; Nelson et al., 2019) and reduce the number of galaxies at the high-mass end of the stellar mass function (Benson et al., 2003; Puchwein & Springel, 2013). Such fast outflows can potentially accelerate a substantial mass of gas beyond the escape velocity of the local gravitational potential, inject turbulence in the ISM, and/or heat the gas in the galaxy, in this way damping or even halting star formation in their host galaxies ("Negative feedback"; e.g., Fabian, 2012; Harrison et al., 2017). On the other hand, AGN feedback has been observed to enhance star formation ("Positive Feedback"; e.g., Shin et al., 2019), with stars actively forming in outflowing material (e.g., Maiolino et al., 2017; Gallagher et al., 2019). It is thus fundamental to investigate the feedback mechanism over the various phases of the evolution of the BHs and galaxies to understand their role in the co-evolution process.
Sub-millimeter galaxies (SMGs) are a class of high-redshift (\(z>0.1\)) galaxies mostly characterized by a high luminosity
in the FIR continuum emission (\(L_{\rm FIR}>10^{11}\) L\({}_{\odot}\)), high star-formation rates (SFR \(\sim 10^{3}\) M\({}_{\odot}\) yr\({}^{-1}\)), and high dust content (\(M_{\rm dust}=10^{8-10}\); Santini et al. 2010). Although SMGs are a rare cosmological population of galaxies, they account for \(\sim 20\%\) of the SFR cosmic density at \(1<z<5\) (Swinbank et al. 2014). These galaxies are also thought to be the precursors of local massive and quiescent early-type galaxies (ETG) that host BHs with masses \(M_{\rm BH}>10^{8}\) M\({}_{\odot}\) in their centers (Sanders et al. 1988; Swinbank et al. 2004; Hopkins et al. 2008; Toft et al. 2014). Quiescent galaxies are already common at \(z\sim 2-3\), and are observed up to \(z\sim 4.6\) (Carnall et al. 2023). The evolutionary path that connects SMGs and quiescent galaxies is thought to be driven by the interaction between the galaxy and the BH (Sanders et al. 1988; Hopkins et al. 2008). The current theoretical scenarios expect that the feeding of galaxies with gas from the cosmic web or via mergers triggers both episodes of intense star formation and BH accretion. This process is thought to be self-regulated. In particular, the feedback from the accreting BHs should be able to balance the star formation (SF) in the host galaxy and the accretion of gas on the BH itself due to powerful radiation-driven outflows that are able to sweep out the host galaxy gas reservoir (Debuhr et al. 2012) and halt the accretion of gas (Peng et al. 2015). These outflows are thought to become important when the accretion rate on the BH reaches the Eddington rate (King & Pounds 2015). The dust in the system will also be swept out, allowing the radiation coming from the BH accretion zone to be detected and identified as a luminous quasar (QSO) (Sanders et al. 1988; Hopkins et al. 2008). The removal and heating of gas due to the AGN feedback halts star formation in the system, turning the host into a "red and dead" galaxy. This is consistent with what is observed in local ETGs, which are dominated by old stellar populations, with formation redshifts of \(z>2\) (Thomas et al. 2005; McDermid et al. 2015). Several studies also find galaxies whose gas content has been depleted on short timescales (Bezanson et al. 2019; Williams et al. 2021). Moreover, recent observations of quiescent galaxies at \(z>3\) indicate that fast quenching processes are already in place in the first 2 Gyr of the Universe (Glazebrook et al. 2017; Valentino et al. 2020; Carnall et al. 2023). In conclusion, it is crucial to study the SMG population to assess whether they really represent the evolutionary stage preceding the active quasar phase and, thus, are the progenitors of local ETGs or not. Moreover, SMGs enable us to study the first phases of the BH feedback process outlined above.
The ISM properties of the high-redshift SMG population have been mainly studied through millimeter observations, which do not allow us to determine whether a BH is hidden at the center of the galaxies. Up until the advent of the James Webb Space Telescope (JWST), the rest-frame optical emission lines from galaxies at \(z>4\) were extremely difficult to access from ground-based telescopes because the lines are redshifted to wavelengths that are outside the atmospheric windows. Thanks to its unique sensitivity, the NIRSpec instrument (Jakobsen et al. 2022) on board JWST has already proven its capabilities to detect faint \(z>4\) galaxies and identify serendipitous AGN by observing the emission that arises from the broad line regions (BLRs) surrounding super-massive BHs (Kocevski et al. 2023; Maiolino et al. 2023; Matthee et al. 2023; Ubler et al. 2023).
By using the Integral Field Spectrograph (IFS) mode of NIRSpec (Boker et al. 2022) we can also exploit the rest-frame optical lines at high redshift to spatially resolve the emission from the ISM and stellar population and determine the impact of the feedback mechanism on the host galaxy (Cresci et al. 2023; Marshall et al. 2023; Perna et al. 2023; Ubler et al. 2023).
In this work, we study the properties of ALESS073.1 (from the LABOCA ECDFS submillimetre survey LESS J033229.3-275619), a massive (\(M_{\star}\sim 10^{11}\) M\({}_{\odot}\); Circosta et al. 2019) SMG at \(z=4.755\) showing spectral features consistent with a Compton thick AGN (\(N_{\rm H}=17.0^{+1.6}_{-1.8}\times 10^{23}\) cm\({}^{-2}\)) (Vanzella et al. 2009; Gilli et al. 2011; Giallongo et al. 2019). ALESS073.1 has a plethora of observations targeting both rest-frame far-infrared (Coppin et al. 2009; De Breuck et al. 2011; Nagao et al. 2012; De Breuck et al. 2014; D'Amato et al. 2020; Zhao et al. 2020; Lelli et al. 2021), UV (Vanzella et al. 2006, 2009; Coppin et al. 2009), and X-ray emission (Tozzi et al. 2006; Gilli et al. 2011; Vito et al. 2013; Gilli et al. 2014; Circosta et al. 2019; Giallongo et al. 2019). The bright rest-frame far-infrared emission indicates a massive burst of star formation with SFR of \(\sim 1000\) M\({}_{\odot}\) yr\({}^{-1}\) (Coppin et al. 2009; Gilli et al. 2014; Swinbank et al. 2014; Circosta et al. 2019) and a large dust content (\(M_{\rm dust}\sim 10^{9}\) M\({}_{\odot}\); Swinbank et al. 2014; da Cunha et al. 2015). This massive, dusty, highly star-forming SMG offers us a great opportunity to study the interplay between AGN and host galaxy at their first stages when the AGN accretion is creating a weak outflow that is starting to reveal the central AGN. We use NIRSpec observations to study the properties of the BH and investigate the impact of AGN-driven outflows (if any) on the host galaxy.
This work is structured as follows. In section 2, we describe the target and the new JWST IFS observations. In section 3, we present the analysis of the spatially-integrated and single-spaxel spectra. In section 4, we investigate the primary excitation mechanism of the gas. In Section 5 we compute the black hole mass and compare the position of ALESS073.1 on the \(M_{\star}-M_{\rm BH}\) plane with other low and high redshift AGN and QSOs. In section 6, we study the properties and energetics of the outflow. In section 7, we perform a detailed kinematic analysis of the host galaxy. We discuss our results in section 8, and we draw our conclusions in section 9. In this work, we adopt the cosmological parameters from Planck Collaboration et al. (2016): \(H_{0}\) = 67.7 km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{m}\) = 0.307, and \(\Omega_{\Lambda}\) = 0.691. With this cosmology, 0.1\({}^{\prime\prime}\) = 0.66 kpc at \(z=4.755\).
## 2 Observations
### Target
ALESS073.1 is part of the Extended Chandra Deep Field South (Lehmer et al. 2005). It was identified as a strong sub-millimeter (Coppin et al. 2009; De Breuck et al. 2011) and X-ray source (Gilli et al. 2011, 2014) as well as a Ly\(\alpha\) emitter (Vanzella et al. 2006, 2009). The presence of a narrow Ly\(\alpha\), as well as a broader emission of Nv\(\lambda\)1240 Å (FWHM \(\sim 2000\) km s\({}^{-1}\)), and a candidate line emission of Civ\(\lambda\)1549 Å (Vanzella et al. 2006, 2009; Coppin et al. 2009) identify the target as an AGN. This was furthermore confirmed by the detection of X-ray emission which resulted in an estimation of the column density of \(N_{\rm H}=17.0^{+11.7}_{-6.8}\times 10^{23}\) cm\({}^{-2}\) (Circosta et al. 2019) that implies the presence of a Compton thick AGN (Gilli et al. 2011; Vito et al. 2013; Gilli et al. 2014; Circosta et al. 2019). The estimated intrinsic luminosity in the 2-10 keV band is 1.2\(\times\) 10\({}^{44}\) erg s\({}^{-1}\) (Luo et al. 2017) with the AGN bolometric luminosity of \(2.66\pm 0.80\times 10^{12}\) L\({}_{\odot}\) estimated via SED fitting (Circosta et al. 2019), classifying the target as a low-luminosity obscured quasar.
The kinematics of the host galaxy was extensively studied through high-angular resolution observations of the [Cii]158\(\mu\)m emission line, revealing a massive, dusty disk with ordered rotation and low levels of turbulence (random, non-circular motion of the gas) (De Breuck et al. 2014; Lelli et al. 2021). The
kinematics indicate the presence of a stellar bulge, which is a sign of an already-evolved galaxy. The fast evolution of the galaxy is also supported by metallicity measurements computed by exploiting the ratio between the FIR lines [Cii]158\(\mu\)m and [Nii]205\(\mu\)m, which yield an estimated gas-phase metallicity close to solar, although we note that the measurements have large uncertainties and this metallicity tracer is less reliable than optical diagnostics (Nagao et al. 2012; De Breuck et al. 2014). Gilli et al. (2014) suggest the presence of an outflow because of the observed \(\sim 400\) km s\({}^{-1}\) velocity shift between the Ly\(\alpha\) emission and other observed lines at sub-mm wavelengths ([Cii], CO(2-1), [CI], CO(7-6), [Nii]\(\lambda\)205 \(\mu\)m; De Breuck et al. 2011; Nagao et al. 2012; D'Amato et al. 2020).
### JWST observations and data reduction
The target was observed on the 12\({}^{\rm th}\) of September 2022, as part of the NIRSpec GTO "Galaxy Assembly with NIRSpec IFS survey (GA-NIFS)", under the program 1216 "Integral Field Spectroscopy in GOODS-S" (PI: Nora Lutzgendorf). The observations were carried out using a medium cycling pattern between four dithers to achieve a total exposure time of \(\sim 1\)h with both G235H/F170LP and G395H/F290LP gratings/filters combinations to target the emission of the galaxy from 1.7 to 5.2 \(\mu\)m at high spectral resolution (on average R\(\sim\)2700). The raw data were downloaded from the MAST archive and then processed with a modified version of the JWST Science Calibration Pipeline version 1.8.2 with the CRDS context "jwst_1068.pmap". The count-rate maps were created by applying detector-level corrections with the _Detector1Pipeline_ module of the pipeline. The calibration was performed by applying the _Calwebb_spec2_ stage of the pipeline. Finally, the cube was created by adding the individual calibrated images with a _drizzle_ weighting and a spaxel size of 0.05\({}^{\prime\prime}\) using the _Calwebb_spec3_ step of the pipeline. Several corrections were made to the pipeline steps to allow for better data quality and to correct known bugs in the pipeline. These corrections are presented in detail in Perna et al. (2023), but here we mention only the major changes. The \(1/f\) correlated noise was subtracted from the count-rate images. The rejection of outliers was performed by using an algorithm similar to lacosmic (van Dokkum 2001) that removes the outliers on individual 2-d exposures before combining them to create the final data cube.
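For reference, a minimal sketch of the three standard calibration stages described above is given below; the file names are hypothetical placeholders, and the custom corrections of Perna et al. (2023) (1/f noise subtraction, improved outlier rejection) are not reproduced here.

```python
# Minimal sketch of the standard JWST calibration stages (jwst package ~1.8);
# file names are hypothetical placeholders, custom corrections are omitted.
import os
from jwst.pipeline import Detector1Pipeline, Spec2Pipeline, Spec3Pipeline

os.environ["CRDS_CONTEXT"] = "jwst_1068.pmap"   # CRDS context quoted in the text

# Stage 1: detector-level corrections, ramps to count-rate maps
Detector1Pipeline.call("jw01216_nrs1_uncal.fits", save_results=True)

# Stage 2: wavelength and flux calibration of each exposure
Spec2Pipeline.call("jw01216_nrs1_rate.fits", save_results=True)

# Stage 3: combine the four dithers into the final cube (drizzle weighting,
# 0.05" spaxels), driven by an association file listing the calibrated exposures
Spec3Pipeline.call("jw01216_spec3_asn.json", save_results=True)
```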
### ALMA observation and data reduction
In this work, we also use the Atacama Large Millimeter Array (ALMA) high-resolution observation of the [Cii] emission line. We retrieve the [Cii] raw data from the ALMA archive (2017.1.01471.S, PI: F. Lelli), and then we use the pipeline scripts included in the datasets to calibrate the visibilities with the Common Astronomy Software Application CASA (McMullin et al. 2007). Using the CASA task tclean, we perform the cleaning on the calibrated visibilities with a natural weighting scheme and a spaxel scale of 0.05\({}^{\prime\prime}\) to create the final datacube. The resulting datacube has a beam size of 0.17\({}^{\prime\prime}\)\(\times\) 0.14\({}^{\prime\prime}\).
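As an illustration of the imaging step only, a hedged CASA sketch follows; the measurement set name, image size, and cleaning depth are placeholders rather than the values actually used.

```python
# To be run inside CASA. Illustrative sketch: vis/imagename/niter/threshold are
# placeholders; the weighting scheme and cell size follow the text.
tclean(vis='ALESS073.1_calibrated.ms',
       imagename='ALESS073.1_CII_cube',
       specmode='cube',            # spectral-line imaging of the [CII] line
       weighting='natural',        # natural weighting, as in the text
       cell='0.05arcsec',          # 0.05" pixels
       imsize=256,
       niter=10000,
       threshold='0.1mJy',
       interactive=False)
```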
## 3 Data analysis
### Spectral fit of the central region
We first analyzed the emission from the nuclear region in which we expect to find emission features associated with the hidden AGN. Figure 1 illustrates the G395H/F290LP spectrum extracted from a circular aperture of radius 0.15\({}^{\prime\prime}\) and centered on the central region. The errors on the spectrum were initially computed by summing in quadrature the noise from the error extension present in the data cube ("ERR") in the spaxels of the selected region. To take into account the spatial correlations of the noise due to the PSF size, we scaled the errors to match the standard deviation in line-free regions of the spectrum (see also Ubler et al. 2023). In the spectrum, we clearly see the narrow (\(FWHM<100\) Å) emission lines of H\(\alpha\) and of the [Nii] doublet. In addition to the narrow components, the presence of a broad (\(FWHM>1000\) Å) line, associated only with the permitted H\(\alpha\) line, is evident and is consistent with emission from the Broad Line Region (BLR). This confirms the presence of an AGN in ALESS073.1 and identifies the target as a Type 1 AGN, in contrast to what was expected due to the high value of \(N_{\rm H}\).
To model the emission in the central region, we performed a least-squares fit by modeling the emission-line spectrum as a sum of Gaussian profiles. The H\(\alpha\) line profile required the addition of a broad (\(FWHM>5000\) km s\({}^{-1}\)) Gaussian component to reproduce the BLR emission. The continuum emission was fitted with a linear function. The narrow components of both H\(\alpha\) and [Nii] were modeled with two Gaussian profiles each to take into account the presence of ionized outflows, because a single Gaussian component was not sufficient to reproduce their asymmetric profiles. To disentangle the outflow (broad) and galaxy (narrow) components of H\(\alpha\) and of the [Nii] doublet, we allowed the width of the narrow component to vary between \(0<\sigma<250\) km s\({}^{-1}\), while the outflow line width was free to vary between 250 km s\({}^{-1}\)\(<\sigma<1000\) km s\({}^{-1}\).
For each Gaussian component, we tied the centroid and line width of the [Nii] doublet to those of H\(\alpha\). The two emission lines of the [Nii] doublet, originating from the same upper level, were fitted with an intensity ratio I(6584)/I(6548) fixed at \(\sim\)2.94 (Storey & Zeippen 2000). Finally, the model spectrum, obtained by combining all components, was convolved with a Gaussian kernel with a dispersion of 49 km s\({}^{-1}\) to reproduce the line spread function of the instrument at the H\(\alpha\) wavelength. In Figure 1 we show the best-fit profiles for each component with dashed lines, while the best-fit values for the fluxes and the FWHMs are reported in Table 1.
We also analyzed the spectrum from the same aperture in the G235H/F170LP cube, which covers the rest-frame wavelengths from \(\sim 2900\)Å to \(\sim 5500\)Å. The spectrum around the H\(\beta\) - [Oiii] complex is shown in Figure 2. We did not identify any clear emission line, except for a tentative detection of the [Oiii]\(\lambda\)5007Å with a \(S/N=2\); the low S/N of the line is possibly due to high dust extinction at bluer wavelengths. We performed a single-Gaussian fit to reproduce the [Oiii]\(\lambda\)5007Å emission because the low signal-to-noise ratio (S/N) of the spectrum does not allow us to perform a multiple-Gaussian fit as done for the H\(\alpha\) complex. We left the width of the line free to vary between 20 and 500 km s\({}^{-1}\) to allow for the possible presence of a broader line tracing an outflow, but the results of the fit are comparable with those obtained for the H\(\alpha\) narrow component. FWHM and flux of [Oiii] are reported in Table 1 together with the upper limit on the H\(\beta\) flux derived by assuming an FWHM as large as that of H\(\alpha\).
By using the ratio between the H\(\alpha\) flux and the upper limit on the H\(\beta\) flux, we measure a lower limit on the Balmer decrement of \(F_{\rm H_{\alpha,narrow+outflow}}/F_{\rm H\beta}>3.6\). This value is higher than the theoretical value for star-forming galaxies assuming case B recombination, \(F_{\rm H\alpha}/F_{\rm H\beta}=2.86\) (Osterbrock & Ferland 2006), implying dust absorption, as expected for the SMG population. Assuming a Calzetti et al. (2000) curve we estimate a lower limit on
the extinction \(A_{V}>0.77\), but we expect much higher extinction as found for other SMGs (\(A_{V}\sim 4\); Alvarez-Marquez et al. 2023).
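As a check of this number, the conversion from the Balmer-decrement lower limit to \(A_{V}\) can be written explicitly; the sketch below assumes the Calzetti et al. (2000) curve evaluated at the Balmer lines, \(k({\rm H}\beta)\simeq 4.60\) and \(k({\rm H}\alpha)\simeq 3.33\), with \(R_{V}=4.05\) and an intrinsic ratio of 2.86.

```python
import numpy as np

# Calzetti et al. (2000) attenuation curve at the Balmer lines (assumed values)
k_Hbeta, k_Halpha, R_V = 4.60, 3.33, 4.05
ratio_obs = 3.6    # lower limit on F(Halpha)/F(Hbeta) from the text
ratio_int = 2.86   # case B recombination value

# Colour excess of the gas and V-band attenuation
E_BV = 2.5 / (k_Hbeta - k_Halpha) * np.log10(ratio_obs / ratio_int)
A_V = R_V * E_BV
print(f"E(B-V) > {E_BV:.2f} mag, A_V > {A_V:.2f} mag")   # ~0.2 and ~0.8 mag
```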
### Spatially resolved emission
We performed a spaxel-by-spaxel fitting of the G395H/F290LP data cube by exploiting the model adopted to reproduce the spectrum from the central region. We allowed the spectral components to vary except for the BLR H\(\alpha\), which is spatially unresolved, and thus its centroid and FWHM were fixed to the best-fit results obtained from the analysis of the circular aperture presented in the previous section.
For each spaxel, two alternative models were adopted for the narrow H\(\alpha\) and [Nii] profiles: one with the outflow component and the second without it. We then selected the best-suited model for each spaxel based on the Bayesian information criterion\({}^{1}\) (BIC) test (Liddle 2007). For each spaxel, we estimated the BIC, and in those cases where the difference between BIC (without outflow) and BIC (with outflow) was larger than 2, we selected the model with two components, since a \(\Delta\)BIC\(>2\) is considered positive evidence in favor of the model with the lower BIC value (Kass & Raftery 1995). In the other cases, we adopted the model with one component. We note that selecting the model with the lower BIC value allows us to select the best-fit model with the highest statistical significance without overfitting the data.
Footnote 1: BIC\(=\chi^{2}+k\ln N\), where \(k\) is the number of free parameters in the fit and \(N\) the number of data points used in the fit; we assumed Gaussian noise.
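The per-spaxel model selection described above reduces to a simple comparison; a minimal sketch (with placeholder \(\chi^{2}\) values) is:

```python
import numpy as np

def bic(chi2, n_params, n_points):
    """BIC = chi^2 + k ln N, assuming Gaussian noise (see footnote 1)."""
    return chi2 + n_params * np.log(n_points)

# Hypothetical per-spaxel fit results
chi2_1comp, k_1comp = 210.0, 7    # narrow components only
chi2_2comp, k_2comp = 180.0, 13   # narrow + outflow components
n_points = 150                    # spectral channels used in the fit

delta_bic = bic(chi2_1comp, k_1comp, n_points) - bic(chi2_2comp, k_2comp, n_points)
use_outflow = delta_bic > 2       # positive evidence for the two-component model
```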
Figure 3 illustrates the flux maps of the narrow components of H\(\alpha\) and [Nii]\(\lambda\)6584Å tracing the host galaxy, the flux map of the outflow traced by the broad [Nii]\(\lambda\)6584Å component (which is much stronger than the broad H\(\alpha\); see Figure 1 and Table 1), and the flux map of the BLR component. The [Nii]\(\lambda\)6584Å host emission is predominant in the central region, reaching flux values comparable with the H\(\alpha\) host line (see also Sec. 4). The H\(\alpha\) emission extends to a larger distance from the center compared to the [Nii]. Based on the BIC test, the additional second Gaussian profile is necessary only for the central region of the galaxy, whose size is comparable to the PSF FWHM (see Appendix A). This indicates that the outflow emission is only marginally resolved by JWST and the region directly affected is limited to the central 1 kpc.
Figure 1: Spectrum in the central region from the G395H/F290LP cube. In the upper panel, we report in blue the spectrum extracted from a circular aperture of radius 0.15\({}^{\prime\prime}\) centered in the central region with the associated error (gray-shaded region). We show the wavelengths around the H\(\alpha\) complex. The solid black line is the best-fit model resulting from the sum of the dashed lines. Each dashed line represents the best-fit result of each Gaussian component or the best-fit continuum emission. In red, the emission lines associated with the narrow component tracing the host galaxy, in green the broader component tracing the outflow, in blue the BLR, and in dark blue the best-fit polynomial continuum. The solid vertical lines on the top represent the expected position of the [Nii], H\(\alpha\), and [Sii] lines. In the lower panel, we report as a solid gray line the residuals of the fit and as a gray-shaded region the errors associated with the data.
### Morphology of the host galaxy
We estimated the size of the H\(\alpha\) emission directly from the flux map obtained by collapsing the data cube in the wavelength range \(3.775-3.781\)\(\mu\)m, which covers the FWHM of the narrow H\(\alpha\) emission (Figure 1), because the flux map of the narrow component created with the spaxel-by-spaxel fitting (see Section 3.2) has large uncertainties due to the low S/N.
We thus performed a two-dimensional multi-component photometric decomposition of the map. In particular, we used a 2D Gaussian profile to reproduce the emission from the unresolved BLR, the marginally resolved outflow (see Figure 3), and a possibly unresolved bulge (Lelli et al. 2021 find that the bulge size is less than 300 pc, hence it is unresolved in our observations), and a 2D Sersic profile (Sersic 1963) with index equal to 1 (exponential disk) to describe the emission arising from the galactic disk. We also added a 2D constant to account for a possible residual background emission. The combination of the three models was then convolved with a Gaussian point spread function of FWHM \(\simeq 0.202^{\prime\prime}\times 0.167^{\prime\prime}\) obtained from the BLR flux map (see Appendix A for a detailed analysis).
The fit was carried out by using Dynesty(Speagle 2020), a Dynamic Nested Sampling Python code that allows us to estimate the Bayesian evidence and the posterior distribution of the free parameters. For the 2D Gaussian model we assumed as free parameters the centroid position, the standard deviation along the RA and Dec directions, and the amplitude. For the 2D Sersic model the free parameters are the normalization constant, the position of the center, the effective radius, the ellipticity, and the position angle.
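A minimal sketch of the model image evaluated at each likelihood call is given below, using astropy.modeling; the parameter values are placeholders, and in the actual fit they are sampled with Dynesty as described above.

```python
import numpy as np
from astropy.modeling.models import Gaussian2D, Sersic2D, Const2D
from astropy.convolution import Gaussian2DKernel, convolve

pix = 0.05                                             # arcsec per spaxel
psf = Gaussian2DKernel(x_stddev=0.202 / 2.355 / pix,   # PSF FWHM ~0.202" x 0.167"
                       y_stddev=0.167 / 2.355 / pix)

y, x = np.mgrid[0:60, 0:60]                            # model grid in spaxels

def model_image(params):
    """Point-like (BLR+outflow+bulge) Gaussian + n=1 Sersic disk + constant background."""
    (g_amp, g_x, g_y, g_sx, g_sy,
     s_amp, s_x, s_y, r_eff, ellip, theta, bkg) = params
    point_like = Gaussian2D(g_amp, g_x, g_y, g_sx, g_sy)(x, y)
    disk = Sersic2D(amplitude=s_amp, r_eff=r_eff, n=1,
                    x_0=s_x, y_0=s_y, ellip=ellip, theta=theta)(x, y)
    background = Const2D(bkg)(x, y)
    return convolve(point_like + disk + background, psf)   # PSF convolution
```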
The data and best fitting model are shown in Figure 4, while the corner plots for the posterior distributions of the free parameters and their best fitting values are shown in Figure 1. The majority of the observed flux comes from the AGN emission (BLR+outflows) with a 2D Gaussian size of \(\sim 0.01^{\prime\prime}\), which is five times smaller than the size of one spaxel and thus consistent with the emission from a point-like source. The effective radius of the disk is \(r_{\rm e}=0.460\pm 0.012^{\prime\prime}\), corresponding to an exponential scale radius of \(r_{\rm D}=1.8\pm 0.5\) kpc. The H\(\alpha\) disk scale radius is comparable within the errors with the disk radius measured with the [Cii] emission line (\(r_{\rm D}=1.4-1.2\) kpc; De Breuck et al. 2014; Parlanti et al. 2023). We discuss the similarities and differences between the two tracers and the origin of such discrepancies in Section 8.
Footnote 2: \(r_{\rm e}\sim 1.68\times r_{\rm D}\)
The ellipticity of the galaxy is \(0.09\pm 0.02\), corresponding to an inclination angle of \(24\pm 3\) deg assuming an infinitely thin disk.
Figure 2: Spectrum of the central region from the G235H/F170LP cube. In the upper panel, we report in blue the spectrum around the [Oiii]\(\lambda 5007\)Å emission, extracted from a circular aperture with a radius of \(0.15^{\prime\prime}\) in the central region with the associated error (gray-shaded region). In black we report the best-fit model resulting from the sum of the dashed lines. The red dashed line represents the best-fit result of the Gaussian component tracing [Oiii]\(\lambda 5007\)Å, the blue dashed line represents the best-fit continuum emission. The solid vertical lines on the top represent the expected position of the [Oiii] and H\(\beta\) lines. In the lower panel, we report as a solid gray line the residuals of the fit and as a gray-shaded region the errors associated with the data.
We note that the centroids of the two components have different positions (see best fit results in Figure 1), with the Sersic component being shifted northward by 0.9 kpc with respect to the BLR+outflow flux map centroid. This offset is probably caused by a central concentration of dust in the galaxy that absorbs H\(\alpha\) emission from the core (see Lelli et al. 2021, Fig. 1a).
## 4 Excitation mechanism
The rest-frame optical emission lines can be used to characterize the primary source of excitation of the gas in the ISM of galaxies. We thus exploit the Baldwin-Phillips-Terlevich (BPT) diagram (Baldwin et al. 1981), [Oiii]\(\lambda\)5007Å/H\(\beta\) versus [Nii]\(\lambda\)6584Å/H\(\alpha\), to determine the dominant source of ionizing radiation and distinguish the regions mainly excited by young stars from those where the ionization mechanism is dominated by AGN radiation.
Figure 5 shows the line ratios of the narrow component for the integrated nuclear 1D spectrum, whose line fluxes are reported in Table 1. Since we do not detect the H\(\beta\) line, we can only report a lower limit on the [Oiii]\(\lambda\)5007Å/H\(\beta\) line ratio of 3.50. This is sufficient to conclude that the gas in the nuclear region of the galaxy is excited by the AGN radiation (Kauffmann et al. 2003; Kewley et al. 2013). For the outflow component, we have upper limits for both [Oiii] and H\(\beta\) and so we can report only a vertical line at the location of the [Nii]\(\lambda\)6584Å/H\(\alpha\) line ratio in the BPT diagram. The line ratio indicates that the photoionization
| Measurement | [km/s] |
| --- | --- |
| \(FWHM\) H\(\alpha_{\rm host\ galaxy}\) | 519 \(\pm\) 58 |
| \(FWHM\) H\(\alpha_{\rm outflow}\) | 1491 \(\pm\) 296 |
| \(FWHM\) H\(\alpha_{\rm BLR}\) | 9008 \(\pm\) 407 |
| \(FWHM\) [Oiii]\(\lambda\)5007Å | 553 \(\pm\) 105 |
| \(\Delta v\) H\(\alpha_{\rm host\ galaxy,outflow}\) | \(-441\) \(\pm\) 129 |
| \(\Delta v\) H\(\alpha_{\rm host\ galaxy,BLR}\) | 427 \(\pm\) 201 |

| Measurement | [erg/s/cm\({}^{2}\)] \(\times 10^{-20}\) |
| --- | --- |
| \(F\) H\(\alpha_{\rm host\ galaxy}\) | 174 \(\pm\) 34 |
| \(F\) H\(\alpha_{\rm outflow}\) | 123 \(\pm\) 91 |
| \(F\) H\(\alpha_{\rm BLR}\) | 2804 \(\pm\) 198 |
| \(F\) [Nii]\(\lambda\)6584Å\({}_{\rm host\ galaxy}\) | 252 \(\pm\) 44 |
| \(F\) [Nii]\(\lambda\)6584Å\({}_{\rm outflow}\) | 460 \(\pm\) 150 |
| \(F\) [Nii]\(\lambda\)6548Å\({}_{\rm host\ galaxy}\) | 83 \(\pm\) 14 |
| \(F\) [Nii]\(\lambda\)6548Å\({}_{\rm outflow}\) | 151 \(\pm\) 49 |
| \(F\) [Oiii]\(\lambda\)5007Å | 291 \(\pm\) 73 |
| \(F\) H\(\beta\) | \(<\) 83 |

Table 1: Results from the fitting of the spectrum extracted from a circular aperture of radius 0.15′′ centered on the spatial peak of the emission.
Figure 3: Flux maps created from the results of the spaxel-by-spaxel multi-Gaussian fitting. Upper row: the flux of the H\(\alpha\) and [Nii]\(\lambda\)6584Å narrow components representing the host galaxy emission from left to right, respectively. Bottom left panel: the flux of the outflow component traced by the [Nii]\(\lambda\)6584Å broader component. Bottom right panel: the broad H\(\alpha\) component that traces the emission from the BLR. The x and y axes are the displacement in arcseconds from the galaxy center at RA = 03:32:29.3, Dec = -27:56:19.6.
by the central AGN dominates the gas ionization with no obvious sign of a major contribution from young stars. This supports the fact that the outflowing gas is likely accelerated by AGN radiation. The kinematics of this gas will be discussed in detail in Section 6.
We also investigate the spatially resolved excitation properties of the ionized gas but, since we do not detect H\(\beta\) and [Oiii] in the individual spaxels of the data cube, we can only probe the [Nii]/H\(\alpha\) line ratio. In the upper left panel of Figure 6 we report the spaxel-by-spaxel value of log([Nii]/H\(\alpha\)). We observe a gradient from the central regions to the outskirts of the galaxy where the values of log([Nii]/H\(\alpha\)) decrease at increasing radii. Since we only have this diagnostic to infer the excitation mechanism of the ISM in the galaxy, we define the three following possible ranges of log([Nii]/H\(\alpha\)) based on the demarcation lines suggested by Kewley et al. (2001, 2013) and Kauffmann et al. (2003): region likely dominated by SF excitation (log([Nii]/H\(\alpha\)) \(\leq-0.25\)), region photoionized by AGN (log([Nii]/H\(\alpha\)) \(>0.25\)), and composite region (\(-0.25<\) log([Nii]/H\(\alpha\)) \(\leq 0.25\)). In the BPT diagram in Figure 5 we show the number of spaxels as a function of log([Nii]/H\(\alpha\)) of the host galaxy component color-coded according to the aforementioned three categories, while in the lower left panel of Figure 6 we report their spatial distribution.
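In practice, this classification is a simple thresholding of the line-ratio map; a minimal sketch with the demarcations adopted in the text is:

```python
import numpy as np

def classify_excitation(log_n2ha):
    """Classify spaxels from log([NII]6584/Halpha) with the cuts used in the text."""
    log_n2ha = np.asarray(log_n2ha, dtype=float)
    labels = np.full(log_n2ha.shape, "composite", dtype=object)
    labels[log_n2ha <= -0.25] = "star-forming"
    labels[log_n2ha > 0.25] = "AGN"
    return labels

# classify_excitation([-0.4, 0.0, 0.3]) -> ['star-forming', 'composite', 'AGN']
```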
The majority of the spaxels show a line ratio consistent with "composite" excitation. Only a few spaxels have a [Nii]/H\(\alpha\) ratio high enough to fall in the AGN region; however, they reside in the outer region of the galaxy where the S/N is lower and they are non-adjacent, so they are consistent with S/N fluctuations shifting the category from composite to AGN. At large radii from the center, there are a number of spaxels with low (\(<0.6\)) [Nii]/H\(\alpha\), suggesting that in these regions the excitation mechanism is likely dominated by star-formation activity. In conclusion, the spatially resolved BPT diagram indicates that the excitation mechanism is due to both the AGN and a young stellar population. Based on the results obtained from the nuclear-integrated 1D spectrum, we speculate that the central part of the galaxy is mainly AGN-dominated while the excitation by young stars dominates at large radii. On the other hand, if we compute [Nii]/H\(\alpha\) spaxel-by-spaxel for the outflow component, we find that most of the spaxels have log([Nii]/H\(\alpha\))\(>0.25\) (upper and
Figure 4: Observed H\(\alpha\) flux map, best fitting model and residuals, from left to right, respectively. In the left panel, we show the H\(\alpha\) flux, overlaid with the [Cii] contours at 3, 6, and 9 \(\sigma\). The two maps were aligned by centering them on the brightest spaxel. The gray ellipse represents the PSF size and shape. In the central panel, we show the best-fit model composed of the sum of a 2D Gaussian component that includes all the unresolved or marginally resolved components (BLR, outflow, bulge) and a 2D Sérsic component with the Sérsic index \(n=1\) that represents the galactic disk. In the right panel, we show the residuals, calculated as the observed flux minus the model, divided by the error. The color bar stretches between \(-3\sigma\) and \(+3\sigma\).
Figure 5: BPT diagram of the target. The star marker in magenta shows the position of the central spaxels of ALESS073.1 in the BPT diagram for the narrow component. The position of the outflow component based only on the log([Nii]/H\(\alpha\)) detection is reported as a purple vertical line and the error associated with it is reported as the purple-shaded area. The dashed and the dotted lines are the predictions from Kewley et al. (2001) and Kauffmann et al. (2003), respectively, for the separation between star-forming (below) and AGN (above) dominated regions at \(z=0\). The dash-dotted line is the theoretical redshift evolution of the separation curve for galaxies at \(z=3\) by Kewley et al. (2013). The gray shaded area represents the position in the BPT of SDSS galaxies at \(z\sim 0\). Blue diamonds are the results from Cameron et al. (2023) for high-redshift (\(z>5\)) galaxies. Overplotted on the BPT diagram, we report the histogram representing the number of spaxels of the narrow component as a function of log([Nii]/H\(\alpha\)).
lower right panels of Figure 6) indicating that the outflows are likely driven by AGN activity.
## 5 Black Hole Properties
Assuming that the gas in the BLR is virialized, we can estimate the BH mass by using the calibration by Greene and Ho (2005):
\[M_{\rm BH}=(2.0^{+0.4}_{-0.3})\times 10^{6}\left(\frac{L_{\rm H\alpha}}{10^{42}\ {\rm erg\ s^{-1}}}\right)^{(0.55\pm 0.02)}\left(\frac{FWHM_{\rm H\alpha}}{10^{3}\ {\rm km\ s^{-1}}}\right)^{(2.06\pm 0.06)}M_{\odot} \tag{1}\]
where \(L_{\rm H\alpha}\) and \(FWHM_{\rm H\alpha}\) are the dust-corrected luminosity and the FWHM of the broad H\(\alpha\) line associated with the BLR.
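For concreteness, a sketch of this estimate using the BLR values of Table 1, the adopted cosmology, and Eq. 1 is shown below; the flux is the observed one, with no dust correction applied.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.7, Om0=0.307)      # cosmology adopted in this work
z = 4.755

flux_blr = 2804e-20 * u.erg / u.s / u.cm**2    # BLR Halpha flux (Table 1), not dust-corrected
fwhm_blr = 9008.0                              # BLR Halpha FWHM [km/s] (Table 1)

d_l = cosmo.luminosity_distance(z).to(u.cm)
L_Halpha = (4 * np.pi * d_l**2 * flux_blr).to(u.erg / u.s)

# Greene & Ho (2005) single-epoch calibration, Eq. 1
M_BH = 2.0e6 * (L_Halpha.value / 1e42) ** 0.55 * (fwhm_blr / 1e3) ** 2.06
print(f"log(M_BH/Msun) > {np.log10(M_BH):.1f}")   # ~8.7
```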
We note that the lack of detection of the H\(\beta\) line does not allow us to correct the H\(\alpha\) for the dust extinction of the galaxy and AGN torus. Hence the inferred luminosity for the BLR component is only a lower limit and consequently, we can only determine a lower limit on the BH mass: \(\log\left(M_{\rm BH}/M_{\odot}\right)>8.7\). Figure 7 shows the relation between the stellar mass and the black hole mass for ALESS073.1, where we use the stellar mass of \(M_{\star}=(4.7\pm 1.6)\times 10^{10}\ {\rm M_{\odot}}\) computed by Lelli et al. (2021) with a kinematic analysis, and \(M_{\star}=(2.15\pm 0.64)\times 10^{11}\ {\rm M_{\odot}}\) computed by Circosta et al. (2019) with the SED fitting. We compare our results with those obtained from other AGN-host galaxies both at high redshift (\(z>1\)) (Pensabene et al., 2020; Neeleman et al., 2020; Harikane et al., 2023; Ubler et al., 2023) and in the nearby Universe (\(z<0.055\)) (Reines and Volonteri, 2015), and the local massive "red-and-dead" elliptical galaxies and classical bulges (Kormendy and Ho, 2013). The estimated BH mass places ALESS073.1 above the relation by Reines and Volonteri (2015) yielding a \(M_{\rm BH}/M_{\star}\) ratio \(>10-30\) times higher than those estimated in local AGN by using the stellar mass estimated by SED and kinematical fitting, respectively. However, the inferred \(M_{\rm BH}/M_{\star}\) is consistent within the uncertainties with the relation determined for massive quiescent local galaxies and high redshift luminous quasars (Kormendy and Ho, 2013; de Nicola et al., 2019; Pensabene et al., 2020). As with other high-redshift AGN and quasars, it lies above the relation \(M_{\rm BH}=0.01\times M_{\star}\)(Decarli et al., 2010; de Nicola et al., 2019; Pensabene et al., 2020; Neeleman et al., 2021). These high-redshift observations suggest that the BH growth dominated early on, with the galaxy catching up later. This requires that feedback and self-regulation are somehow different at early times with respect to what is observed in local AGN.
We can also compute a limit on the Eddington luminosity by using:
\[L_{\rm Edd}=\frac{4\pi GM_{\rm BH}m_{p}c}{\sigma_{\rm T}} \tag{2}\]
Figure 6: Resolved BPT diagram for ALESS073.1 host galaxy (narrow) and outflow (broad) component. Upper panels: spatially resolved map of the observed values of \(\log\)([Nii]/H\(\alpha\)) for the narrow and broad components, from left to right, respectively. Lower panels: the spatially resolved BPT where every spaxel in the galaxy is color-coded according to its value of \(\log\)([Nii]/H\(\alpha\)) for the narrow and broad components, from left to right, respectively, as in the histogram in the upper right panel.
where \(m_{p}\) is the proton mass and \(\sigma_{\rm T}\) is the Thompson scattering cross-section. We obtain an Eddington luminosity of \(L_{\rm Edd}>7\times 10^{46}\) erg s\({}^{-1}\). Comparing it with the bolometric luminosity of the AGN of \(L_{\rm bol}=(1.01\pm 0.3)\times 10^{46}\) erg s\({}^{-1}\) computed in Circosta et al. (2019), we obtain an Eddington ratio of \(\lambda_{\rm Edd}<0.15\) implying that the BH is accreting at a much lower rate than the Eddington limit. This is consistent with an evolutionary sequence where obscured AGN, like ALESS073.1, are in an early phase of quasar evolution that lasts until the Eddington ratio reaches values close to unity and the AGN radiation is able to sweep away gas and dust from the galaxy, revealing the emission of the bright unobscured AGN (i.e, blue quasars). According to this evolutionary path, we might conclude that the outflow in ALESS073.1 is not yet energetic enough to affect the star-formation activity in the galaxy. We further investigate this possibility in Section 8.
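The corresponding Eddington luminosity and Eddington ratio follow directly from Eq. 2; a short sketch using astropy constants and the lower limit on \(M_{\rm BH}\) derived above is:

```python
import numpy as np
import astropy.units as u
from astropy.constants import G, c, m_p, sigma_T, M_sun

M_BH = 10**8.7 * M_sun                  # lower limit from Eq. 1
L_bol = 1.01e46 * u.erg / u.s           # AGN bolometric luminosity (Circosta et al. 2019)

L_Edd = (4 * np.pi * G * M_BH * m_p * c / sigma_T).to(u.erg / u.s)   # Eq. 2
lambda_Edd = (L_bol / L_Edd).decompose()

# gives L_Edd ~ 6e46 erg/s and lambda_Edd ~ 0.16; the text quotes ~7e46 and <0.15,
# the small difference coming from rounding of the mass lower limit
print(L_Edd, lambda_Edd)
```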
## 6 Outflow
In this Section, we study the mass outflow rate and the energetics of the warm (\(T\sim 10^{4}\) K) ionized gas traced by the "outflow" component of H\(\alpha\) and [Nii] identified as the broader component associated with each line in the Gaussian fit. Figure 8 shows the kinematic maps of the outflow. The maps reveal a pattern that is not compatible with a rotating disk. Most of the spaxels in the velocity map (left panel) show negative values that are consistent with gas approaching along the line of sight. As often reported for other AGN-host galaxies (Fischer et al., 2013; Bae and Woo, 2014; Perna et al., 2017), a corresponding redshifted component of the outflow is missing in ALESS073.1, probably due to dust obscuration of the receding side of the outflow. The central panel of Figure 8 shows the \(v_{10}\) map, the velocity at the 10th percentile of the "outflow" component in each spaxel, which is usually adopted to trace the highest-velocity blueshifted gas in the outflows. We find regions in which the gas reaches a velocity as high as \(v_{10}=-700\) km s\({}^{-1}\) supporting the fact that this gas cannot be associated with the rotation of the disk, given that the maximum velocity of the rotating disk is 400 km s\({}^{-1}\)(Lelli et al., 2021; Parlanti et al., 2023).
The mass of gas expelled by the outflow can be estimated by following Cresci et al. (2023) as
\[M_{\rm out}=3.2\times 10^{5}\left(\frac{\rm L_{H\alpha,outflow}}{10^{40}\rm~{ }erg~{}s^{-1}}\right)\left(\frac{100~{}cm^{-3}}{n_{e}}\right)M_{\odot}, \tag{3}\]
where \(L_{\rm H\alpha,outflow}\) is the extinction-corrected H\(\alpha\) luminosity of the outflow and \(n_{e}\) is the electron density of the outflow. To determine the intrinsic H\(\alpha\) luminosity we use the color excess E(B \(-\) V) = 0.37 estimated from the SED fitting by Circosta et al. (2019). By using the Calzetti et al. (2000) reddening curve, we thus calculate a dust extinction \(A_{V}=1.5\), which is consistent with the lower limit \(A_{V}>0.77\) determined by the Balmer decrement measurement. In conclusion, by using the value in Tab. 1 for the flux of the H\(\alpha\) outflow we obtain an extinction-corrected H\(\alpha\) luminosity of \(L_{\rm H\alpha,outflow}=0.9\times 10^{42}\) erg s\({}^{-1}\). For the electron density, since the density-diagnostic [Sii]\(\lambda\)6716,31 line doublet is not detected in our observation, we have assumed the fiducial value of 1000 cm\({}^{-3}\) (Forster Schreiber et al., 2019; Perna et al., 2023; Ubler et al., 2023) and the uncertainties are computed assuming a range of densities between 200 and 2000 cm\({}^{-3}\), based on the outflow densities measured in high-redshift galaxies (Isobe et al., 2023). We thus obtain a mass of the ionized outflow of \(M_{\rm out}=3^{+11.5}_{-1.5}\times 10^{6}\) M\({}_{\odot}\).
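A short sketch of the Eq. 3 estimate, propagating the assumed density range, is:

```python
L_Halpha_out = 0.9e42          # extinction-corrected outflow Halpha luminosity [erg/s]
n_e_fiducial = 1000.0          # assumed electron density [cm^-3]
n_e_lo, n_e_hi = 200.0, 2000.0 # density range used for the uncertainties

def m_out(L, n_e):
    """Ionized outflow mass from Eq. 3, in solar masses."""
    return 3.2e5 * (L / 1e40) * (100.0 / n_e)

M_fid = m_out(L_Halpha_out, n_e_fiducial)   # ~2.9e6 Msun
M_max = m_out(L_Halpha_out, n_e_lo)         # lower density -> higher mass, ~1.4e7 Msun
M_min = m_out(L_Halpha_out, n_e_hi)         # ~1.4e6 Msun
print(f"M_out = {M_fid:.1e} (+{M_max - M_fid:.1e} / -{M_fid - M_min:.1e}) Msun")
```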
The ionized outflow rate is calculated as follows assuming time-averaged thin expelled shells (Lutz et al., 2020)
\[\dot{M}_{\rm out}=\frac{v_{\rm out}M_{\rm out}}{R_{\rm out}} \tag{4}\]
where \(v_{\rm out}\) and \(R_{\rm out}\) are the outflow velocity and radius, respectively. We adopt the prescription by Genzel et al. (2011) to estimate the velocity of the outflowing gas, which takes into consideration that the emission line from the outflow is spectrally broadened due to projection effects, so that the velocity of the line wing traces the component of the outflow velocity directed along the line of sight and hence its intrinsic velocity. Thus, we obtain: \(v_{\rm out}=|\Delta v_{\rm host~{}galaxy,outflow}|+2\sigma_{\rm out}=1710\) km s\({}^{-1}\), where for the values of the velocity shift and the outflow velocity dispersion we use the values obtained from the fit of the spatially integrated spectrum in Sec. 3.1. For the outflow extent, we use the half width at half maximum (HWHM) of the
Figure 7: Relation between black hole mass and host galaxy stellar mass. The galaxy targeted in this work (ALESS073.1) is marked with a star: the red-filled star assumes the stellar mass derived from the kinematic fitting by Lelli et al. (2021), while the white-filled star assumes the stellar mass derived from SED fitting in Circosta et al. (2019). The dark gray diamonds and crosses represent \(z\sim 0\) broad line AGN presented in Reines and Volonteri (2015) and the massive black holes hosted in ellipticals and spirals at \(z\sim 0\) by Kormendy and Ho (2013), respectively. The light green diamond shows the QSO at \(z\sim 5.5\) studied in Ubler et al. (2023). In green, we show the broad line AGN presented in Harikane et al. (2023) at \(4<z<7\). Light blue squares are the results obtained by Neeleman et al. (2021) for QSOs at \(z\sim 6\). Light blue triangles are the results for QSOs at \(z>2\) from Pensabene et al. (2020). Light blue crosses are the results for two luminous QSOs at \(z\sim 6.8\) from Marshall et al. (2023). We note that Pensabene et al. (2020), Neeleman et al. (2021) and Marshall et al. (2023) report the dynamical mass rather than the stellar mass. The black and green solid lines are the best fit for Reines and Volonteri (2015) and Kormendy and Ho (2013), respectively, and the shaded areas are their \(1\sigma\) uncertainties. The black dashed line is the relation \(M_{BH}=0.01\times M_{\star}\).
JWST PSF at 3.78 \(\mu\)m given that the surface brightness emission of the "outflow" component is marginally resolved. We infer a mass outflow rate of \(\dot{M}_{\rm out}=8^{+32}_{-4}\) M\({}_{\odot}\) yr\({}^{-1}\). We note that the outflow radius might be smaller than the angular resolution, therefore the reported mass outflow rate may be considered a lower limit.
We also compute the upper limits on the kinetic and momentum rate of the ionized outflow as \(\dot{E}_{\rm out}=\frac{1}{2}\dot{M}_{\rm out}v_{\rm out}^{2}\) and \(\dot{P}_{\rm out}=\dot{M}_{\rm out}v_{\rm out}\), respectively. We obtain \(\dot{E}_{\rm out}=7.2\times 10^{42}\) erg s\({}^{-1}\) and \(\dot{P}_{\rm out}=8.5\times 10^{34}\) g cm s\({}^{-2}\) with uncertainties of one order of magnitude. The outflow kinetic rate is 0.08% of the bolometric luminosity. This is \(\sim 60(6)\) times smaller than the theoretical values (\(\dot{E}_{\rm out}=0.05(0.005)L_{\rm bol}\)) expected for quenching massive galaxies (Di Matteo et al., 2005; Choi et al., 2012; Costa et al., 2018; Harrison et al., 2018). This suggests that the obscured AGN has not yet reached its maximum activity.
We then calculate the mass loading factor, defined as \(\eta=\dot{M}_{\rm out}/\)SFR, where we adopt the SFR \(=864\pm 173\) M\({}_{\odot}\) yr\({}^{-1}\) for ALESS073.1 obtained by Circosta et al. (2019) with SED fitting. We obtain a mass loading factor of the order of 1%, meaning that only a minor part of the gas present in the central region is expelled compared to the gas used to create stars, and the outflow strength is probably not sufficient to remove gas and halt the vigorous, ongoing star formation in the system. We note that using other SFR estimates results in a low mass loading factor as well (SFR\({}_{\rm[CII]}=450\pm 70\) M\({}_{\odot}\) yr\({}^{-1}\), SFR\({}_{\rm FIR}=1000\pm 15\) M\({}_{\odot}\) yr\({}^{-1}\); De Breuck et al., 2014).
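Putting these numbers together, a sketch of the Eq. 4 mass outflow rate and of the derived energetics and mass loading factor (using astropy units for the conversions) is:

```python
import astropy.units as u

M_out = 3e6 * u.M_sun            # ionized outflow mass from Eq. 3
v_out = 1710 * u.km / u.s        # |dv| + 2*sigma_out (Genzel et al. 2011 prescription)
R_out = 0.6 * u.kpc              # HWHM of the PSF at 3.78 um (~0.09")
SFR = 864 * u.M_sun / u.yr       # SED-based SFR (Circosta et al. 2019)

Mdot_out = (v_out * M_out / R_out).to(u.M_sun / u.yr)      # ~8-9 Msun/yr
Edot_out = (0.5 * Mdot_out * v_out**2).to(u.erg / u.s)     # ~8e42 erg/s
Pdot_out = (Mdot_out * v_out).to(u.g * u.cm / u.s**2)      # ~9e34 g cm/s^2
eta = (Mdot_out / SFR).decompose()                         # mass loading factor ~0.01
```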
## 7 Gas kinematics
In this Section, we investigate the gas kinematics traced by the H\(\alpha\) narrow component that maps warm ionized gas in the interstellar medium of the galaxy and we compare it with the results obtained by studying the [CII] emission.
The first two panels on the left of Figure 9 illustrate the velocity and velocity dispersion maps of the H\(\alpha\) narrow component. The velocity map is computed as the displacement of the line centroid in each spaxel with respect to the centroid measured in the central region, while the velocity dispersion map is the standard deviation of the emission line, deconvolved from the instrumental spectral resolution, computed spaxel-by-spaxel (see Section 3.2). The velocity map shows a velocity gradient that spans a range of velocities between \(-120\) km s\({}^{-1}\) and \(120\) km s\({}^{-1}\) with respect to the systemic redshift of the galaxy. The velocity pattern is consistent with that observed in [CII] by Lelli et al. (2021) and indicates the presence of a regularly rotating disk. We note that the velocity gradient is not symmetric on the red-shifted and blue-shifted sides, with the blue side having higher velocities closer to the center with respect to the red-shifted one, and the blue-shifted side being less extended. On the north-western side, we also note that the velocity pattern is irregular, similar to what is found in the [CII] kinematic analysis (Lelli et al. 2021), which might indicate the presence of a spiral arm.
Following Parlanti et al. (2023), we model the H\(\alpha\) kinematics using the publicly available Python library KinMS (Davis et al. 2013), which creates mock data cubes based on flux, velocity, and velocity dispersion radial profiles. We set up KinMS to simulate our NIRSpec observations, setting an angular resolution of \(0.202\arcsec\times 0.167\arcsec\) (see Appendix A) and a spectral resolution of \(49\) km s\({}^{-1}\). We then generate the moment maps to compare directly with the observations and verify the accuracy of the model in reproducing the data. The best-fit parameter values for the kinematic model are found using the package emcee (Foreman-Mackey et al., 2013), which allows us to estimate the posterior probability distribution for the n-dimensional parameter space that defines our disk models by adopting a Markov Chain Monte Carlo (MCMC) algorithm. Differently from the analysis by Parlanti et al. (2023) we do not use a parametric function to reproduce the gas kinematics of ALESS073.1 as the [CII] observations reveal a complex radial profile of the velocity curve. Lelli et al. (2021) find indeed that the de-convolved velocity is as high as \(\sim 400\) km s\({}^{-1}\) in the nuclear region (\(<1\) kpc) of the galaxy which is comparable with what is observed in the outskirts of the galaxy. These kinematics cannot be reproduced by the velocity curve of an exponential disk, as we show and discuss in Appendix B. The flat profile of the [CII] velocity indeed suggests the presence of a bulge component in addition to the classic exponential kinematic component (Lelli et al., 2021; Parlanti et al., 2023).
Figure 8: Spatially resolved kinematic maps for the outflow component. From left to right, the velocity map of the outflow component, the v10 that corresponds to the velocity at which the outflow component is at the 10th percentile, and the velocity dispersion map of the outflow, respectively. Zero velocity corresponds to the velocity of the narrow line.
Thanks to the flexibility of KinMS, we adopt a non-parametric model to reproduce the observed velocity and velocity dispersion profile as a function of the radius. In particular, the H\(\alpha\) kinematics is modeled as a series of concentric circular rings having a width of 0.6 kpc (i.e. 0.09\(\arcsec\)), which is comparable to the HWHM of the PSF of NIRSpec IFS at the H\(\alpha\) wavelength. In each ring, we assume that the emitting clouds have the same radial velocity and velocity dispersion. For the velocity fitting we assume Gaussian priors with a standard deviation of 100 km s\({}^{-1}\) centered around the velocity of the previous ring. The velocity of the first ring was left free to vary between 0 and 1000 km s\({}^{-1}\). Similarly, for the velocity dispersion of each ring, we assume Gaussian priors with a standard deviation of 50 km s\({}^{-1}\) centered on the velocity dispersion of the previous ring leaving the first ring free to vary between 0 and 500 km s\({}^{-1}\). Using Gaussian priors allows us to ensure the continuity of the velocity and velocity dispersion profiles and that the discontinuities in the profiles are driven by a real increase in the likelihood. We note that assuming flat priors for each ring does not change the maps in Figure 9, but introduces discontinuities in the intrinsic profiles, that are then washed out by the beam smearing process when making the moment maps.
We left the disk position angle free to vary with flat priors between 0 and 90 deg, while we fixed the inclination angle of the galaxy to 22 deg as found by Lelli et al. (2021), compatible within the errors with the inclination found in Section 3.3. The emitting clouds are distributed over the rings following the surface brightness profiles obtained from the best-fit results of the flux map (Section 3.3). We note that the resolution and sensitivity do not allow us to determine stringent constraints on the velocity and velocity dispersion profiles if we adopt a disk model with a ring size smaller than the PSF HWHM due to the beam smearing. We also observe that the last ring is only probed by the outer region of the redshifted side of the galaxy as it is more extended than the blueshifted one.
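To make the prior structure concrete, a minimal outline of the chained Gaussian priors and of the emcee call is given below; the likelihood, which would build the KinMS mock cube and compare its moment maps with the data, is left as a placeholder, so this is a sketch of the sampling setup rather than the full model (the systemic velocity is also omitted for brevity).

```python
import numpy as np
import emcee

N_RINGS = 6   # 0.6 kpc wide rings, as in the text

def log_prior(theta):
    """Chained Gaussian priors on ring velocities/dispersions, flat prior on PA."""
    v, sig, pa = theta[:N_RINGS], theta[N_RINGS:2 * N_RINGS], theta[-1]
    if not (0 < v[0] < 1000 and 0 < sig[0] < 500 and 0 < pa < 90):
        return -np.inf
    lp = 0.0
    for i in range(1, N_RINGS):
        lp += -0.5 * ((v[i] - v[i - 1]) / 100.0) ** 2     # prior sigma = 100 km/s
        lp += -0.5 * ((sig[i] - sig[i - 1]) / 50.0) ** 2  # prior sigma = 50 km/s
    return lp

def log_likelihood(theta):
    return 0.0   # placeholder: KinMS mock moment maps compared with the observed ones

def log_prob(theta):
    lp = log_prior(theta)
    return lp + log_likelihood(theta) if np.isfinite(lp) else -np.inf

ndim = 2 * N_RINGS + 1
sampler = emcee.EnsembleSampler(nwalkers=64, ndim=ndim, log_prob_fn=log_prob)
# sampler.run_mcmc(initial_positions, nsteps=5000)
```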
The best-fit results are reported in Table 2 and the best-fit model and residual maps of the velocity and velocity dispersion are reported in Figure 9. In Figures 10 and 11 we also report the corner plot to highlight the best-fit parameters and the posterior distributions that show degeneracy between the parameters. We note that the velocity and velocity dispersion of each ring are degenerate with those of the previous ring: in particular, if one increases, the other decreases, and vice versa. This is expected and consistent with the beam smearing that acts to average out these differences. The inferred disk position angle is in agreement with the one found for the [Cii] kinematics by Lelli et al. (2021).
The left panel of Figure 10 shows the best-fit velocity curve of H\(\alpha\), which is in agreement, within the errors, with the profile inferred from [Cii] observation. The velocities reach a value of \(\sim 350\) km s\({}^{-1}\) at small scales and slightly decrease at large radii down to values of the order of \(\sim 250\) km s\({}^{-1}\). This result supports the scenario that ALESS073.1 has already formed a bulge at its center that dominates the dynamics of the gas.
The velocity dispersion profile is reported in the right panel of Figure 10 and spans a range between 30 and 170 km s\({}^{-1}\). Differently from the velocity curve, here we note a discrepancy between [Cii] and H\(\alpha\) gas kinematics. The velocity dispersion of the warm ionized gas is, on average, higher than the velocity dispersion of the cold gas mapped by the carbon line. In the
Figure 9: Non-parametric best-fit results for the modeling of the H\(\alpha\) velocity and velocity dispersion maps. From left to right on the top the observed velocity map, the best-fit model map, and the residuals. From left to right on the bottom the observed velocity dispersion map, the best fitting model dispersion map and the residuals. The colorbars of the residual range between \(-3\sigma\) and \(+3\sigma\), and the black lines indicate \(\pm 1\sigma\).
inner 1.5 kpc the velocity dispersion observed in H\(\alpha\) is even three times larger than the one traced by [Cii]. At larger radii, the discrepancy decreases and in the two rings between 1.5 and 3.0 kpc the velocity dispersion of [Cii] is comparable (within the uncertainties) with the one derived from H\(\alpha\).
As the discrepancies between our results and those found by Lelli et al. (2021) might depend on the different approaches and tools used to fit the data, we compare directly the velocity dispersion maps in the same regions of the galaxy. As the ALMA and NIRSpec data cubes of ALESS073.1 have slightly different PSF sizes (ALMA: 0.17''\(\times\) 0.14'', JWST: 0.20''\(\times\) 0.17'') we used the Python library Photutils to create a matching kernel between the ALMA and JWST PSFs. Hence we convolved the ALMA cube, which is the higher-resolution one, with the matching kernel to obtain two data cubes with the same PSF. After this process the two data cubes are affected by the same level of beam smearing, enabling us to compare the velocity dispersion maps directly. The [Cii] moment maps are created by fitting a single Gaussian component spaxel-by-spaxel to the ALMA spectrum. We also try to include a second component to verify the presence of outflows in [Cii], but the double-Gaussian model returns a higher \(\chi^{2}\) in all spaxels and the BIC test supports the single-component fit.
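A minimal sketch of the PSF-matching step, assuming simple 2D Gaussian PSF images for the two instruments (the real kernel was built with Photutils from the measured beam/PSF parameters), is:

```python
import numpy as np
from astropy.convolution import Gaussian2DKernel, convolve
from photutils.psf.matching import create_matching_kernel, CosineBellWindow

pix = 0.05   # arcsec per spaxel of the resampled cubes

def gaussian_psf(fwhm_x, fwhm_y, size=51):
    """Normalized 2D Gaussian PSF image (FWHM in arcsec)."""
    return Gaussian2DKernel(fwhm_x / 2.355 / pix, fwhm_y / 2.355 / pix,
                            x_size=size, y_size=size).array

psf_alma = gaussian_psf(0.17, 0.14)   # ALMA beam (higher resolution)
psf_jwst = gaussian_psf(0.20, 0.17)   # JWST/NIRSpec PSF

# Kernel that degrades the ALMA resolution to the JWST one
kernel = create_matching_kernel(psf_alma, psf_jwst,
                                window=CosineBellWindow(alpha=0.35))

# matched = np.array([convolve(chan, kernel) for chan in alma_cube])  # channel by channel
```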
Figure 11 illustrates the velocity dispersion of the narrow H\(\alpha\) component and of the [Cii] emission line, shown with the same velocity range. We stress that the line broadening due to the line spread function of NIRSpec is corrected during the H\(\alpha\) line fitting (Section 3.2) and the H\(\alpha\) velocity dispersion map does not include the broad blue-shifted H\(\alpha\) component associated with outflows. Comparing the kinematic maps, we find that the velocity dispersion mapped by the hydrogen line is on average higher than that traced by [Cii]. Both maps have high \(\sigma\) at the center (\(\sigma_{[CII]}\sim 130\) km s\({}^{-1},\sigma_{H\alpha}\sim 200\) km s\({}^{-1}\)) mainly due to beam smearing, but as shown in the \(\sigma_{\rm H\alpha}-\sigma_{\rm[CII]}\) map (right panel) the discrepancy between the two increases at large radii up to 0.3-0.4''. In particular, the difference in velocity dispersion is evident in the North-South direction from 0.1'' to 0.4'' from the center. In these regions, \(\sigma_{\rm H\alpha}-\sigma_{\rm[CII]}\simeq 100\) km s\({}^{-1}\), which is more than two times larger than the uncertainties on the velocity dispersion estimates.
## 8 Discussion
Investigating galaxy dynamics at high redshifts is fundamental to understanding how galaxies grow their stellar mass in their early stages of formation. Measurements of small ratios between the rotational velocity and the gas velocity dispersions are usually interpreted as evidence for turbulence in the disk due to past or ongoing strong feedback mechanisms and merging processes. Conversely, observations revealing a low velocity dispersion and \(V/\sigma\sim 10\) in the gas kinematics suggest a less turbulent gas accretion and evolution characterized by a limited number of extreme events.
The trends of velocity dispersion and the \(V/\sigma\) evolution with redshift at \(z>4\) are still a matter of debate today. On one hand, a large number of kinematic studies at \(1<z<4\) suggest that high redshift galaxies are more turbulent than local ones (Cresci et al., 2009; Forster Schreiber et al., 2009; Epinat et al., 2010; Gnerucci et al., 2011; Ianjamasimanana et al., 2012; Green et al., 2014; Wisnioski et al., 2015; Mogotsi et al., 2016; Di Teodoro et al., 2016; Harrison et al., 2017; Swinbank et al., 2017; Turner et al., 2017; Forster Schreiber et al., 2018; Johnson et al., 2018; Ubler et al., 2019; Girard et al., 2021) with \(V/\sigma\) reaching values close to unity at \(z\sim 3.5\). On the other hand, the results from kinematic studies at \(z>4\) lead to contrasting results showing the presence of both turbulent (Herrera-Camus et al., 2022; Parlanti et al., 2023; de Graaff et al., 2023) and kinematically cold galaxies (Sharda et al., 2019; Neeleman et al., 2020; Rizzo et al., 2020; Jones et al., 2021; Fraternali et al., 2021; Lelli et al., 2021; Rizzo et al., 2021; Posses et al., 2023; Pope et al., 2023) with values of \(V/\sigma\) ranging from 20 to 0.1 across the redshift range \(4<z<8\). However, we note that results at high redshift are limited either by small sample sizes for galaxies observed with high angular resolution or by significant uncertainties due to the low angular resolution observations used for the larger samples of galaxies.
We also note that so far galaxies up to \(z\sim 3.5\) are principally studied by exploiting H\(\alpha\) and [Oiii] emission lines that arise from HII regions around massive and young stars and trace the warm ionized medium with a temperature of \(\sim 10^{4}\) K. Studies at \(z>4\) target mainly the [Cii] line that principally arises from Photodissociation Regions (PDR) tracing the cold neutral medium at a temperature of \(\sim 100\) K and only \(\sim 30\%\) of its emission is associated with the warm diffuse ionized gas (e.g. Stacey et al., 1991, 2010; Croxall et al., 2017). Therefore, several studies have concluded that the cosmic evolution of the velocity dispersion obtained from the optical lines cannot be directly compared with that from the far-infrared [Cii] line because they are mapping different gas phases (Rizzo et al., 2022). One possible solution is to exploit the far-infrared [Oiii]\(\lambda\)88\(\mu\)m line that traces the warm ionized medium and can be observed with ALMA at \(z>6\). Unfortunately, long exposure times are necessary to obtain an accurate measurement of the gas velocity dispersion, and most of the current observations have coarse angular resolution resulting in large uncertainties (Parlanti et al., 2023). Moreover, [Oiii]\(\lambda\)88\(\mu\)m would still leave a gap in observations in the redshift range \(3.5<z<6\).
With the advent of JWST NIRSpec and in particular, thanks to the IFS mode observations, we can finally compare the kinematics determined from the rest-frame optical lines with that of [Cii] in the same \(z>4\) galaxies and verify if there is a discrepancy between these tracers or not. ALESS073.1 is the first massive (\(M_{\star}\sim 10^{11}\) M\({}_{\odot}\); Circosta et al., 2019) galaxy for which we have both ALMA and NIRSpec observations and the results presented in Section 7 show that the velocity curves of the two tracers are consistent within the errors. The data however highlight a difference between H\(\alpha\) and [Cii] in terms of velocity dispersion. The velocity dispersion inferred from H\(\alpha\) is systematically larger by more than 50 km s\({}^{-1}\) in the central 1.5 kpc than the one determined from the carbon line. This difference cannot be associated only with the difference in thermal velocity dispersion of the gas mapped by the two tracers as this is of the order of 20 km s\({}^{-1}\). The difference between the two gas kinematics
|  | Radius [kpc] | V [km s\({}^{-1}\)] | \(\sigma\) [km s\({}^{-1}\)] |
| --- | --- | --- | --- |
| Ring 1 | \(0\leq r<0.6\) | \(337^{+91}_{-30}\) | \(166^{+33}_{-20}\) |
| Ring 2 | \(0.6\leq r<1.2\) | \(329^{+32}_{-30}\) | \(150^{+20}_{-20}\) |
| Ring 3 | \(1.2\leq r<1.8\) | \(321^{+32}_{-42}\) | \(164^{+19}_{-20}\) |
| Ring 4 | \(1.8\leq r<2.4\) | \(327^{+37}_{-37}\) | \(65^{+20}_{-26}\) |
| Ring 5 | \(2.4\leq r<3.0\) | \(330^{+30}_{-31}\) | \(36^{+20}_{-27}\) |
| Ring 6 | \(3.0\leq r<3.6\) | \(238^{+4}_{-4}\) | \(75^{+20}_{-31}\) |

| \(v_{\rm sys}\) [km s\({}^{-1}\)] | \(-18\pm 1\) |
| --- | --- |
| PA [degree] | \(45\pm 1\) |

Table 2: Best fitting kinematic parameters obtained by the non-parametric fitting.
is not uniform over the field of view but is higher at the center \(\langle\sigma_{\rm H\alpha}-\sigma_{\rm[CII]}\rangle=100\) km s\({}^{-1}\) and reaches values comparable with the errors (\(\sim 30\) km s\({}^{-1}\)) at larger radii.
To understand whether the galaxy is rotation- or dispersion-supported we compute the ratio between the rotational velocity and the velocity dispersion. When the ratio is larger than \(\sqrt{3.36}\), the galaxy is considered rotation-supported; otherwise, it is supported by the turbulent random motion of the gas (Forster Schreiber et al., 2018). In Figure 12 we report the ratio \(V/\sigma\) computed for each ring for the H\(\alpha\) (red) and [Cii] (blue) as small symbols along with other high and low redshift galaxies. The big blue triangle and red circle represent the mean value of \(V/\sigma\) across all the rings for the [Cii] and H\(\alpha\) profiles, respectively. Comparing our values with the literature, we see that the results with both tracers lie in the rotation-dominated region, even if the H\(\alpha\) and [Cii] points create two different clouds around the central value of \(V/\sigma=10\) for the [Cii], and \(V/\sigma=3.5\) for the H\(\alpha\). The ratio between the rotational velocity and the velocity dispersion derived from the H\(\alpha\) line is \(V/\sigma\sim 10\) in the outer regions as also found with the [Cii] tracer for this galaxy (Lelli et al., 2021) and other high-redshift dusty star-forming galaxies (e.g., Rizzo et al., 2020, 2021; Fraternali et al., 2021), and decreases down to values of \(\sim 2\) in the central region, consistent with a thick turbulent rotating disk.
The enhanced velocity dispersion of the warm ionized gas can be caused by the outflows driven by the central AGN. Theoretical models predict that outflows might remove gas from the galaxy as well as inject energy into the interstellar medium and kinematically heat the gas. The increase in the turbulence in the ionized phases of the ISM due to the feedback effect has been recently observed in Marasco et al. (2023) at \(z=0\), while a correlation between the increase of the turbulence in the galaxy and the presence of a central AGN that can power a nuclear outflow has been found in Ubler et al. (2019). The
Figure 11: Observed velocity dispersion maps and difference between H\(\alpha\) and [Cii]. In the left panel the H\(\alpha\) velocity dispersion map. In the central panel, the [Cii] velocity dispersion map rebinned to have the same spaxel size as H\(\alpha\) and matched to have the same PSF. In the right panel the spaxel-by-spaxel difference between the velocity dispersion of H\(\alpha\) and [Cii].
Figure 10: Velocity (left) and velocity dispersion (right) profile derived from H\(\alpha\) and [Cii] from Lelli et al. (2021). Blue points are the results with the [Cii] tracer by Lelli et al. (2021). Red solid lines are the results of the non-parametric fitting of the H\(\alpha\) maps from this work with associated uncertainties. In gray is the region affected by the PSF.
H\(\alpha\) gas might be affected by the galactic winds and its kinematics reflects the impact of the outflows on the host galaxy. We thus compare the kinetic energy of the gas with that injected by the outflows into the galaxy. We estimate the kinetic energy of the warm gas traced by the narrow H\(\alpha\) component, \(E_{\rm H\alpha}=M_{\rm H\alpha}\sigma_{\rm H\alpha}^{2}/2=1\times 10^{54}\) erg, by employing the H\(\alpha\) flux and velocity dispersion in the region with \(\sigma_{\rm H\alpha}-\sigma_{\rm[CII]}>70\) km s\({}^{-1}\), that is the region where the difference between the velocity dispersions in the third panel of Figure 11 is two times larger than the median error. We compare the inferred kinetic energies with the outflow energy, \(E_{\rm out}=\dot{E}_{\rm out}\,\tau_{\rm out}=2.3\times 10^{57}\) erg, where we assume that the outflow kinetic rate is constant over time and the feedback mechanism started 10 Myr ago (\(\tau_{\rm out}=10\) Myr). We find that the ratio between the energy necessary to increase the turbulence of the ISM and the energy provided by the outflow is \(E_{\rm H\alpha}/E_{\rm out}=0.05\%\). This suggests that the outflow is powerful enough to provide the kinetic energy of the ionized gas observed in the line-width enhancement region. In the outer regions of the galaxy, instead, the warm and cold gas are coupled as the velocity dispersion of both tracers reaches values of 30 km s\({}^{-1}\) and we do not find evidence of the feedback from the weak outflow or the accreting BH enhancing the turbulence at larger radii. The level of turbulence in the outskirts of the galaxy can easily be sustained by star-formation feedback, gravitational instabilities due to the accretion of gas on the disk or due to the transport of gas from outer to inner radii (Krumholz and Burkhart, 2016; Krumholz et al., 2018; Ginzburg et al., 2022).
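The comparison above is simple arithmetic; as a sketch:

```python
import astropy.units as u

E_Halpha = 1e54 * u.erg              # kinetic energy of the warm gas, M*sigma^2/2
Edot_out = 7.2e42 * u.erg / u.s      # outflow kinetic rate (Section 6)
tau_out = 10 * u.Myr                 # assumed duration of the feedback phase

E_out = (Edot_out * tau_out).to(u.erg)      # ~2.3e57 erg
ratio = (E_Halpha / E_out).decompose()      # ~5e-4, i.e. ~0.05%
```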
## 9 Conclusions
In this work, we have presented the JWST/NIRSpec Integral Field Spectrograph (IFS) observation of the AGN-host galaxy ALESS073.1 at \(z=4.755\). The observations of the high-resolution gratings have allowed us to study for the first time the rest-frame optical emission lines of a dusty-obscured SMG hosting an AGN. In particular, we exploited the H\(\alpha\) and [Nii] emission lines to trace the host galaxy kinematics, determine the presence of a BLR, and investigate the properties of ionized outflows. Our main results are the following:
* We observe a broad H\(\alpha\) component with a FWHM of \(\sim 9000\) km s\({}^{-1}\) arising from the BLR around the accreting supermassive black hole. The presence of the BLR unambiguously classifies the target as a type 1 AGN, in contrast to what was believed before due to the high observed column density of \(N_{\rm H}\sim 10^{24}\) cm\({}^{-2}\). The broad line emission implies a BH mass of \(\log(M_{\rm BH}/M_{\odot})>8.7\); the BH is accreting at a rate well below the Eddington limit (\(\lambda_{\rm Edd}<0.15\)).
* On the \(M_{\rm BH}-M_{\star}\) plane, the target lies on the relation for massive quiescent local ellipticals, classic bulges, and luminous quasars at high redshift. However, \(M_{\rm BH}/M_{\star}\) is more than one order of magnitude higher with respect to the \(M_{\rm BH}/M_{\star}\) observed in local AGN with similar stellar masses.
* We find hints of a weak, marginally resolved, ionized outflow with a mass loading factor of \(\sim 0.01\), implying that the outflow is not able to eject a large amount of gas and halt the ongoing star formation in the galaxy.
* By measuring the ratio between [Nii] and H\(\alpha\) we have found that the AGN hard radiation is the dominant source of ionization of the ISM, especially in the central region of the galaxy and for the outflow component. At larger radii we find, instead, softer radiation, compatible with emission from young, bright stars.
* Despite the low mass loading factor, the ionized outflow seems to be sufficiently energetic to increase the turbulence in the system. In fact, the kinematic analysis of the H\(\alpha\) line shows that gas turbulence in the central region is 2-3 times higher than the rest of the galaxy. This increase in turbulence might be the initial effect of the outflow on the host galaxy. However, as such high-velocity dispersion is observed only in the H\(\alpha\) kinematics and not in the map of [Cii], we conclude that the outflow is injecting turbulence in the warm and diffuse ionized gas, but it is not sufficiently powerful to disrupt the dense gas and quench star formation.
The complex scenario of galaxy-black hole co-evolution is still far from being constrained and firmly established. In this work, we have highlighted how JWST, with its high spatial resolution, spatially-resolved spectroscopy capability, and an infrared wavelength range probed with unprecedented sensitivity, significantly enhances previous studies based on observations at various wavelengths. In particular, its ability to probe the rest-frame optical emission lines at high redshift with high spectral and spatial resolution has allowed us to study the first phases of the interplay between the accreting black hole and the host galaxy and to connect kinematic measurements of high-\(z\) tracers ([Cii]) with the more traditional rest-frame optical emission line tracers used at lower redshift. Future JWST IFS observations, alongside ground-based facilities (e.g., ALMA), will allow us to
Figure 12: Evolution of the ratio between the rotational velocity and the velocity dispersion with redshift. Red and blue small symbols represent the value of \(V/\sigma\) for each ring (see also Figure 10) for H\(\alpha\) and [Cii], respectively, while large symbols are the mean \(V/\sigma\) value across the galaxy for the two tracers. Gray symbols are other literature results: circles represent results from kinematic studies that exploited ionized gas tracers (i.e., H\(\alpha\), [Oiii]) (Green et al., 2014; Turner et al., 2017; Forster Schreiber et al., 2018; Parlanti et al., 2023; de Graaff et al., 2023), while triangles represent the \(V/\sigma\) values for galaxies studied through molecular or neutral emission lines (i.e., [Cii], CO) (Rizzo et al., 2020; Fraternali et al., 2021; Jones et al., 2021; Girard et al., 2021; Lelli et al., 2021; Rizzo et al., 2021). The dashed line is the demarcation between rotationally supported (upper) and dispersion-supported (lower) systems.
better observe and understand the phenomena involved in the complex environment of dusty massive galaxies in the early Universe and their interaction with the massive black holes they host. The ALESS073.1 observations suggest that SMGs could be the evolutionary stage preceding the active quasar phase. The accreting BHs in SMGs have not yet reached the Eddington limit and outflows are not powerful enough to remove gas from the galaxy, but they are injecting energy into the system and increasing the turbulence of the less dense gas.
###### Acknowledgements.
EF is grateful to Fabio Vito for useful comments and discussions. SC and GV acknowledge support from the European Union (ERC, WINGS, 101040227). PE, RM and JS acknowledge support by the Science and Technology Facilities Council (STFC), by the ERC Advanced Grant 695671 "QUENCH", and by the UKRI Frontier Research grant RISEandFALL. RM is further supported by a research professorship from the Royal Society. SA, BRAP, MF acknowledge grant PID2021-217788-INO-100 funded by the Spanish Ministry of Science and Innovation/State Agency of Research (MICIN/AEI/10.13039/S01100101033). MP acknowledges the Programa Atracción de Talento de la Comunidad de Madrid via grant 2018-T21-11715. II. acknowledges support from PID2022-140843N-DE-22 funded by AE1 DE1 10.13039/S01100011033 and BDC 20221289 funded by MCIN by the Recovery, Transformation and Resilience Plan from the Spanish State, and by NextGenerationEU from the European Union through the Recovery and Resilience Facility. GC acknowledges the support of the INAF Large Grant 2022 "The metal circle: a new sharp view of the baryon cycle up to Cosmic Dawn with the latest generation IFU facilities." AJB, AJC, GCJ and JC acknowledge funding from the "FirstGalaxies" Advanced Grant from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 789056). HU gratefully acknowledges support by the Isaac Newton Trust and by the Kavli Foundation through a Newton-Kavli Junior Fellowship. This paper makes use of the following ALMA data: ADS/JA00.ALMA#2071.10471. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
|
2310.20433 | Simple and tight complexity lower bounds for solving Rabin games | We give a simple proof that assuming the Exponential Time Hypothesis (ETH),
determining the winner of a Rabin game cannot be done in time $2^{o(k \log k)}
\cdot n^{O(1)}$, where $k$ is the number of pairs of vertex subsets involved in
the winning condition and $n$ is the vertex count of the game graph. While this
result follows from the lower bounds provided by Calude et al [SIAM J. Comp.
2022], our reduction is simpler and arguably provides more insight into the
complexity of the problem. In fact, the analogous lower bounds discussed by
Calude et al, for solving Muller games and multidimensional parity games,
follow as simple corollaries of our approach. Our reduction also highlights the
usefulness of a certain pivot problem -- Permutation SAT -- which may be of
independent interest. | Antonio Casares, Marcin Pilipczuk, Michał Pilipczuk, Uéverton S. Souza, K. S. Thejaswini | 2023-10-31T13:11:04Z | http://arxiv.org/abs/2310.20433v1 | # Simple and tight complexity lower bounds for solving Rabin games+
###### Abstract
We give a simple proof that assuming the Exponential Time Hypothesis (ETH), determining the winner of a Rabin game cannot be done in time \(2^{o(k\log k)}\cdot n^{O(1)}\), where \(k\) is the number of pairs of vertex subsets involved in the winning condition and \(n\) is the vertex count of the game graph. While this result follows from the lower bounds provided by Calude et al [1], our reduction is considerably simpler and arguably provides more insight into the complexity of the problem. In fact, the analogous lower bounds discussed by Calude et al, for solving Muller games and multidimensional parity games, follow as simple corollaries of our approach. Our reduction also highlights the usefulness of a certain pivot problem -- Permutation SAT -- which may be of independent interest.
## 1 Introduction
We study Rabin games defined as follows. The arena of a Rabin game is a (finite) directed graph \(D\) whose vertices are divided among the two players involved: Steven and Audrey1. There is an initial vertex \(u_{1}\) on which a token is initially placed. The game proceeds in turns. Each turn, the player controlling the vertex \(u\) on which the token is currently placed chooses any outneighbour \(v\) of \(u\) and moves the token from \(u\) to \(v\). Thus, by moving the token, the players construct an infinite walk \(\rho=(u_{1},u_{2},u_{3},\ldots)\) in \(D\), called a _play_. To determine the winner, the play \(\rho\) is compared against the winning condition consisting of \(k\) pairs of vertex subsets \((G_{1},B_{1}),(G_{2},B_{2}),\ldots,(G_{k},B_{k})\) as follows: Steven wins if there exists \(i\in\{1,\ldots,k\}\) such that along \(\rho\), \(G_{i}\) is visited infinitely often while \(B_{i}\) is visited only a finite number of times; Audrey wins otherwise. The computational question associated with the game is to determine which player has a winning strategy.
Footnote 1: The right way to memorize the player names is **Steven** and **Oddrey**; the naming comes from the context of parity games.
Rabin conditions were first introduced by Rabin in his proof of decidability of S2S (monadic second order with two successors) [10]. They also naturally appear in the determinization of Buchi automata [11, 12], a key step in the synthesis problem for reactive systems with specifications given in Linear
Temporal Logic. Since then, algorithms for solving Rabin games have been extensively studied [1, 2, 10, 11]. They generalise the more well-known _parity games_, which differ by altering the winning condition as follows. Each vertex of the graph bears a _colour_, which is an integer from \(\{1,\ldots,k\}\). Steven wins a play \(\rho\) if the largest colour seen infinitely often in \(\rho\) is even, and otherwise Audrey wins. Indeed, to reduce a parity game with colours \(\{1,\ldots,k\}\) to a Rabin game with \(\lfloor k/2\rfloor\) pairs in the winning condition, it suffices to take the same graph \(D\) and set
\[G_{i}=\{\text{vertices with colours }\geqslant 2i\}\qquad\text{ and }\qquad B_{i}=\{\text{vertices with colours }\geqslant 2i+1\},\]
for all \(i\in\{1,\ldots,\lfloor k/2\rfloor\}\). Further, both parity games and Rabin games are generalised by _Muller games_, where again vertices have colours from \(\{1,\ldots,k\}\) (each vertex may bear multiple colours), and the winning condition is defined by simply providing a family \(\mathcal{F}\) of subsets of \(\{1,\ldots,k\}\) that are winning for Steven in the following sense: Steven wins a play \(\rho\) if the set of colours seen infinitely often in \(\rho\) belongs to \(\mathcal{F}\).
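The parity-to-Rabin translation above can be made concrete with a minimal Python sketch (illustrative only, not part of the original paper; function and variable names are placeholders):

```python
def parity_to_rabin(colour, k):
    """Given a parity colouring (vertex -> colour in {1, ..., k}), return the
    Rabin pairs (G_i, B_i), for i = 1, ..., floor(k/2), defined above."""
    pairs = []
    for i in range(1, k // 2 + 1):
        G_i = {v for v, c in colour.items() if c >= 2 * i}
        B_i = {v for v, c in colour.items() if c >= 2 * i + 1}
        pairs.append((G_i, B_i))
    return pairs

# Example: the largest colour seen infinitely often is even exactly when some
# pair (G_i, B_i) is visited infinitely often and finitely often, respectively.
print(parity_to_rabin({"u": 2, "v": 3, "w": 4}, k=4))
```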
In a breakthrough paper, Calude, Jain, Khoussainov, Li, and Stephan [13] proved that solving all the three games discussed above is fixed-parameter tractable when parameterised by \(k\) (the number of colours, respectively the number of pairs in the winning condition). More precisely, determining the winner of the game can be done in \(k^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}\) time, where \(n\) is the number of vertices of the arena. The recent work of Majumdar, Saglam, and Thejaswini [14] provides a more precise analysis which results in an algorithm solving Rabin games in polynomial space and time \(k!^{1+o(1)}\cdot nm\), where \(m\) is the number of edges. While the work of Calude et al. also provided a quasipolynomial-time algorithm to solve parity games, it is known that solving Rabin games is already \(\mathsf{NP}\)-complete [1, 2], while solving Muller games is \(\mathsf{PSPACE}\)-complete [12]. Hence, for those games, the existence of (quasi)polynomial-time algorithms is unlikely.
In their work, Calude et al. [13] provided also complexity lower bounds based on the Exponential Time Hypothesis (ETH, the assumption that there exists \(\delta>0\) such that 3SAT problem cannot be solved in time \(\mathcal{O}(2^{\delta n})\)) for some of the games discussed above. They proved that assuming ETH, there are no algorithms with running time \(2^{o(k\log k)}\cdot n^{\mathcal{O}(1)}\) for solving Muller games with priorities in \(\{1,\ldots,k\}\) or \(d\)-dimensional \(k\)-parity games (see preliminaries for a definition of the latter variant). Since every \(k\)-dimensional parity game can be reduced in polynomial time to a Rabin game with \(k\) pairs in the winning condition (see [1]), one can also derive, as a corollary, the same lower bound for solving Rabin games. The reduction provided by Calude et al. starts with the Dominating Set problem and is rather involved.
Our contribution.We provide a simple reduction that reproves the tight complexity lower bound for solving Rabin games that follows from the work of Calude et al. More precisely, we prove that assuming ETH, there is no algorithm for this problem with running time \(2^{o(k\log k)}\cdot n^{\mathcal{O}(1)}\). The same lower bound for (the more general) Muller games follows as a direct corollary. By a minor twist of our construction, we can also reprove the lower bound for \(k\)-dimensional parity games reported by Calude et al.
We believe that our reduction is significantly simpler and more transparent than that of Calude et al. but more importantly, it gives a better insight into the origin of the \(2^{o(k\log k)}\) factor in the complexity of the problem. Analyzing the algorithms of [13, 14, 14], this factor stems from considering all possible permutations of the \(k\) pairs of vertex subsets involved in the winning condition. In our reduction, those permutations form the space of potential solutions of a carefully chosen pivot problem -- Permutation SAT, a special case of a temporal constraint satisfaction problem -- which we discuss below.
Temporal CSPs and Permutation SAT.A constraint satisfaction problem (CSP) is the problem of deciding if there exists a variable assignment that satisfies a given set of constraints. _Temporal problems_
are a rich family of CSPs that model the planning of various events on a timeline. In a basic form, every variable corresponds to an event that needs to be scheduled at some point in time, and constraints speak about some events being in a specific order (e.g., one preceding another), at the same time, or at different times. This is usually modelled with \(\mathbb{Q}\) as the domain and constraints having access to the predicates \(<\), \(\leqslant\), \(=\), and \(\neq\). A P vs NP dichotomy for finite languages within this formalism has been provided by Bodirsky and Kara [1].
An instance of such a temporal CSP with \(k\) variables and \(n\) constraints can be solved in time \(k^{k}\cdot(k+n)^{\mathcal{O}(1)}\) as follows: since variables are accessed only via comparisons \(<\), \(\leqslant\), \(=\), and \(\neq\), without loss of generality one can restrict to assignments with values in \(\{1,2,\ldots,k\}\), and there are \(k^{k}\) such assignments that can be all checked. An interesting and challenging question is: For which languages this running time can be significantly improved?
In this paper, we focus in a particular temporal CSP: Permutation SAT. An instance of this problem is given by a boolean combination of literals of the form \(x_{1}<x_{2}<\ldots<x_{\alpha}\); a solution for it is an assignment of variables to integers making it a valid formula. We say that such a problem is an instance of \((\alpha,\beta)\)-Permutation SAT if its constraints use at most \(\beta\) literals, and each of these literals involves at most \(\alpha\) variables. Observe that, without loss of generality, in Permutation SAT one can restrict attention to assignments being surjective functions from variables \(\{x_{1},\ldots,x_{k}\}\) to \(\{1,\ldots,k\}\), which can be interpreted as permutations of \(\{1,\ldots,k\}\); this justifies the choice of the problem name and yields a brute-force algorithm with running time \(k!\cdot(k+n)^{\mathcal{O}(1)}\).
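As an illustration of this naive algorithm, here is a small Python sketch (illustrative only; the clause/literal encoding below is an assumption made for the example: a clause is a list of literals, and a literal is a tuple of variables required to appear in increasing order):

```python
from itertools import permutations

def satisfies(order, formula):
    """order: a tuple of variables (a permutation); formula: a list of clauses,
    each clause a list of literals, each literal a tuple (x1, ..., xa) meaning
    x1 < x2 < ... < xa with respect to the permutation order."""
    pos = {x: i for i, x in enumerate(order)}
    return all(
        any(all(pos[lit[j]] < pos[lit[j + 1]] for j in range(len(lit) - 1))
            for lit in clause)
        for clause in formula)

def brute_force_permutation_sat(variables, formula):
    """Try all k! orderings; return a satisfying permutation or None."""
    return next((p for p in permutations(variables) if satisfies(p, formula)), None)

# A (2, 2)-formula: (x < y  or  z < x)  and  (y < z).
print(brute_force_permutation_sat(
    ["x", "y", "z"], [[("x", "y"), ("z", "x")], [("y", "z")]]))
```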
Bonamy et al. [1] proved that \((3,\infty)\)-Permutation SAT admits no \(2^{o(k\log k)}n^{\mathcal{O}(1)}\) algorithm unless the Exponential Time Hypothesis (ETH) fails. Our main technical contribution is a similar lower bound for \((2,4)\)-Permutation SAT (Theorem 3.1). The proof of this result is a simple reduction from the \(k\times k\)-Clique problem considered by Lokshtanov, Marx, and Saurabh [12]. It is our belief that \((\alpha,\beta)\)-Permutation SAT is a problem with a very easy and robust formulation, hence its usefulness may extend beyond the application to Rabin games discussed in this work.
## 2 Preliminaries on games
For a positive integer \(p\), we denote \([p]\coloneqq\{1,\ldots,p\}\).
Rabin and Muller games are turn-based two-player games played on an _arena_ that is a directed graph \(D=(V,E)\) together with a partition of the vertices into those owned by player Steven and those owned by player Audrey. A token is initially placed on a designated starting vertex \(u_{1}\). In each consecutive turn, the owner of the vertex bearing the token moves the token along an edge of \(D\). Thus, the players jointly form an infinite sequence of vertices in consecutive turns, referred to as a _play_. An _objective_ is a representation of a subset of the set of all possible plays. We will consider three different objectives discussed below.
Muller objectives.In a Muller game, each vertex is labelled with a subset of colours from \([k]\) via a mapping \(c\colon V\to 2^{[k]}\), where \(V\) is the set of vertices of the arena \(D\). The Muller objective is specified by a family of subsets of colours \(\mathcal{F}\subseteq 2^{[k]}\). A play \(\rho\) is winning for Steven if the set of colours visited infinitely often, belongs to \(\mathcal{F}\), that is, if
\[\bigcup_{v\in\operatorname{Inf}(\rho)}c(v)\ \in\mathcal{F},\]
where \(\operatorname{Inf}(\rho)\) is the set of vertices appearing infinitely often in the play.
Rabin objective.A Rabin objective of _degree_\(k\) consists of \(k\) pairs of vertex subsets \((G_{1},B_{1}),\dots,(G_{k},B_{k})\); \(G_{i}\) is said to be the _good_ subset for index \(i\), and \(B_{i}\) is the _bad_ subset. A play \(\rho\) is winning for Steven if there exists an index \(i\in\{1,\dots,k\}\) such that \(\rho\) visits \(G_{i}\) infinitely often and \(B_{i}\) only a finite number of times.
Rabin objectives of degree \(k\) can be encoded as a Muller objective using \(2k\) colours. Indeed, for each \(0\leqslant i<k\), we associate \(2i\) with the subset \(G_{i}\) and \(2i+1\) with the subset \(B_{i}\). We define \(c\colon V\to 2^{[2k]}\) and \(\mathcal{F}\) as:
\[c(v)=\{2i\colon v\in G_{i}\}\cup\{2i+1\colon v\in B_{i}\}\ \ \ \text{and}\ \ \ \mathcal{F}=\{C\subseteq[2k]\colon\exists i\ 2i\in C\text{ and }2i+1\notin C\}.\]
Generalised parity objective.Generalised parity games were first considered in the work of Chatterjee, Henzinger, and Piterman [1]. In a \(d\)-dimensional \(k\)-parity condition, each vertex is labelled with a \(d\)-dimensional vector of integers from \(\{1,\dots,k\}\). An infinite play satisfies this objective for Steven if and only if there is some coordinate such that the highest number that occurs infinitely often at this coordinate is even. Audrey wins otherwise.
These games are inter-reducible with Rabin games, as shown by [1]. For one direction, since a \(d\)-dimensional \(k\)-parity objective is a disjunction of \(d\) distinct parity objectives, and each parity objective can be expressed as a Rabin objective of degree \(\lceil k/2\rceil\), the \(d\)-dimensional \(k\)-parity objective can therefore similarly be transformed into a Rabin objective of degree \(d\lceil k/2\rceil\), with \(\lceil k/2\rceil\) Rabin pairs for _each_ of the \(d\) parity objectives. Conversely, a Rabin objective with \(d\) pairs can be represented as a \(d\)-dimensional \(3\)-parity objective. Indeed, we use each pair \((G_{i},B_{i})\) to define the component \(p_{i}\) that assigns colour 3 to \(v\) when \(v\in B_{i}\), colour 2 if \(v\in G_{i}\setminus B_{i}\) and 1 otherwise.
Calude et al. [1] showed that generalised parity games cannot be solved in time \(2^{o(k\log k)}\cdot n^{\mathcal{O}(1)}\) assuming the ETH, even when the dimension is \(d=2\).
Strategies and winners.For a given game, with any of the objectives discussed above, a _strategy_ of Steven is a function from the set of plays ending at a Steven vertex to the set of vertices. A play \(v_{0},v_{1},\dots,v_{i},\dots\) is said to respect this strategy if for every vertex \(v_{i}\) which belongs to Steven, the vertex \(v_{i+1}\) is the one proposed by the strategy on the finite prefix of this play ending at \(v_{i}\). For a fixed objective, a game is said to be winning for Steven if he has a strategy such that plays respecting this strategy satisfy the objective.
Positional strategies.We say that a strategy (for Steven) is _positional_ (or _memoryless_) if it can be represented by a function assigning an outgoing edge to each vertex owned by Steven. That is, a positional strategy always makes the same decision over the same vertex, and this decision depends only on the current vertex and not on the history of the play. It is well known that Rabin games are positional for Steven in the following sense.
**Lemma 2.1** ([1, 1]).: _Rabin games are positional for Steven. That is, if Steven wins a Rabin game, then he has a positional winning strategy._
Exponential Time Hypothesis.The Exponential Time Hypothesis is a complexity assumption introduced by Impagliazzo, Paturi and Zane [11] that postulates the following: there exists \(\delta>0\) such that the 3-SAT problem cannot be solved in time \(\mathcal{O}(2^{\delta n})\), where \(n\) is the number of variables of the input formula. We refer the reader to [1, Chapter 14] for an introduction to the applications of ETH for lower bounds within parameterized complexity.
## 3 Permutation SAT
Fix integers \(\alpha\geqslant 2\) and \(\beta\geqslant 1\) and let \(X\) be a finite set of variables. An _\(\alpha\)-literal_ is a predicate of the form \(x_{1}<x_{2}<\ldots<x_{\alpha^{\prime}}\) (being a shorthand for \((x_{1}<x_{2})\wedge(x_{2}<x_{3})\wedge\ldots\wedge(x_{\alpha^{\prime}-1}<x_{ \alpha^{\prime}})\)) for some \(2\leqslant\alpha^{\prime}\leqslant\alpha\) and variables \(x_{1},x_{2},\ldots,x_{\alpha^{\prime}}\) belonging to \(X\); a _literal_ is a \(2\)-literal (i.e., a predicate of the form \(x_{1}<x_{2}\)). An _\((\alpha,\beta)\)-clause_ is a disjunction of at most \(\beta\)\(\alpha\)-literals, and an _\((\alpha,\beta)\)-formula_ is a conjunction of \((\alpha,\beta)\)-clauses. By \(\beta\)-clauses and \(\beta\)-formulas we mean \((2,\beta)\)-clauses and \((2,\beta)\)-formulas, respectively.
If \(\phi\) is a formula with variable set \(X\), then for a permutation \(\pi\) of \(X\) we define the satisfaction of (literals and clauses of) \(\phi\) by \(\pi\) in the obvious manner. In the \((\alpha,\beta)\)-Permutation SAT problem we are given an \((\alpha,\beta)\)-formula \(\phi\) and the task is to decide whether there exists a permutation of the variables of \(\phi\) that satisfies \(\phi\). \(\beta\)-Permutation SAT is a shorthand for \((2,\beta)\)-Permutation SAT.
In this section we prove the following hardness result.
**Theorem 3.1**.: _Assuming ETH, there is no algorithm for \(4\)-Permutation SAT that would work in time \(2^{o(k\log k)}\cdot n^{\mathcal{O}(1)}\), where \(k\) is the number of variables and \(n\) is the number of clauses._
To prove Theorem 3.1 we use the problem \(k\times k\)-Clique considered by Lokshtanov, Marx, and Saurabh [10]. They showed that, unless ETH fails, this problem cannot be solved in \(2^{o(k\log k)}\)-time. We first define \(k\times k\)-Clique below and then reduce \(k\times k\)-Clique to \(4\)-Permutation SAT.
An instance of the \(k\times k\)-Clique problem is an undirected graph \(G\) with the vertex set \(\{1,\ldots,k\}\times\{1,\ldots,k\}\) (which we can represent as a grid). This graph \(G\) is a positive instance of \(k\times k\)-Clique if there is one vertex from each _row_ of the grid that forms a \(k\)-clique, that is, a \(k\)-clique in which no two vertices share the same first component.
**Theorem 3.2** ([10, Theorem 2.4]).: _Assuming ETH, there is no \(2^{o(k\log k)}\)-time algorithm for \(k\times k\)-Clique._
The reduction.We now reduce \(k\times k\)-Clique to \(4\)-Permutation SAT. Suppose \(G\) is an instance of \(k\times k\)-Clique. We construct a \(4\)-formula \(\phi_{G}\) over variable set \(X\coloneqq\{x_{1},\ldots,x_{k},x_{k+1},y_{1},\ldots,y_{k}\}\) as follows.
Recall that the vertices of the graph \(G\) are of the form \((i,j)\) for \(i,j\in\{1,\ldots,k\}\). We say that vertex
Figure 1: The construction in Section 3. The highlighted clique corresponds to permutation \(x_{1}<y_{4}<x_{2}<y_{1}<y_{3}<x_{3}<x_{4}<y_{2}<x_{5}\) (with \(y_{1}\) and \(y_{3}\) possibly swapped). The dashed non-edge \(((4,3),(3,4))\) is disallowed by the clause \(\neg\left((x_{4}<y_{3}<x_{5})\wedge(x_{3}<y_{4}<x_{4})\right)\) which ensures if \(y_{4}\) appears between \(x_{3}\) and \(x_{4}\), then \(y_{3}\) does not appear between \(x_{4}\) and \(x_{5}\).
\((i,j)\) is in the \(i^{th}\) row and \(j^{th}\) column. To construct \(\phi_{G}\), we first write the following \(3k\) many \(1\)-clauses:
\[x_{1}<x_{2},\quad x_{2}<x_{3},\quad\ldots,\quad x_{k}<x_{k+1},\] \[x_{1}<y_{1},\quad x_{1}<y_{2},\quad\ldots,\quad x_{1}<y_{k}\] \[y_{1}<x_{k+1},\quad y_{2}<x_{k+1},\quad\ldots,\quad y_{k}<x_{k+1}\]
The conjunction of these clauses ensures that in any permutation satisfying \(\phi_{G}\), the variables \(x_{1},\ldots,x_{k+1}\) are ordered exactly in this way, while variables \(y_{1},\ldots,y_{k}\) are sandwiched between \(x_{1}\) and \(x_{k+1}\). In other words, the \(y\)-variables that are placed between \(x_{j}\) and \(x_{j+1}\) indicate the rows that choose their clique vertices from the \(j^{\text{th}}\) column; and for some j's, this set may be empty as well.
Next, we introduce clauses that restrict the placement of variables \(y_{1},\ldots,y_{k}\) within the chain \(x_{1}<x_{2}<\ldots<x_{k+1}\). The intention is the following: placing \(y_{i}\) between \(x_{j}\) and \(x_{j+1}\) corresponds to choosing the vertex \((i,j)\) to the clique. Hence, it remains to introduce clauses ensuring that vertices chosen in this way in consecutive rows are pairwise adjacent. To this end, for every pair \((a,b),(c,d)\) of vertices non-adjacent in \(G\), we construct the following \(4\)-clause:
\[(y_{a}<x_{b})\vee(x_{b+1}<y_{a})\vee(y_{c}<x_{d})\vee(x_{d+1}<y_{c}).\]
Note that logically, this \(4\)-clause is equivalent to the following:
\[\neg\left((x_{b}<y_{a}<x_{b+1})\wedge(x_{d}<y_{c}<x_{d+1})\right).\]
Thus, intuitively speaking, the \(4\)-clause forbids simultaneously choosing \((a,b)\) and \((c,d)\) to the clique.
This concludes the construction of the formula \(\phi_{G}\). It remains to verify the correctness of the reduction.
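For concreteness, the construction can be sketched in a few lines of Python (illustrative only, reusing the literal/clause encoding of the earlier brute-force sketch: a literal is a pair \((u,v)\) meaning \(u<v\), and a clause is a list of literals):

```python
def clique_to_permutation_sat(k, non_edges):
    """Build the 4-formula phi_G for a k x k-Clique instance.  non_edges is the
    set of pairs ((a, b), (c, d)) of grid vertices that are NOT adjacent in G.
    Variables are the strings 'x1', ..., 'x(k+1)' and 'y1', ..., 'yk'."""
    x = [f"x{i}" for i in range(1, k + 2)]   # x[j - 1] stands for x_j
    y = [f"y{i}" for i in range(1, k + 1)]   # y[i - 1] stands for y_i
    clauses = []
    # 1-clauses: x_1 < x_2 < ... < x_{k+1}, and x_1 < y_i < x_{k+1} for every i.
    clauses += [[(x[j], x[j + 1])] for j in range(k)]
    clauses += [[(x[0], y[i])] for i in range(k)]
    clauses += [[(y[i], x[k])] for i in range(k)]
    # 4-clauses forbidding the simultaneous choice of two non-adjacent vertices.
    for (a, b), (c, d) in non_edges:
        clauses.append([(y[a - 1], x[b - 1]), (x[b], y[a - 1]),
                        (y[c - 1], x[d - 1]), (x[d], y[c - 1])])
    return x + y, clauses
```

Running the resulting formulas through the brute-force checker on tiny grids is a cheap sanity check of Lemma 3.3 below.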
**Lemma 3.3**.: _The graph \(G\) admits a \(k\)-clique with one vertex from each row if and only if \(\phi_{G}\) is satisfiable._
Proof.: First, suppose \(G\) contains a \(k\)-clique \(K=\{(1,b_{1}),\ldots,(k,b_{k})\}\). Consider any permutation \(\pi\) of \(X\) such that
* \(x_{1}<x_{2}<\cdots<x_{k}<x_{k+1}\), and
* \(x_{b_{i}}<y_{i}<x_{b_{i}+1}\), for all \(i\in\{1,\ldots,k\}\).
(Note that \(\pi\) is not defined uniquely: the relative placement of \(y_{i}\) and \(y_{i^{\prime}}\) can be arbitrary whenever \(b_{i}=b_{i^{\prime}}\).) It can be easily seen that, since \(K\) is a clique, all clauses in \(\phi_{G}\) are satisfied. The \(1\)-clauses are satisfied trivially, while every \(4\)-clause constructed for a non-adjacent pair \((a,b),(c,d)\) is satisfied because \((a,b)\) and \((c,d)\) cannot simultaneously belong to \(K\).
Suppose now that there is an ordering of \(X\) that satisfies \(\phi_{G}\). Clearly, it must be the case that \(x_{1}<x_{2}<\cdots<x_{k}<x_{k+1}\). Further, for every \(i\in\{1,\ldots,k\}\) we have \(x_{1}<y_{i}<x_{k+1}\) and therefore, there exists \(j_{i}\) such that \(x_{j_{i}}<y_{i}<x_{j_{i}+1}\). We let \(K\coloneqq\{(i,j_{i})\colon i\in\{1,\ldots,k\}\}\); note that \(K\) contains one vertex from each row. We claim that \(K\) is a clique in \(G\). Indeed, since in \(\phi_{G}\) there is a clause disallowing that \(((x_{b}<y_{a}<x_{b+1})\wedge(x_{d}<y_{c}<x_{d+1}))\) whenever there is no edge between \((a,b)\) and \((c,d)\), all vertices of \(K\) must be pairwise adjacent.
This concludes the proof of Theorem 3.1. We remark that establishing the complexity of \(2\)- and \(3\)-Permutation Sat remains an interesting and challenging open problem. Eriksson in his MSc thesis [1] shows that \(2\)-Permutation Sat can be solved in time \(((k/2)!)^{2}\cdot(k+n)^{\mathcal{O}(1)}\), which gives roughly a \(2^{k/2}\) multiplicative improvement over the naive algorithm.
For a broader context, we also remark that a more general variant of Permutation Sat is Permutation MaxSAT, where we ask for an assignment that satisfies as many constraints as possible (instead of asking
to satisfy all of them). Observe that \((2,1)\)-Permutation SAT is equivalent to a problem of checking if a given directed graph is acyclic (and thus solvable in polynomial time) while \((2,1)\)-Permutation MaxSAT is equivalent to finding a maximum acyclic subdigraph (which is NP-hard). A simple folklore dynamic programming algorithm solves \((2,1)\)-Permutation MaxSAT in \(2^{\mathcal{O}(k)}n^{\mathcal{O}(1)}\) time and this algorithm can be generalised to \((3,1)\)-Permutation MaxSAT [12]. Kim and Goncalves [13] proved that \((4,1)\)-Permutation MaxSAT admits no \(2^{o(k\log k)}n^{\mathcal{O}(1)}\) algorithm unless the Exponential Time Hypothesis fails.
## 4 Lower bound for Rabin games
Finally, in this section, we prove the main result of this paper, stated as Theorem 4.1 below.
**Theorem 4.1**.: _Assuming the Exponential Time Hypothesis, there is no algorithm that solves Rabin games with \(n\) vertices and degree \(k\) in time \(2^{o(k\log k)}\cdot n^{\mathcal{O}(1)}\)._
As mentioned earlier, we reduce from \(4\)-Permutation SAT.
The reduction.Let \(\phi=C_{1}\wedge C_{2}\wedge\cdots\wedge C_{m}\) be an instance of \(4\)-Permutation SAT over \(k\) variables \(\{x_{1},\ldots,x_{k}\}\), where \(C_{1},\ldots,C_{m}\) are \(4\)-clauses. We construct an instance of Rabin Game such that, in this instance, there is a winning strategy for Steven iff \(\phi\) is satisfiable.
We first define the game graph \(D\); see Figure 2. There is an initial vertex \(\Delta\), as well as vertices \([C_{1}],\ldots,[C_{m}]\), one for each of the \(m\)\(4\)-clauses in \(\phi\). Further, for each possible literal \(x_{i}<x_{j}\), where \(i,j\in\{1,\ldots,k\}\) and \(i\neq j\), there is a vertex \([x_{i}<x_{j}]\). Vertex \(\Delta\) belongs to Audrey, while all other vertices belong to Steven.
The intention is that whenever Audrey moves the token currently placed at \(\Delta\), she chooses a clause that she wishes to see satisfied. To facilitate this, we add edges \(\Delta\rightarrow[C_{\ell}]\) for all \(\ell\in\{1,\ldots,m\}\). Once the token is at a vertex \([C_{\ell}]\), Steven needs to respond with a literal present in \(C_{\ell}\); the intention is for it to be a true literal in \(C_{\ell}\). Therefore, for every clause \(C_{\ell}\) and literal \(x_{i}<x_{j}\) present in \(C_{\ell}\), we add the edge \([C_{\ell}]\rightarrow[x_{i}<x_{j}]\). Finally, to allow Audrey to check further clauses, we add edges back to \(\Delta\): for every literal \(x_{i}<x_{j}\), there is an edge \([x_{i}<x_{j}]\rightarrow\Delta\).
Next, we define subset pairs constituting the winning condition. For each \(i\in\{1,\ldots,k\}\), we set
\[G_{i}=\{[x_{j}<x_{i}]\colon j\in\{1,\ldots,k\}\setminus\{i\}\}\qquad\text{and} \qquad B_{i}=\{[x_{i}<x_{j}]\colon j\in\{1,\ldots,k\}\setminus\{i\}\}.\]
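The whole construction is small enough to sketch directly (illustrative only, not part of the original paper; clause vertices are indexed by integers and a literal vertex \([x_{i}<x_{j}]\) is represented by the pair \((i,j)\)):

```python
def permutation_sat_to_rabin_game(k, clauses):
    """clauses: a list of 4-clauses, each a list of literals (i, j) meaning
    x_i < x_j.  Returns (edges, owner, rabin_pairs); 'Delta' is the initial
    vertex owned by Audrey, and every other vertex belongs to Steven."""
    owner = {"Delta": "Audrey"}
    edges = {"Delta": list(range(len(clauses)))}       # Delta -> [C_l]
    for i in range(1, k + 1):                          # literal vertices
        for j in range(1, k + 1):
            if i != j:
                owner[(i, j)] = "Steven"
                edges[(i, j)] = ["Delta"]              # [x_i < x_j] -> Delta
    for ell, clause in enumerate(clauses):             # clause vertices
        owner[ell] = "Steven"
        edges[ell] = list(clause)                      # [C_l] -> its literals
    rabin_pairs = [({(j, i) for j in range(1, k + 1) if j != i},   # G_i
                    {(i, j) for j in range(1, k + 1) if j != i})   # B_i
                   for i in range(1, k + 1)]
    return edges, owner, rabin_pairs
```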
Before we proceed to the formal verification of the correctness of the reduction, let us give some intuition. It is easy to see that every third turn, the token is placed at vertex \(\Delta\). At each such moment, Audrey chooses to move the token to any vertex corresponding to a clause \(C_{\ell}\), with the intention of challenging Steven about the satisfaction of \(C_{\ell}\). Then Steven has to declare the literal that satisfies \(C_{\ell}\). If Steven tries to "cheat" by picking literals that cannot be extended to a full ordering of the variables, then the winning condition is designed in such a way that the play will be losing for him. Consider the illustration in Figure 2, which shows the construction for an instance \(\phi\) of \(4\)-Permutation SAT consisting of \(m\) clauses, where the clause \(C_{3}\) is \((x_{1}<x_{4})\vee(x_{2}<x_{1})\vee(x_{2}<x_{3})\vee(x_{k-1}<x_{2})\). The vertices in \(G_{1}\) are highlighted in green and the vertices in \(B_{1}\) are highlighted in blue.
**Lemma 4.2**.: _The instance \(\phi\) of \(4\)-Permutation SAT is satisfiable if and only if Steven has a winning strategy in the constructed Rabin game._
Proof.: First suppose that \(\phi\) is satisfiable and consider a satisfying permutation \(\pi\). This gives rise to a (positional) winning strategy for Steven: for each vertex \([C_{\ell}]\), Steven picks the edge leading to the vertex \([x_{i}<x_{j}]\) corresponding to any literal of \(C_{\ell}\) that is satisfied in \(\pi\). Consider now any infinite play \(\rho\) where Steven obeys this strategy. Let \(L\) be the set of literal vertices visited infinitely often by \(\rho\), and let \(i_{\max}\) be such that \(x_{i_{\max}}\) is the variable that is the largest in \(\pi\) among the variables appearing in the literals of \(L\). We argue that \(\rho\) satisfies the constructed Rabin condition with the index \(i_{\max}\) as a witness. Indeed, \(L\) intersects \(G_{i_{\max}}\), as \(\rho\) visits \([x_{i}<x_{i_{\max}}]\) infinitely often for some \(i\). Moreover, the intersection of \(L\) with \(B_{i_{\max}}\) is empty: a vertex \([x_{i_{\max}}<x_{i}]\) visited infinitely often would belong to \(L\) and, since Steven only selects literals satisfied in \(\pi\), this would contradict the maximality of \(x_{i_{\max}}\).
Suppose now that \(\phi\) is not satisfiable. By Lemma 2.1, it suffices to show that Audrey can win against any positional strategy of Steven. Indeed, consider a fixed positional strategy of Steven: for each Steven vertex \([C_{\ell}]\) the strategy picks an edge \([C_{\ell}]\to[x_{a_{\ell}}<x_{b_{\ell}}]\) for some literal \(x_{a_{\ell}}<x_{b_{\ell}}\) appearing in \(C_{\ell}\). Since \(\phi\) is not satisfiable, the set \(\{x_{a_{\ell}}<x_{b_{\ell}}\colon\ell\in[m]\}\) of all selected literals has a cycle (otherwise, any ordering extending the selected literals would satisfy every clause, and hence \(\phi\)). That is, there are variables \(x_{c_{1}},\ldots,x_{c_{p}}\) such that the literals \(x_{c_{1}}<x_{c_{2}},x_{c_{2}}<x_{c_{3}},\ldots,x_{c_{p-1}}<x_{c_{p}},x_{c_{p}}<x_{c_{1}}\) are among those selected by Steven's strategy. Observe now that against this fixed positional strategy of Steven, Audrey may set up a counter-strategy that repeatedly visits each of the vertices \([x_{c_{1}}<x_{c_{2}}],[x_{c_{2}}<x_{c_{3}}],\ldots,[x_{c_{p-1}}<x_{c_{p}}],[x_{c_{p}}<x_{c_{1}}]\) in a cycle, so that these are exactly the literal vertices visited infinitely often in the play. Then this play does not satisfy the constructed Rabin condition, since for each \(i\in\{1,\ldots,k\}\), the set of vertices occurring infinitely often either intersects both \(B_{i}\) and \(G_{i}\) (if \(i\in\{c_{1},\ldots,c_{p}\}\)), or is disjoint from both \(B_{i}\) and \(G_{i}\) (if \(i\notin\{c_{1},\ldots,c_{p}\}\)). Hence, Audrey may win against any fixed positional strategy of Steven.
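The cycle among the selected literals used in the second direction of the proof can be found by a straightforward depth-first search; the following helper (illustrative only, not part of the paper) returns such a cycle, or None when the selected literals are acyclic and hence extend to a satisfying permutation:

```python
def find_cycle(selected_literals):
    """selected_literals: a set of pairs (i, j), one per clause, meaning that
    Steven's positional strategy answers clause vertices with [x_i < x_j]."""
    succ = {}
    for i, j in selected_literals:
        succ.setdefault(i, []).append(j)
    state = {}                                    # None: on DFS stack, True: done

    def dfs(v, path):
        state[v] = None
        path.append(v)
        for w in succ.get(v, []):
            if state.get(w, True) is None:        # back edge: cycle found
                return path[path.index(w):]
            if w not in state:
                cycle = dfs(w, path)
                if cycle:
                    return cycle
        state[v] = True
        path.pop()
        return None

    for v in list(succ):
        if v not in state:
            cycle = dfs(v, [])
            if cycle:
                return cycle
    return None

print(find_cycle({(1, 2), (2, 3), (3, 1), (4, 1)}))   # e.g. [1, 2, 3]
```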
Using the reductions shown in the preliminaries, we obtain similar corollaries for Muller and generalised parity objectives.
**Corollary 4.3**.: _Assuming the Exponential Time Hypothesis, there is no algorithm that solves Muller games with \(n\) vertices and \(k\) colours in time \(2^{o(k\log k)}\cdot n^{\mathcal{O}(1)}\)._
**Corollary 4.4**.: _Assuming the Exponential Time Hypothesis, there is no algorithm that solves \(d\)-dimensional \(3\)-parity games with \(n\) vertices in time \(2^{o(d\log d)}\cdot n^{\mathcal{O}(1)}\)._
We conclude by remarking that we can also extend our result to \(2\)-dimensional \(k\)-parity games. Indeed, consider the following assignment of colours to the same game graph \(D\): for each vertex of the form \([x_{j}<x_{i}]\), we assign the two-dimensional colour \((2j+1,2i)\). The correctness of this reduction is similar to that for Rabin games presented above, hence we leave the verification to the reader.
**Corollary 4.5**.: _Assuming the Exponential Time Hypothesis, there is no algorithm that solves \(2\)-dimensional \(k\)-parity games with \(n\) vertices in time \(2^{o(k\log k)}\cdot n^{\mathcal{O}(1)}\)._
Acknowledgements.A large part of the results presented in this paper were obtained during Autoboz 2023, an annual research camp on automata theory. The authors thank the organisers and participants of Autoboz for creating a wonderful research atmosphere.
|
2309.06597 | Rank2Tell: A Multimodal Driving Dataset for Joint Importance Ranking and
Reasoning | The widespread adoption of commercial autonomous vehicles (AVs) and advanced
driver assistance systems (ADAS) may largely depend on their acceptance by
society, for which their perceived trustworthiness and interpretability to
riders are crucial. In general, this task is challenging because modern
autonomous systems software relies heavily on black-box artificial intelligence
models. Towards this goal, this paper introduces a novel dataset, Rank2Tell, a
multi-modal ego-centric dataset for Ranking the importance level and Telling
the reason for the importance. Using various close and open-ended visual
question answering, the dataset provides dense annotations of various semantic,
spatial, temporal, and relational attributes of various important objects in
complex traffic scenarios. The dense annotations and unique attributes of the
dataset make it a valuable resource for researchers working on visual scene
understanding and related fields. Furthermore, we introduce a joint model for
joint importance level ranking and natural language captions generation to
benchmark our dataset and demonstrate performance with quantitative
evaluations. | Enna Sachdeva, Nakul Agarwal, Suhas Chundi, Sean Roelofs, Jiachen Li, Mykel Kochenderfer, Chiho Choi, Behzad Dariush | 2023-09-12T20:51:07Z | http://arxiv.org/abs/2309.06597v2 | # Rank2Tell: A Multimodal Driving Dataset for Joint Importance Ranking and Reasoning
###### Abstract
The widespread adoption of commercial autonomous vehicles (AVs) and advanced driver assistance systems (ADAS) may largely depend on their acceptance by society, for which their perceived trustworthiness and interpretability to riders are crucial. In general, this task is challenging because modern autonomous systems software relies heavily on black-box artificial intelligence models. Towards this goal, this paper introduces a novel dataset, Rank2Tell\({}^{1}\), a multi-modal ego-centric dataset for **Ranking** the importance level and **Tell**ing the reason for the importance. Using various close and open-ended visual question answering, the dataset provides dense annotations of various semantic, spatial, temporal, and relational attributes of various important objects in complex traffic scenarios. The dense annotations and unique attributes of the dataset make it a valuable resource for researchers working on visual scene understanding and related fields. Furthermore, we introduce a joint model for joint importance level ranking and natural language captions generation to benchmark our dataset and demonstrate performance with quantitative evaluations.
## 1 Introduction
Effective and accurate understanding of visual scenes is an important prerequisite for safe navigation of autonomous vehicles and advanced driver assistance systems, especially in complex and highly crowded urban scenarios. Despite significant advancement in the development of self-driving and driver-support technologies, public acceptance of these systems remains limited. A survey conducted by Partners for Automated Vehicle Education (PAVE) in 2020 [14] reported that 60% of respondents would be more likely to trust autonomous vehicles if they have a better understanding of the underlying rationale of the models. To improve the transparency of these systems, intelligent vehicles must have the ability to identify critical traffic agents whose behaviors can influence their own decision making [24, 25, 29, 50]. Identifying these important agents allows for a more efficient allocation of computation resources toward predicting the actions of a subset of critical objects and identifying potential risks. To establish trust, the autonomous system must provide human-interpretable reasoning about the important agents in the scene, through voice or visual interfaces.
Accurately identifying important agents within the environment and providing human-interpretable reasoning requires a scene understanding model to capture several essential scene features effectively, including 3D mapping, the semantics of the scene, spatial and temporal relations, agents' importance level, actions, intentions, and attention. Additionally, the ability to reason about important agents in a human-interpretable manner is crucial for capturing the essence of the scene. By integrating these comprehensive features, the model's ability to understand and reason about the scene is greatly enhanced. One key to successfully applying these approaches is the availability of traffic datasets with rich human annotations of object importance and reasoning to address the interpretability and trustworthiness of autonomous systems operating in an interacting environment. However, there currently exists no comprehensive real-world driving dataset that provides all these features.
To address this challenge and facilitate future research, we propose a novel ego-centric, multi-modal dataset for visual scene understanding in urban traffic scenarios. The dataset uses 2D image features and 3D LiDAR features to provide dense semantics, temporal and relational annotations of important agents that influence the ego vehicle's decision making. In addition, it provides diverse natural language explanations to enable reasoning about why a particular agent in a scene is of importance. We aim to improve the transparency and interpretability of the visual scene understanding modules of autonomous systems. The proposed
dataset has the potential to assist drivers in conveying important decisions, improve their situational awareness, and warn passengers about potential safety hazards present in the surrounding environment. We also introduce a model that uses multi-modal 2D+3D features to jointly predict the importance level and generate captions of important agents.
The contributions of this paper are as follows. **First**, we introduce the first multi-modal dataset, _Rank2Tell_, for importance level ranking and natural language explanation tasks in urban traffic scenarios. The data is annotated using visual question answering (VQA) that combines video and object-level attributes. **Second**, we propose a model that uses multi-modal features for joint importance level classification and natural language captioning, and also establish a benchmark suite on these tasks. **Third**, we introduce the key features of the dataset that can potentially be used to enhance scene understanding for safety-critical applications.
## 2 Related Work
### Object Importance Classification
Identifying important objects in images or videos has attracted a great deal of research interest in various domains. There are several visual saliency studies that have created a pixel-level importance mapping for tasks such as visual question answering [3, 48], scene understanding [38], video summarising [17], driver attention [26], and AV steering control [19]. However, these methods do not treat objects as distinct instances. Several studies focus on identifying the single most important object in a scene, such as the most important person in a social situation [27]. Several other studies make a binary classification between "important" and "unimportant" for multiple objects in a scene [23, 24, 51, 15]. However, only one of these studies [24] attempts to couple natural language with importance estimation but this work only provides a description of a single important object. Other studies [31, 42] classify agents in the scenes at three different importance levels: high, medium, and low. However, none of these studies explains the underlying reasoning for the importance estimation for a specific object.
### Dense Captioning
There has been a large amount of research on generating natural language to describe visual images or videos. The idea of self-attention was proposed in the visual captioning task [43], which has been the basis for many further visual captioning research including [5, 12]. Some dense visual captioning methods aim to generate a caption for multiple regions in the same image [18, 45, 47] using the Visual Genome dataset [22]. The ScanRefer dataset was proposed for object localization and language description tasks [7]. Our problem setting is similar to dense image captioning but we focus on generating explanations for the important objects in traffic scenarios with spatio-temporal observations.
### Datasets
In recent years, many traffic scene datasets have been proposed to stimulate progress in the analysis of the important objects in driving scenes. HDD [34], ROI-A [1] and KITTI [31] are object localization datasets that benchmark importance localization and anomaly detection. HAD [20], BDD-X [21], BDD-OIA [44] are captioning driving datasets that provide reasons for the ego vehicle's actions in natural language descriptions. DRAMA [30] provides important object labels with captions from the ego car's perspective while considering spatio-temporal relationships from videos. While DRAMA [30] is the most relevant dataset to our proposed Rank2Tell, the major differences are shown in Table 1.
## 3 Rank2Tell Dataset
The dataset is collected using an instrumented vehicle equipped with three Point Grey Grasshopper video cameras with a resolution of \(1920\times 1200\) pixels, a Velodyne HDL-64E S2 LiDAR sensor, and high precision GPS. Additionally, Vehicle Controller Area Network (CAN) data is collected for analyzing how drivers manipulate steering, breaking, and throttle. All sensor data are synchronized and timestamped using ROS and customized hardware and software. The dataset contains a diverse set of traffic scenes captured at intersections in different environments, including urban areas. We selected 116 clips of approximately 20 seconds each, focusing on intersections from several hours of data. These video clips capture both the entering and exiting of the ego vehicle from the intersection.
### Annotation Methodology
The important agent identification is subjective in nature and can vary based on factors such as age, gender, and driving experience. To account for this diversity of opinions, each video was annotated by five annotators with varying levels of driving experience and age. The detailed statistics of annotators are introduced in the supplementary material.
We first stitch images from the three cameras, i.e., front, left, and right, which together provide a wide FOV (134 degrees horizontal). Each frame of a video is overlaid with the ego vehicle's speed and its intention while leaving the intersection (left/right/straight), indicated using an arrow. Annotators annotated every 4th frame of the 10 fps video. To avoid bias in important agent identification due to prior knowledge of other agents' future intentions, the annotators had access to only 40 historical frames (i.e., 4 seconds of the video) while annotating a certain frame.
We divide our annotation scheme into three parts: _Important Agent Identification_, _Important Agent Localization_
and Ranking_, and _Captioning_. The annotation schema is shown in Figure 2.
**Important Agent Identification.** The first step is to identify whether there exist important agents in the scene that have the potential to influence the ego vehicle. To this end, we overlay the ego vehicle's intention (i.e., straight, left, right) and speed obtained from CAN bus data on the stitched images before passing them on to the annotators, as shown in Figure 1; this provides annotators with important context about driver behavior and intent that is crucial for identifying important agents. This is in contrast with other works: [31] does not provide this information at all, and [30] asks the annotators to label the ego vehicle's intention along
\begin{table}
\begin{tabular}{c|c c c c c|c c c c|c c c} & \multicolumn{6}{c|}{Important Agents Identification only} & \multicolumn{6}{c|}{Captioning only} & \multicolumn{6}{c}{Both} \\ \hline Dataset & HDD [34] & EF [49] & SA [6] & DoTA [46] & KITTI [31] & A-SASS [42] & HAD [20] & BDD-X [21] & BDD-OIA [44] & T2C [11] & DRAMA [30] & Ours \\ \hline \# Scenarios & 137 & 3,000 & 1733 & 4,677 & - & 10 & 5,744 & 6,984 & 11,303 & 850 & 17,785 & 116 \\ \# Frames annotated/video & \(\geq\)1 & \(\geq\)1 & \(\geq\)1 & \(\geq\)1 & - & \(\geq\)1 & \(\geq\)1 & \(\geq\)1 & \(\geq\)1 & 1 & 1 & \(\geq\)1 \\ Avg. video duration (in sec) & 2700 & 4 & 5 & - & - & - & 20 & 40 & 5 & 20 & 2 & 20 \\ Importance Localization & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✓ & ✗ & ✗ & ✗ & ✓ & ✓ \\ Importance Captioning & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & ✓ \\ Importance Ranking & ✗ & ✗ & ✗ & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ \\ \# Important objects/frame & 1 & 1 & \(\geq\)1 & \(\geq\)1 & \(\geq\)1 & \(\geq\)1 & - & - & - & - & - & 1 & \(\geq\)1 \\ \# Captions/frame & - & - & - & - & - & - & 1 & 1 & 1 & - & 1 & \(\geq\)1 \\ \# Captions/object & - & - & - & - & - & - & 1 & 1 & 1 & - & 1 & \(\geq\)1 \\ Avg caption length & - & - & - & - & - & - & 11.05 & 8.90 & 6.81 & 11.01 & 17.97 & 31.95 \\ RGB & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ LiDAR & ✗ & ✗ & ✗ & ✗ & ✓ & ✗ & ✗ & ✗ & ✗ & ✓ & ✗ & ✓ \\
3D boxes & ✗ & ✗ & ✗ & ✗ & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & ✗ \\ Field of View (RGB) & C & C & C & C & C & LCR & C & C & LCR & C & LCR \\ Object Tracking & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ \\ Reasoning & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & ✓ & ✗ & ✓ & ✓ \\ Free Form Captions & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \end{tabular}
\end{table}
Table 1: Comparison of our proposed dataset with other datasets.
Figure 1: **Overview of Rank2Tell.** Rank2Tell is an ego-centric dataset with three camera images and point cloud features for visual scene understanding in complex urban traffic scenarios. For each scenario, five annotators are asked to identify important objects in the scene with three importance levels: high, medium, and low. They also compose natural language descriptions to explain their reasoning behind the importance ranking of each important object in the scene, which leads to diverse annotations of explanations. In this exemplar scenario, the important objects are indicated by bounding boxes. The importance and natural language annotations of the traffic light and the crossing pedestrian are shown as illustrative examples.
with importance, for which they need to watch the entire clip which is not a realistic setting in the real world because the driver does not have access to future information. Additionally, while [30] filters raw videos based on the activation of vehicle braking using CAN information, we manually filter data, which is more accurate. The annotators then watch the video and are asked to imagine themselves as the driver and determine if there are important agents present in the scene that may affect their driving.
**Important Agent Localization and Ranking.** The annotators are instructed to localize each agent in the scene that is important to the ego vehicle by creating a bounding box. At a high level, we asked the annotators to identify agents to which the ego vehicle should be attentive for safe driving. Once an object is marked as important, annotators draw a 2D bounding box around that agent and rank its importance level according to the relevance they would have given the object in real driving. In this work, we use three levels of importance: _Low_, _Medium_, _High_.
The purpose is to handle ambiguity which is hard to avoid in case of binary categorization, i.e. important or non-important [24], and also to reduce confusion and guesswork which is inevitable in case of a continuous ranking score. Inspired by [31, 42], we posit that two levels of importance (i.e., important and nonimportant) could be overly restrictive and inadequate for handling ambiguous cases. Moreover, [42] shows that multiple levels could aid drivers' situational awareness in real time, with the minimal distraction of multiple important objects. Due to the subjectivity of the task, we used five annotators to label each scene to reduce ambiguity and reach some level of consensus. Since different annotators may perceive importance differently, there are cases where the same agent has different levels of importance or is not considered important at all. We show the consistency analysis in Table 2.
Figure 2: The annotation schema of the Rank2Tell dataset.
**Captioning.** Our proposed dataset emphasizes explainability as another aspect. In addition to identifying and ranking important agents, we aim to provide an explanation for why these objects are deemed significant. Therefore, after annotators identify and localize the important objects, we request them to annotate certain object-level attributes and utilize them to elaborate on why they regard the object as important via a free-form caption. Specifically, as demonstrated in Fig. 2, we request them to annotate the following:
* _What_: What class (type and importance level) does the important agent belong to?
* _Which_: Which visual and motion attribute belongs to the agent?
* _Where_: Where is the agent (location + direction)?
* _How_: How does the ego car respond to the agent?
* _Why_: Why is the agent of High/Medium/Low level of importance?
The first four questions require single-choice answers from a pre-defined set of options, while the last question is an open-ended caption that combines the answers to the previous four questions (what, which, where, how). This allows annotators to use free-form captions while incorporating all essential information captured in the 3W+1H format.
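Purely as an illustration of how the 3W+1H answers can be woven into a caption (the option strings below are hypothetical placeholders, not the dataset's actual annotation vocabulary):

```python
def compose_caption(what, which, where, how, importance):
    """Toy template only: the dataset's captions are free-form, but they are
    expected to combine the what/which/where/how answers, as sketched here."""
    return (f"The {which} {what} {where} is of {importance} importance "
            f"because the ego car must {how}.")

print(compose_caption(
    what="pedestrian", which="walking",
    where="crossing in front of the ego lane",
    how="yield until they finish crossing", importance="high"))
```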
A unique benefit of having multiple captions for a single object is the ability to evaluate caption diversity. Although caption diversity has been explored in image datasets in previous works [10, 36, 41], it has rarely been investigated in a video setting, particularly in traffic scenes. These earlier studies concentrated on measuring semantic diversity by various concepts in the same image. In contrast, we focus on diversity based on how humans perceive importance. Due to the subjective nature of this task, it is crucial to assess how different individuals explain and perceive importance. As far as we know, our dataset is the first to provide _diverse_ captions for _multiple_ objects in traffic scenes.
### Features and Applications of Rank2Tell
**Scene Graphs.** Our proposed dataset introduces several attributes that offer significant potential for generating informative scene graphs by leveraging spatial (i.e., which and where), temporal (i.e., tracking with 2D+3D point cloud features), and semantic (i.e., what and how) features. These relational attributes are captured by scene graphs, using comprehensive scene information recorded by road users. They are valuable for performing downstream tasks.
**Situational Awareness.** Enhancing situational awareness is critical for safe and efficient navigation in complex traffic scenarios. In a recent study on A-SASS, Wu et al. [42] demonstrated the effectiveness of highlighting important agents in the scene to improve drivers' situational awareness. Motivated by this study, our proposed dataset can potentially be used to identify important agents in the scene and develop an adaptive user interface for improving the driver's situational awareness in real time. Additionally, our dataset offers unique attributes such as _attention_ and _communicative aspects_ which can be beneficial in Advanced Driver Assistance Systems (ADAS) applications.
**Interpretable Models.** Human-interpretable models are important for safety-critical applications like Autonomous Driving or Advanced Driver Assistance Systems (ADAS). The comprehensive attributes of _Rank2Tell_ can be used to evaluate a model's interpretability and to address several tasks towards providing explanations of the driving risks associated with important agents. Some of these tasks are a) _important agent localization and tracking_, b) _importance level ranking_, c) _caption generation_, and d) _diverse caption generation_. Our dataset also enables joint handling of these tasks, which is another unique aspect. We discuss details of these tasks in the supplementary materials.
In this paper, we benchmark the performance on two of these tasks: important agents ranking and captions generation, and also provide a model that jointly addresses them, as discussed in Section 5.
### Dataset Analysis
#### 3.3.1 Dataset Statistics
Figure 3 shows the distribution of labels obtained using video-level question answering in Rank2Tell. Figure 3(a) shows the distribution of agent types with their importance levels in the scene. This is answered using the _what_ in the question answering. Since the dataset comprises all scenarios focused on four-way intersections, the majority of the infrastructure elements, such as traffic lights (3048) and stop signs
Figure 3: Statistical Analysis of Rank2Tell Dataset
(668) are of high importance and there is consensus among annotators. Figure 2(b) demonstrates the distribution of location level-1 of various important levels objects given the intention of the ego car. It shows that the majority of agents situated on the left and right of the ego car are of high importance when the ego car's intention is left and right, respectively. However, when the ego car's intention is to go straight, the agents' location is equally distributed in three lanes- left, right, and ego lane. This makes the task of importance ranking classification nontrivial and difficult to estimate by only using the ego car's intention as a feature.
To describe the visual attributes of significant objects, annotators usually provide free-form responses that integrate information about the object's what, where, how, and which. In Figure 4, we illustrate the distribution of the top 30 words utilized in captioning the "why" question. This demonstrates that annotators effectively conveyed the intention, motion direction, and location of important agents while generating natural language captions.
#### 3.3.2 Consistency Analysis
We conduct an inter-annotator consistency analysis among five annotators, based on the mode of the importance levels they selected, as presented in Table 2. In cases with multiple modes, we set the highest importance level as the final importance to obtain these consistency scores. A \(40\%\), \(60\%\), \(80\%\), and \(100\%\) consistency implies that 2, 3, 4, and 5 annotators provided the same importance level for an object. Results show that for data instances with _High_ as the majority importance, \(88.23\%\) of data samples exhibit at least \(60\%\) consistency. Similarly, for data instances with _Not-Important_ as the majority vote, \(98.61\%\) of data samples have at least \(60\%\) consistency. This indicates that annotators highly agreed on objects selected as _High_ and _Not-Important_ importance based on the mode. To assess the quality of the dataset, we compute the intra-class correlation (ICC) [37], which is widely used [2, 35] for assessing the consistency of different observers measuring the same quantity. The ICC for our annotations is 0.92, which shows excellent inter-rater agreement [9]. However, we use the mode of only the importance-level classes to obtain the ground-truth importance level of an object for various tasks, such as object importance level classification (Section 5.1). That is, if 2 out of 5 annotators deem an agent as important, we use the mode of these two importance levels as the ground truth instead of the mode of all 5. This method aims to reduce falsely underestimating an agent's importance. Please refer to the supplementary materials for more details.
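To make the aggregation rule concrete, the snippet below sketches one possible reading of it in Python; the label encoding, the tie-break toward the higher importance level, and the generalization to any number of important votes are our assumptions rather than the exact annotation pipeline.

```python
from collections import Counter

# Importance levels ordered from least to most important (encoding is an assumption).
LEVELS = ["not_important", "low", "medium", "high"]

def mode_highest(labels):
    """Mode of the labels; ties are broken toward the higher importance level."""
    counts = Counter(labels)
    best = max(counts.values())
    tied = [lvl for lvl, c in counts.items() if c == best]
    return max(tied, key=LEVELS.index)

def consistency(annotations):
    """Percentage of annotators agreeing with the mode (e.g., 3 of 5 -> 60%)."""
    m = mode_highest(annotations)
    return 100.0 * sum(a == m for a in annotations) / len(annotations)

def ground_truth(annotations):
    """Ground-truth importance that avoids underestimation: if some annotators
    deem the object important, take the mode of the important votes only."""
    important = [a for a in annotations if a != "not_important"]
    return mode_highest(important) if important else "not_important"

votes = ["not_important", "not_important", "not_important", "medium", "high"]
print(consistency(votes), ground_truth(votes))  # 60.0 high
```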
## 4 Methodology
We introduce a model to jointly address objects' importance classification and caption generation. The architecture of the model is illustrated in Figure 5 and leverages multiple modalities (i.e., 2D and 3D features). It consists of five main components: a) a 2D deep feature extraction module that extracts object-level features from frontal-view visual observations and ego vehicle state information, b) a 3D deep feature extractor that extracts object-level features from 3D point cloud observations, c) a relational graph module that learns the enhanced object (node) features and relational (edge) features, d) an importance classification module that takes in the object and relational features to predict the importance level of each object in the scene, and e) a context-aware attention captioning module that generates descriptions from the object features and relational features.
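As a rough illustration of how these five components fit together, here is a minimal PyTorch-style skeleton; the module choices, feature dimensions, and the simplified GRU decoder are placeholders of ours, not the actual implementation.

```python
import torch
import torch.nn as nn

class JointImportanceCaptioner(nn.Module):
    """Skeleton: 2D/3D extractors -> relational graph -> importance head + caption decoder."""
    def __init__(self, d2d=2048, d3d=1024, d=512, n_levels=4, vocab=10000):
        super().__init__()
        self.enc2d = nn.Linear(d2d, d)            # stand-in for the 2D deep feature extractor
        self.enc3d = nn.Linear(d3d, d)            # stand-in for the PointNet++/VoteNet branch
        self.graph = nn.Linear(2 * d, d)          # stand-in for the relational graph module
        self.importance = nn.Linear(d, n_levels)  # importance classification head
        self.decoder = nn.GRU(d, d, batch_first=True)  # simplified captioning decoder
        self.vocab_head = nn.Linear(d, vocab)

    def forward(self, feats2d, feats3d, max_len=20):
        # feats2d: (N_obj, d2d), feats3d: (N_obj, d3d) per-object features
        node = torch.relu(self.graph(torch.cat([self.enc2d(feats2d),
                                                self.enc3d(feats3d)], dim=-1)))
        importance_logits = self.importance(node)               # (N_obj, n_levels)
        h, _ = self.decoder(node.unsqueeze(1).repeat(1, max_len, 1))
        caption_logits = self.vocab_head(h)                     # (N_obj, max_len, vocab)
        return importance_logits, caption_logits
```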
### Feature Extractor
We adopt a 2D deep feature extractor from [24]. The deep feature extractor uses a sequence of RGB images, depth images, semantic maps, the ego vehicle's inertial data, and a 2D bounding box of each object in the scene.
\begin{table}
\begin{tabular}{c|c|c|c|c} Consistency & Not Important & Low & Medium & High \\ \hline
40 & 1.37 & 38.33 & 33.77 & 11.75 \\
60 & 4.52 & 51.54 & 42.39 & 30.93 \\
80 & 7.86 & 9.46 & 17.79 & 28.95 \\
100 & 86.23 & 0.65 & 6.03 & 28.35 \\ \hline \(\geq\)60 & 98.61 & 61.65 & 66.21 & 88.23 \\ \end{tabular}
\end{table}
Table 2: Data (%) for different inter-annotator consistency (%) of agent’s importance annotation based on the mode of all 5 annotators’ importance levels.
Figure 4: Statistical analysis of Rank2Tell dataset.
The depth images are obtained by projecting the point cloud to the stitched camera view, and the segmentation maps are obtained using DeepLabv3 on the stitched RGB images. To extract the 3D features of each object, we adapt the PointNet++ [33] backbone and the voting module in VoteNet [13] to aggregate all object candidates in a scene into individual clusters. To capture the mutual influence and relations among objects, we use a graph-based approach that models objects as nodes and their relations as edges in a relational graph module. The module takes in the concatenated 2D and 3D object features and extracts both object features and relational features between objects. To model the relational (edge) features, the module considers only the \(K\) nearest objects surrounding the target object to limit the computational complexity. The final object features are obtained by concatenating the graph node features with the object relations, global features, ego features, and ego intentions. These are then fed into the importance classifier and the captioning decoder, respectively. More details can be found in the supplementary materials.
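The restriction of relational (edge) features to the \(K\) nearest objects can be sketched as follows; the object centroids and the value of \(K\) are placeholders, and the actual module additionally uses appearance, ego, and global features.

```python
import torch

def k_nearest_edges(centroids: torch.Tensor, k: int = 5) -> torch.Tensor:
    """centroids: (N, 3) object positions. Returns a (2, N*k) edge index connecting
    each object to its k nearest neighbours (self-connections excluded)."""
    n = centroids.shape[0]
    k = min(k, n - 1)
    dist = torch.cdist(centroids, centroids)          # (N, N) pairwise distances
    dist.fill_diagonal_(float("inf"))                 # never pick the object itself
    neighbours = dist.topk(k, largest=False).indices  # (N, k) nearest neighbours
    src = torch.arange(n).repeat_interleave(k)
    return torch.stack([src, neighbours.reshape(-1)], dim=0)
```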
### Training Objective
The loss function for our joint model consists of an importance classification loss and a caption generation loss. We apply a conventional cross-entropy loss to the generated token probabilities, as in previous works [24, 8], and use a weighted sum of both loss terms as the final loss:
\[\mathcal{L}=\alpha\mathcal{L}_{\text{imp}}+\beta\mathcal{L}_{\text{cap}}, \tag{1}\]
where \(\alpha\) and \(\beta\) are the weights for the loss terms. Further, to discourage the model from falsely underestimating an agent's importance, we penalize \(\mathcal{L}_{\text{imp}}\) whenever the prediction (P) underestimates the ground truth (GT) across the importance levels high (H), medium (M), low (L), and not-important (NI), as follows:
\[\mathcal{L}_{\text{imp}}=\sum_{i=1}^{N}\mathcal{L}_{i}, \tag{2}\]
\[\mathcal{L}_{i}=\begin{cases}\lambda_{k}\,\mathcal{L}_{i},&\text{if }(GT-P)=k>0,\\ \mathcal{L}_{i},&\text{otherwise},\end{cases} \tag{3}\]
where \(\mathcal{L}_{i}\) is the cross-entropy loss for each object \(i\).
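A minimal sketch of Equations (1)–(3), assuming integer-encoded importance levels (0 = NI, 1 = L, 2 = M, 3 = H); the penalty factors \(\lambda_{k}\) and the weights \(\alpha\), \(\beta\) are placeholders to be tuned.

```python
import torch
import torch.nn.functional as F

def importance_loss(logits, targets, lambdas=(1.0, 2.0, 4.0, 8.0)):
    """Per-object cross-entropy, up-weighted by lambda_k when the prediction
    underestimates the ground truth by k = GT - P > 0 (Eqs. 2-3)."""
    ce = F.cross_entropy(logits, targets, reduction="none")   # L_i for each object
    k = (targets - logits.argmax(dim=-1)).clamp(min=0)        # 0 when not underestimated
    weights = torch.tensor(lambdas, device=logits.device)[k]  # lambda_0 = 1 leaves L_i unchanged
    return (weights * ce).sum()

def joint_loss(imp_logits, imp_targets, cap_logits, cap_targets, alpha=1.0, beta=1.0):
    """Weighted sum of the importance and captioning losses (Eq. 1)."""
    l_imp = importance_loss(imp_logits, imp_targets)
    l_cap = F.cross_entropy(cap_logits.flatten(0, 1), cap_targets.flatten())
    return alpha * l_imp + beta * l_cap
```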
## 5 Experiments
We evaluate the performance of our model on the proposed Rank2Tell dataset by comparing it with various baselines on two tasks: importance level ranking (classification) and natural language captions generation. For fair comparisons, we use the same backbone modules and hyperparameters across different baselines. More details can be found in the supplementary materials.
### Importance Level Classification: Baselines
All the importance classification baselines take 2D image features and object features as inputs and infer the importance level of objects in the scene.
**Goal-oriented object importance estimation (OIE) [16]:** OIE is a two-stage framework that firstly generates object tracklets from videos as object proposals and then classifies the proposals as important objects.
**Interaction graphs based object importance estimation (INTERACT) [50]:** INTERACT is a graph convolutional network based method that extracts appearance features from objects and represents them as nodes in the graph.
Figure 5: **Architecture of Joint Model. The framework consists of five components: a 2D feature extractor, a 3D feature extractor, a relational graph, an importance classifier, and a captioning decoder.**
\begin{table}
\begin{tabular}{c|c c c} Method & F1 (I) & F1 (NI) & Accuracy \\ \hline OIE [16] & 55.78 & 87.74 & 80.80 \\ INTERACT [50] & 56.42 & 88.02 & 81.21 \\ IOI [24] & 64.01 & 89.06 & 83.22 \\ \hline Ours & **78.44** & **92.97** & **89.39** \\ \end{tabular}
\end{table}
Table 3: Quantitative evaluation of F1 scores for two importance levels across baselines (I: important, NI: nonimportant).
These features are then updated by interacting with other nodes through graph convolution based on the learned interaction graph edges. The updated features are used to estimate object importance using a multi-layer perceptron.
**Important Object Identification (IOI) [24]:** IOI is a graph convolutional network based method that explicitly models both the appearance and motion of objects in the scene and also uses the ego vehicle state information in the current frame for importance classification. It reasons about the relations between objects in the scene.
### Captioning: Baselines
**Scan2Cap [8]:** Scan2Cap is an end-to-end method to perform dense captioning on 3D point clouds to densely detect and describe 3D objects in RGB-D scans. The method first employs a detection pipeline to obtain object proposals and then applies a relational graph and context-aware attention captioning module to learn object relations and generate tokens, respectively.
**Show and Tell (S\(\&\)T) [40]:** This baseline generates captions using 2D features (i.e., global RGB image features and target object features). The visual features are extracted using a ResNet-101 pre-trained on the ImageNet dataset. The global features are concatenated with the target object features, which are then used to generate the captions.
### Metrics
To measure the quality of the predicted importance ranking of each object in the scene and of the corresponding captions, we evaluate the baselines using standard classification metrics (F1 score and accuracy) for importance estimation, and standard captioning metrics: BLEU-4 (B4) [32], METEOR (M) [4], ROUGE (R) [28], and CIDEr (C) [39].
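For the classification side, the per-class F1 scores and accuracy reported in the tables can be computed with standard tooling; a minimal scikit-learn sketch, where the binary label encoding (1 = important, 0 = non-important) is our assumption:

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = [1, 0, 1, 1, 0, 0]   # ground-truth importance (toy example)
y_pred = [1, 0, 0, 1, 0, 1]   # predicted importance

f1_imp = f1_score(y_true, y_pred, pos_label=1)   # F1 (I)
f1_non = f1_score(y_true, y_pred, pos_label=0)   # F1 (NI)
acc = accuracy_score(y_true, y_pred)
print(f"F1(I)={f1_imp:.2f}  F1(NI)={f1_non:.2f}  Acc={acc:.2f}")
```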
### Results
#### 5.4.1 Quantitative Comparison
Table 3, Table 4, and Table 5 show the quantitative results of our model and baselines for the importance classification and caption prediction tasks. The importance baselines are trained and evaluated with two classes (Important and Non-Important) and four classes (High, Medium, Low, and Non-Important). For the two classes, high, medium, and low important objects are merged as one important class. The INTERACT [50] baseline outperforms OIE [16] as it models agent interactions in the scene using graph convolutional networks. This makes it more efficient in predicting the importance levels of multiple agents in the scene with fewer input features than OIE [16]. Furthermore, IOI [24] outperforms the previous two baselines as it utilizes the ego vehicle's speed, acceleration, and intention features. While an agent's importance level is highly influenced by the ego vehicle's intention (e.g., straight, left, right) at the intersection, the ego's speed and acceleration are influenced by the importance level of an agent. This interaction behavior is explicitly incorporated in IOI [24], which makes it outperform the other methods. Since there is currently no importance ranking estimation model that uses 3D features, we benchmark results on baselines using only 2D features.
For the captioning baselines, Scan2Cap [8] outperforms S\(\&\)T [40] because Scan2Cap uses 3D point clouds and 3D bounding box features, which are more informative than 2D features. The 3D point cloud provides information on the distance between agents and their heights, which captures the dependence between the agents' locations and the words in the captions, such as _left side_ or _right side_ of the ego car. Our proposed joint model leverages both 2D and 3D features and outperforms the uni-modal baselines on several metrics. Further, the joint training of importance classification and caption prediction lets the two tasks complement each other, thereby outperforming the baselines trained with a single task (more details in the supplementary material).
To demonstrate the usefulness of Rank2Tell annotations, we conduct an ablation study where we integrate action attributes (\(Which\)) with object features for all baselines, and the detailed results are shown in supplementary materials.
## 6 Conclusion
We present a novel multi-modal dataset (Rank2Tell) in urban traffic scenarios, which enables joint importance level prediction and reasoning in traffic scenes with frame-level annotations. The dataset includes object-level questions on important objects and language interpretations of the scene from the ego driver's perspective, integrating spatial, temporal, and relational features. This offers new avenues for improving visual scene comprehension and advancing autonomous systems' interpretability and trustworthiness.
\begin{table}
\begin{tabular}{c|c c c c c} Method & F1 (L) & F1 (M) & F1 (H) & F1 (NI) & Accuracy \\ \hline OIE [16] & 10.25 & 14.32 & 49.68 & 86.56 & 74.95 \\ INTERACT [50] & 14.04 & 27.56 & 44.04 & 87.45 & 75.09 \\ IOI [24] & 14.09 & 16.19 & 49.55 & 87.85 & 76.84 \\ \hline Ours & **20.49** & **28.94** & **58.84** & **92.62** & **80.93** \\ \end{tabular}
\end{table}
Table 4: Quantitative evaluation comparing the F1 scores for 4 importance levels across baselines. L:LOW, M:MEDIUM, H:HIGH, NI:NON-IMPORTANT
\begin{table}
\begin{tabular}{c|c c c c} Method & C & B-4 & M & R \\ \hline S\(\&\)T [40] & 47.67 & 30.24 & 34.91 & 53.31 \\ Scan2Cap [8] & 56.32 & **49.59** & **38.36** & 66.35 \\ \hline Ours & **100.15** & 45.83 & 36.21 & **68.56** \\ \end{tabular}
\end{table}
Table 5: Quantitative evaluation comparing the performance of baselines for captions predictions. C: CIDER, B-4: Bleu-4, M: Meteor, R: Rouge |
2309.07689 | Detecting ChatGPT: A Survey of the State of Detecting ChatGPT-Generated
Text | While recent advancements in the capabilities and widespread accessibility of
generative language models, such as ChatGPT (OpenAI, 2022), have brought about
various benefits by generating fluent human-like text, the task of
distinguishing between human- and large language model (LLM) generated text has
emerged as a crucial problem. These models can potentially deceive by
generating artificial text that appears to be human-generated. This issue is
particularly significant in domains such as law, education, and science, where
ensuring the integrity of text is of the utmost importance. This survey
provides an overview of the current approaches employed to differentiate
between texts generated by humans and ChatGPT. We present an account of the
different datasets constructed for detecting ChatGPT-generated text, the
various methods utilized, what qualitative analyses into the characteristics of
human versus ChatGPT-generated text have been performed, and finally, summarize
our findings into general insights | Mahdi Dhaini, Wessel Poelman, Ege Erdogan | 2023-09-14T13:05:20Z | http://arxiv.org/abs/2309.07689v1 | # Detecting ChatGPT: A Survey of the State of Detecting
###### Abstract
While recent advancements in the capabilities and widespread accessibility of generative language models, such as ChatGPT (OpenAI, 2022), have brought about various benefits by generating fluent human-like text, the task of distinguishing between human- and large language model (LLM) generated text has emerged as a crucial problem. These models can potentially deceive by generating artificial text that appears to be human-generated. This issue is particularly significant in domains such as law, education, and science, where ensuring the integrity of text is of the utmost importance. This survey provides an overview of the current approaches employed to differentiate between texts generated by humans and ChatGPT. We present an account of the different datasets constructed for detecting ChatGPT-generated text, the various methods utilized, what qualitative analyses into the characteristics of human versus ChatGPT-generated text have been performed, and finally, summarize our findings into general insights.
## 1 Introduction
LLMs have been showing remarkable abilities in generating fluent, grammatical, and convincing text. The introduction of ChatGPT (OpenAI, 2022) has been widely regarded as a significant and controversial milestone for LLMs. Models such as GPT-3 (Brown et al., 2020) and PaLM (Chowdhery et al., 2022) already demonstrated the power of LLMs in many natural language processing (NLP) tasks. ChatGPT is the first model that has seen widespread adoption outside NLP research.
The increased performance of LLMs raises important questions regarding their potential societal impact. The risks of LLMs are numerous, from confidently presenting false information to generating fake news on a large scale (Sheng et al., 2021; Weidinger et al., 2022). ChatGPT is no exception in this regard (Zhuo et al., 2023).
Instances of the misuse of ChatGPT have already been documented in various domains, including education (Cotton et al., 2023), scientific writing (Gao et al., 2022), and the medical field (Anderson et al., 2023). Given this context, the detection of machine-generated text is gaining considerable attention. This detection is part of a larger push towards responsible and appropriate usage of generative language models (Kumar et al., 2023).
In addition to academic interest, a growing number of commercial parties are trying to solve this task. Recent work from Pegoraro et al. (2023) gives an overview of commercial and freely available online tools. They come close to the current work. However, we limit our scope to academic work and provide additional background information on methods, datasets, and qualitative insights.
Many approaches, datasets and shared tasks1 have been put forth recently to tackle the _general_ (i.e., not specific for ChatGPT) task of detecting machine-generated text (Jawahar et al., 2020). Given the enormous use and cultural impact of ChatGPT, we limit our review to datasets and methods developed directly for ChatGPT. We discuss these methods in the context of the controversial position ChatGPT is in, namely that it is a closed-source system with very little information available regarding its training setup or model architecture at the time of writing. We outline what general methods exists for this task and review recent work that directly focuses on datasets and methods for ChatGPT.
Footnote 1: For instance, AuTexTification or CLIN33 Shared Task.
Given the peculiar place ChatGPT is in, we also consolidate qualitative insights and findings from the works we discuss that might help humans to detect ChatGPT-generated text. These include linguistic features or writing styles to look out for. Lastly, we present outstanding challenges for this detection task and possible future directions.
## 2 Related Work on Detecting Machine-Generated Text
LLMs have become a driving force in many language processing-related benchmarks and tasks (Radford et al., 2018; Brown et al., 2020; Chowdhery et al., 2022). LLMs can solve complex NLP tasks and generate convincing and trustworthy-looking text. However, they are also prone to generating false and misleading information, generally referred to as _hallucinating_(Lin et al., 2022). Additionally, misuse of these models can pose significant risks in academia, journalism, and many other areas. Currently, human judges are decent at spotting machine-generated text from older LLMs such as GPT-2 (Ippolito et al., 2020; Dugan et al., 2020, 2023). Still, the increasing abilities of LLMs give rise to the need for more sophisticated detection tools and models.
A recent survey by Crothers et al. (2023) provides a thorough overview of risks, approaches, and detection methods. They discuss interesting aspects such as the effect of domains on the detection task, adversarial attacks, and societal impacts of generated texts. Work done by Jawahar et al. (2020) inspects the field of machine-generated text detection. It outlines three main detection methods: a classifier trained from scratch, zero-shot detection by a language model, and a fine-tuned language model as a classifier. Recently, detection methods from computer vision have also been tried on language models, such as watermarking (Kirchenbauer et al., 2023, 2023) or trying to find model-identifying artifacts in generated content (Tay et al., 2020). To use and evaluate these methods, fine-grained access to the source model is required in training and inference time. Both these preconditions are not the case with ChatGPT at the time of writing.
When discussing detection methods, an essential factor to consider is access to the log probability output of a model. This is the probability distribution over the vocabulary of a model for the next token to be generated. Numerous successful detection methods evaluate the average log probability per token combined with a threshold in a zero-shot setting (Gehrmann et al., 2019; Ippolito et al., 2020; Mitchell et al., 2023). This method is model agnostic and generally performs quite well. At the time of writing, users of ChatGPT do not have access to these probabilities. Without this access or knowledge about model internals, detection methods are limited to using just the generated text in a binary classification setting, with the options being _human_ or _machine_. These methods use simple classifiers trained on n-grams (Solaiman et al., 2019; Ippolito et al., 2020) or fine-tuned pre-trained language models (Uchendu et al., 2020; Ippolito et al., 2020; Zellers et al., 2020).
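To make the zero-shot, log-probability-based idea concrete, here is a rough sketch of a GPT-2 perplexity detector using the Hugging Face `transformers` library; the decision threshold is an arbitrary placeholder that would have to be calibrated on held-out data, and, as noted above, the same recipe cannot be applied to ChatGPT itself because its token probabilities are not exposed.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def avg_log_prob(text: str) -> float:
    """Average log-probability per token under GPT-2 (the negative LM loss)."""
    ids = tokenizer(text, return_tensors="pt", truncation=True).input_ids
    return -model(ids, labels=ids).loss.item()

def looks_machine_generated(text: str, threshold: float = -3.0) -> bool:
    """Flag text whose tokens are, on average, unusually probable under the LM
    (threshold is illustrative only)."""
    return avg_log_prob(text) > threshold
```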
Another group of detection tools we want to mention are the _human-machine collaboration systems_, as Jawahar et al. (2020) labels them. These tools do not necessarily classify a passage directly but assist a human in making that decision. The previously mentioned work by Gehrmann et al. (2019) visualizes tokens in different colors, depending on where a given token ends up in the top-\(k\) most probable tokens from the model. This can also assist a human judge in spotting which part of a larger text might be machine-generated, such as possibly rephrased or copied sections for example. As mentioned, this method requires access to output probabilities, so it is not usable for ChatGPT. Another tool to help humans in the detection task is to outline the linguistic properties and characteristics of machine-generated text. This was one of the main goals of the _Real or Fake Text?_ (RoFT) game created by Dugan et al. (2020, 2023). This game asked players to decide if a machine partially wrote a piece of text, and if yes, where the transition point from human to machine is in the text. This resulted in a considerable dataset of annotations and indicators humans look for in detecting machine-generated text.
Another area of research that might help humans to make this decision is explainable AI. As we will see, some papers we discuss use explainability methods, such as SHAP (Lundberg and Lee, 2017), in their approaches. These methods help to better understand how detectors make their predictions. Such methods can help provide insights on the input features that most contribute to a prediction, which, in turn, can facilitate analyses of the differences between human and ChatGPT writing styles.
As far as we know, the previously mentioned work by Crothers et al. (2023) and Jawahar et al. (2020) come closest to ours. They discuss detection methods and datasets but not ChatGPT. The work from Pegoraro et al. (2023) does mention ChatGPT, among other models, but focuses mainly on online
detection tools.
Our contributions are the following:
* We provide an overview of _general_ approaches to machine-generated text detection.
* We outline research that specifically addresses the detection of ChatGPT-generated text and how this relates to the general approaches.
* We show the datasets that are created and used for this detection task.
* We summarize the qualitative analyses that these recent works provide and try to give general insights.
## 3 Review of Approaches for Detecting ChatGPT-Generated Text
### Datasets
Table 1 shows datasets that can be used to perform analyses or train models to distinguish between human and ChatGPT written text. We describe how they were collected and provide further information on their domains and public availability.
#### 3.1.1 Guo et al. 2023 (HC3)
Available in both Chinese and English, the Human ChatGPT Comparison Corpus (HC3) contains question-answer pairs collected from different datasets such as OpenQA (Yang et al., 2015) and Reddit ELI5 (Fan et al., 2019). These questions are then given to ChatGPT with context-sensitive prompts (e.g., asking ChatGPT to answer _like I am five_ for the Reddit ELI5 dataset) so that each question has one human-generated and one ChatGPT-generated answer.
#### 3.1.2 Yu et al. 2023 (Cheat)
The ChatGPT-written Abstract (CHEAT) dataset contains human- and ChatGPT-generated title-abstract pairs for computer science papers, with the titles and human-written abstracts fetched from IEEE Xplore. Artificial abstracts are generated in three ways:
* _Generate_: ChatGPT is directly prompted to write an abstract given the title and keywords.
* _Polish_: ChatGPT is given human-written abstracts and is told to "polish" them.
* _Mix_: Text from human-written and polished abstracts are mixed at the sentence level.
The CHEAT dataset also covers adversarial scenarios as the _Polish_ and _Mix_ methods correspond to methods a malicious user might try to evade detection.
#### 3.1.3 He et al. 2023 (MGTBench)
The Machine Generated Text Benchmark (MGTBench) uses three question-answering datasets: TruthfulQA (Lin et al., 2022), SQuaD1 (Rajpurkar et al., 2016), and NarrativeQA (Kocisky et al., 2018). Questions are randomly sampled from each dataset, and ChatGPT is prompted to answer them with the appropriate context (e.g., with a relevant passage and instructions for NarrativeQA).
Although our primary focus is ChatGPT, MGTBench contains text generated by different language models and thus can be used to benchmark detection methods across models.
#### 3.1.4 Liu et al. 2023 (ArguGPT)
The ArguGPT dataset contains prompts and responses from various English learning corpora, such as WECCL (Zhi-jia, 2008), TOEFL11 (Blanchard et al., 2013), and hand-picked from graduate record examinations (GRE) preparation material. The texts are from essay writing assignments about a given topic or standpoint. GPT models are prompted to write responses, but their output is processed for grammatical errors and to remove obvious signs of ChatGPT-generated text (e.g., "As a large language model...").
#### 3.1.5 Vasilatos et al. 2023
The dataset used in Vasilatos et al. (2023) for detection builds on Ibrahim et al. (2023), a dataset of questions with metadata and student answers from various university courses. ChatGPT is directly prompted with the questions three times to obtain three human and ChatGPT answers for each question.
#### 3.1.6 Mitrovic et al. 2023
Attempting to build a classifier to detect ChatGPT-generated restaurant reviews, Mitrovic et al. (2023) build on the Kaggle restaurant reviews dataset2 and prompt ChatGPT to generate reviews of various kinds (e.g., "write a review for a bad restaurant"). Additionally, ChatGPT is prompted to rephrase the human-written reviews to create an adversarial set.
#### 3.1.7 Weng et al. 2023
Weng et al. (2023) expand on Narechania et al. (2022)'s dataset of title-abstract pairs fetched from top data visualization venues by prompting ChatGPT to write abstracts given the titles. Compared to another dataset of title-abstract pairs, CHEAT (Yu et al., 2023), Weng et al. (2023)'s dataset contains more examples but lacks the adversarial samples included in CHEAT.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Dataset (name)** & **Domain** & **Public** & **OOD** & **Size and Setup** \\ \hline \multirow{4}{*}{Guo et al. 2023 (HC3-English)} & \multirow{4}{*}{Multi-domain} & \multirow{4}{*}{✓} & \multirow{4}{*}{\(\times\)} & \(Q\&A\) \\ & & & & & Questions: 24,322 \\ & & & & & Human-A: 58,546 \\ & & & & & ChatGPT-A: 26,903 \\ \hline \multirow{4}{*}{Guo et al. 2023 (HC3-Chinese)} & \multirow{4}{*}{Multi-domain} & \multirow{4}{*}{✓} & \multirow{4}{*}{\(\times\)} & \(Q\&A\) \\ & & & & & Questions: 12,853 \\ & & & & & Human-A: 22,259 \\ & & & & & ChatGPT-A: 17,522 \\ \hline \multirow{4}{*}{Yu et al. 2023 (CHEAT)} & \multirow{4}{*}{Scientific} & \multirow{4}{*}{\(\times\)} & \multirow{4}{*}{✓} & \(Abstracts\) \\ & & & & & Human: 15,395 \\ & & & & & ChatGPT: 35,304 \\ \hline \multirow{4}{*}{He et al. 2023 (MGTBench)} & \multirow{4}{*}{General} & \multirow{4}{*}{✓} & \multirow{4}{*}{\(\times\)} & \(Q\&A\) \\ & & & & Human: 2,817 \\ & & & & ChatGPT: 2,817 \\ \hline \multirow{4}{*}{Liu et al. 2023 (ArguGPT)} & \multirow{4}{*}{Education} & \multirow{4}{*}{✓} & \multirow{4}{*}{\(\times\)} & \(Essays\) \\ & & & & Human: 4,115 \\ & & & & ChatGPT: 4,038 \\ \hline \multirow{4}{*}{Vasilatos et al. 2023} & \multirow{4}{*}{Education} & \multirow{4}{*}{Human*} & \(Q\&A\) \\ & & & & Questions: 320 \\ & & & & Human-A: 960 \\ & & & & ChatGPT-A: 960 \\ \hline \multirow{4}{*}{Mitrović et al. 2023} & \multirow{4}{*}{General} & \multirow{4}{*}{Human*} & \multirow{4}{*}{✓} & \(Reviews\) \\ & & & & Human: 1,000 \\ & & & & ChatGPT-query: 395 \\ & & & & ChatGPT-rephrase: 1,000 \\ \hline \multirow{4}{*}{Weng et al. 2023} & \multirow{4}{*}{Scientific} & \multirow{4}{*}{Human} & \multirow{4}{*}{\(\times\)} & \(Tile\)_-Abstract pairs_ \\ & & & & Human: 59,232 \\ & & & & ChatGPT: 59,232 \\ \hline \multirow{4}{*}{Antoun et al. 2023a} & \multirow{4}{*}{General} & \multirow{4}{*}{✓} & \multirow{4}{*}{✓} & \(Q\&A\) \\ & & & & HC3-English \\ & & & & OOD-ChatGPT: 5,969 \\ \hline \multirow{4}{*}{Liao et al. 2023} & \multirow{4}{*}{Medical} & \multirow{4}{*}{Human} & \multirow{4}{*}{\(\times\)} & \(Abstracts\) \\ & & & & Human: 2,200 \\ \cline{1-1} & & & & ChatGPT: 2,200 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Datasets used in ChatGPT-generated text detection, with public availability information (if a dataset is available, it can be accessed by clicking on its _Public_ column entry). The _Human_ entry in the _Public_ column signals that only human-written text datasets are made public. The _OOD_ (out-of-domain) column signals if a dataset contains examples generated in a different way than the main part (e.g., rephrasing of human-written text). *Authors state it will be made available at a future date.
#### 3.1.8 Antoun et al. 2023a
Antoun et al. (2023a) extend HC3 (Guo et al., 2023) by translating its English part to French using Google Translate and add further French out-of-domain (OOD) examples to make models trained on this data more robust. The OOD dataset consists of direct French responses by ChatGPT and BingChat to translated questions from the HC3 dataset (as opposed to translating the answers as done originally), question-answer pairs from the French part of the multi-lingual QA dataset MFAQ (De Bruyn et al., 2021), and sentences from the French Treebank dataset (Le Monde corpus). Finally, the dataset also contains a small number of adversarial examples written by humans with access to ChatGPT to obtain a similar style to that of ChatGPT.
#### 3.1.9 Liao et al. 2023
Focusing on the medical domain, Liao et al. (2023) build on two public medical datasets: a set of medical abstracts from Kaggle3 and radiology reports from the MIMIC-III dataset (Johnson et al., 2016). ChatGPT is given parts of an example medical abstract or a radiology report for the machine-generated samples and is prompted to continue writing it. The authors state that text continuation can generate more human-like text compared to rephrasing or direct prompting.
Footnote 3: [https://www.kaggle.com/datasets/chaitanyakck/medical-text](https://www.kaggle.com/datasets/chaitanyakck/medical-text)
### Methods
In this section, we report on the various methods proposed for detecting ChatGPT-generated text. The scope of this review does not include the evaluation or comparison of the results obtained from these methods. This limitation primarily arises from the absence of a common experimental setup and the utilization of different datasets and metrics. Table 2 provides an overview of these recent approaches.
Some previous works have utilized transformer-based models to classify text generated by ChatGPT and human-written text, as demonstrated by Mitrovic et al. (2023). Their approach consists of two components: a detection model and a framework to explain the decisions made by this model. They first fine-tune an uncased version of DistilBERT Sanh et al. (2019) and then employ SHAP to provide local explanations in the form of feature importance scores, giving insights into how different input features contribute to the model's decisions. As a baseline comparison, they implement a perplexity-based classifier that categorizes text based on its perplexity score, where GPT-2 is used for calculating perplexity scores. Their results show that the DistilBERT-based detector outperforms the perplexity-based classifier. However, its performance decreases when considering the dataset rephrased by ChatGPT.
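A rough sketch of this kind of two-part setup is shown below, pairing a fine-tuned classifier with SHAP's text explainer; the checkpoint name is a placeholder for a DistilBERT model already fine-tuned on human-vs-ChatGPT labels, and the exact pipeline options may differ from the authors' code.

```python
import shap
from transformers import pipeline

# "my-finetuned-distilbert" is a placeholder for a checkpoint fine-tuned on
# labelled human / ChatGPT text.
clf = pipeline("text-classification", model="my-finetuned-distilbert",
               return_all_scores=True)

explainer = shap.Explainer(clf)   # SHAP wraps the text pipeline directly
texts = ["Overall, both interpretations are defensible given the available evidence."]
shap_values = explainer(texts)

# Per-token contributions toward the "ChatGPT" class act as local explanations
# of why a passage was flagged.
print(shap_values[0])
```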
In Liao et al. (2023), different models are proposed to detect medical text generated by ChatGPT: a fine-tuned BERT model Devlin et al. (2019), a model based on Classification and Regression Trees (CART), an XGBoost model Chen and Guestrin (2016) and a perplexity classifier that utilizes BioGPT Luo et al. (2022) for calculating text perplexity. Predictions by the BERT model are explained by visualizing the local features of the samples, where it can be seen that using conjuncts is an essential feature for the model classifying a medical text as machine-generated.
Liu et al. (2023) fine-tune RoBERTa to detect argumentative essays generated by different GPT models, including ChatGPT, and evaluate its performance on document, paragraph, and sentence-level classification. The essays are broken down into paragraphs and sentences for paragraph and sentence-level classification. They train and compare the performance of SVM models using different linguistic features. These models serve as a baseline to compare with the RoBERTa model and to understand which linguistic features differentiate between human and ChatGPT-generated text.
Guo et al. (2023) implement a machine learning and deep learning-based detector. They utilize a logistic regression model trained on the GLTR Test-2 dataset Gehrmann et al. (2019) and two deep classifiers based on fine-tuning the pre-trained transformer model RoBERTa. One deep classifier is designed explicitly for single-text detection, while the other is intended for QA detection. The authors construct various training and testing dataset versions to assess the models' robustness. They create full-text, sentence-level, and mixed subsets of the collected corpus. Each subset has both a raw version and a filtered version where prominent indicating words referring to humans (such as "Nope" and "Hmm") or ChatGPT words (such as "AI assistant") are removed. The evaluation of the models reveals that the RoBERTa-based models outperform GLTR in terms of performance and exhibit more robustness against interference. Moreover, the RoBERTa-based models are not influenced by indicating words.
Building upon the work of Guo et al. (2023), Antoun et al. (2023) propose an approach for developing robust detectors able to detect ChatGPT-generated text in different languages, with a focus on French. Their approach consists of fine-tuning pre-trained transformer-based models on English, French, and multilingual datasets. They train RoBERTa and ELECTRA Clark et al. (2020) models on the English dataset, CamemBERT Martin et al. (2020) and CamemBERTa Antoun et al. (2023) on the French datasets and XLM-R Conneau et al. (2020) on the combined English and French dataset. They evaluate the robustness of these models against adversarial attacks, such as replacing characters with homoglyphs and adding misspelled words. Considering in-domain text, their results show that French models perform well in detecting machine-generated text. Still, they were outperformed by the English models, while XLM-R provides the best and most resilient performance against adversarial attacks for both English and French. However, this performance decreases when evaluated on out-of-domain text.
Another method proposed for detecting ChatGPT-generated text is a metric-based approach proposed by Vasilatos et al. (2023) to detect machine-generated student assignments by calculating perplexity scores using GPT-2. They show that having category-wise thresholds (derived from dataset metadata) results in better detection performance than only having one threshold value.
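The category-wise thresholding can be sketched as follows; taking the midpoint between the mean perplexities of the two classes in each category is an illustrative heuristic of ours, not necessarily the calibration used by the authors, and it assumes both classes are present in every category.

```python
import numpy as np

def category_thresholds(perplexity, is_human, category):
    """Per-category perplexity thresholds below which text is flagged as machine-generated."""
    perplexity, is_human, category = map(np.asarray, (perplexity, is_human, category))
    thresholds = {}
    for cat in np.unique(category):
        p = perplexity[category == cat]
        y = is_human[category == cat]
        # ChatGPT answers tend to have lower perplexity than student answers.
        thresholds[cat] = 0.5 * (p[y == 0].mean() + p[y == 1].mean())
    return thresholds

def classify(p, cat, thresholds):
    return "machine" if p < thresholds[cat] else "human"
```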
### Analysis of Human and ChatGPT-Generated Text
The textual characteristics of ChatGPT-generated text as well as its syntactic and linguistic features, are of significant focus in the works we reviewed. These linguistic and stylistic features are compared to the human-written texts in the datasets. In this section, we summarize and provide an overview of the findings of such analyses for the different domains and datasets we reviewed.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Paper** & **Dataset** & **Approaches** & **Explainability** & **Code** \\ \hline \multirow{2}{*}{Mitrović et al. 2023} & Mitrović et al. 2023 &
\begin{tabular}{l} DistilBERT \\ PBC \\ \end{tabular} & SHAP & ✗ \\ \hline \multirow{3}{*}{Liao et al. 2023} & \multirow{3}{*}{Liao et al. 2023} & BERT & \multirow{3}{*}{transformer-interpret} & \multirow{3}{*}{\(\times\)} \\ & & PBC & & \\ \cline{1-1} & & XGBoost & & \\ \cline{1-1} & & CART & & \\ \hline \multirow{2}{*}{Liu et al. 2023} & \multirow{2}{*}{Liu et al. 2023 (ArguGPT)} & RoBERTa-large & \multirow{2}{*}{\(\times\)} & \multirow{2}{*}{\(\times\)} \\ & & SVM & & \\ \hline \multirow{3}{*}{Guo et al. 2023} & \multirow{3}{*}{Guo et al. 2023 (HC3)} & GLTR & \multirow{3}{*}{\(\times\)} \\ & & RoBERTa-single & & \\ \cline{1-1} & & RoBERTa-QA & & \\ \hline \multirow{6}{*}{Antoun et al. 2023a} & \multirow{6}{*}{Antoun et al. 2023a} & CamemBERTa & \multirow{6}{*}{\(\times\)} \\ & & CamemBERTa & & \\ \cline{1-1} & & RoBERTa & & \\ \cline{1-1} & & ELECTRA & & \\ \cline{1-1} & & XLM-R & & \\ \hline Vasilatos et al. 2023 & Ibrahim et al. 2023 & PBC & ✗ & ✗ \\ \hline \hline \end{tabular}
\end{table}
Table 2: Methods proposed in the literature for detecting ChatGPT-generated text. PBC: Perplexity-based classifier. Publicly available models can be accessed by clicking on the ✓ character. *Authors indicate it will be made available at a future date.
* **Medical domain:** Medical texts generated by ChatGPT have lower text perplexity and are more fluent, neutral, positive, and logical but more general in content and language style, while medical texts written by humans are more diverse and specific [14].
* **English argumentative essays:** ChatGPT produces syntactically more complex sentences than English language learners, but ChatGPT-authored essays tend to have lower lexical diversity [13].
* **Multi-domain question answering:** ChatGPT writes in an organized and neutral way, presents less bias and harmful information, and refuses to answer questions when it believes it does not know the answer. ChatGPT answers are formal, less emotional, and more objective than human answers [1].
* **Scientific abstracts:** ChatGPT has a better choice of vocabulary, can generate more unique words, uses more connecting words, and has fewer grammatical errors [23].
* **Language-agnostic characteristics:** The linguistic and syntactic characteristics of ChatGPT-generated text tend to be language-agnostic. Text generated in different languages, such as English, French, and Chinese, shows similar characteristics, with ChatGPT tending to produce didactic and impersonal text without errors. Errors such as grammatical, spelling, or punctuation mistakes can instead indicate human-written text [15, 1].
### General Insights
Based on trends and regular mentions we encountered during the creation of our review, we now report some general insights on the state of detecting ChatGPT-generated text.
**Role of explainable AI:** Explainability techniques such as SHAP are helpful with detection models. These techniques provide insights into the most important features and words that contribute to classification, thus allowing a better understanding of the writing styles of humans and ChatGPT. This is also valuable in debugging detectors as they can highlight the main words contributing to the misclassification and thus enable better analysis of such models.
**Humans versus ChatGPT in detection task:** Another insight is that humans are worse at detecting ChatGPT-generated text than ChatGPT itself. With additional training, humans would achieve better results.
**Robustness of detectors:** The robustness of detectors improves when they are trained on datasets that are extended to also include perturbed data, such as homoglyphs and misspellings. This might help the detectors focus more on writing style than on writing errors. When evaluated on out-of-domain texts, the performance of detectors tends to decrease, especially when adversarial text is included.
**Impact of text length on detection:** The shorter the text length, the more challenging and less reliable detection becomes. Models trained on datasets containing full text and question-answer subsets (including answer contexts) do not perform well when evaluated on short texts such as sentences or smaller QA subsets.
**Lack of special prompts in ChatGPT-generated text:** Some conclusions and analyses in the reviewed papers have been made based on considering text generated by ChatGPT using its most general style and state, i.e., without using any special prompts that could ask ChatGPT to pretend to be a certain writer or to write in a special style. This could be an interesting area of investigation for future work, where new datasets are constructed, and the robustness of detectors against this type of text is tested.
**Perplexity-based detectors:** Perplexity-based detectors depend on using open-source LLMs like GPT-2 and BioGPT to calculate perplexity scores. Since ChatGPT generates the target text, calculating these scores with ChatGPT itself could greatly benefit this task, as seen with other models using this method. However, this is not possible because ChatGPT is a closed-source model.
**Cost of constructing machine-generated datasets:** Constructing and utilizing large-scale ChatGPT-generated datasets is important for drawing more generalized and precise conclusions. Using ChatGPT's API is therefore essential for this purpose. However, the costs of doing so can be prohibitive.
**Multilinguality:** Our sample of papers has an English dominance, and performance for other languages is worse. Just as in NLP in general [1], we call for more work in this area. This could help explain why some detectors are less reliable in detecting machine-generated text when the text is translated into different languages.
## 4 Conclusion and Future Work
The impressive capabilities of ChatGPT in producing high-quality and convincing text have brought attention to the risks associated with its improper usage across different domains. Consequently, the reliable detection of ChatGPT-generated text has become an important task. To address this concern, numerous datasets and detection methods have been proposed. In this paper, we provided a concise overview of the diverse datasets created, proposed methods, and qualitative insights of comparing human-written text with text generated by ChatGPT.
We see a wide variety of approaches and datasets in the papers we discussed. On the one hand, this is good to see since many factors, such as the domain, language, or format, influence the detection task. On the other hand, we also see a big diversity in experimental and dataset setups. Some works use adversarial examples, and others do not. Some allow the rephrasing of human text by ChatGPT, while others use purely human versus machine-generated text. Some works include the prompts and ChatGPT versions they used to generate the data; others do not. These, among other differences, make comparisons difficult, which is one reason we do not include scores in this survey. This also highlights important future work, namely to test _methods across datasets_ and _datasets across methods_.
Another factor to consider is the domain of the text. The datasets we have discussed are in diverse domains and cover at least two important ones affected by ChatGPT's risks: health and education. One notable domain we did not encounter is (fake) news. Although this is a big NLP field on its own, we expected more attention for it in the context of ChatGPT. Future work can definitely help in this area. The format of the text is related to the domain and is another important factor to consider. For example, the shared tasks we mentioned provide tweets, news articles, or reviews as their formats. A systematic look at format and domain influence concerning ChatGPT could be valuable future work.
Multilinguality is another open problem. As with virtually all NLP tasks, we have seen that English is, unfortunately, the dominant language in the datasets. Experiments and gathering datasets across different languages are important future directions. The current task could also draw inspiration from the field of machine translation. It has a long and ongoing history of trying to detect (badly) translated text, so-called _translationese_Baroni and Bernardini (2006), which could be used or adapted to detect general machine-generated text.
Lastly, an important factor we have not seen discussed much is the temporal aspect of ChatGPT. Outputs might change over time, especially since it is a closed-source system. This calls for repeated tests over time to ensure detection methods are not regressing in their performance. Machine-generated text detection is also a cat-and-mouse game; since models are optimized to mimic human language, detection becomes harder and harder.
## 5 Limitations
A limitation of our work is that, due to the rapid pace of work in this area, the recent methods proposed for detecting ChatGPT-generated text are preprints published on arXiv. Additionally, we limit our scope to academic papers and exclude online non-academic tools, as we do not know how those tools were trained or how they work internally.
This is also a big problem when discussing ChatGPT in general. Since it is a closed-sourced system without detailed information about its training and dataset, it is impossible to know if the results are reproducible. Models can change at any moment in the background, models can be decommissioned, or the price of access can change drastically. We are well aware of and concerned about these developments, but given the significant opportunities and risks ChatGPT poses, we believe a survey like this one is valuable.
## Acknowledgements
We thank Florian Matthes and the Software Engineering for Business Information Systems (SEBIS) chair at TUM for their funding and support. We also thank the anonymous reviewers for their helpful and insightful comments.
|
2309.04380 | Impact of Galaxy Clusters on UHECR propagation | Galaxy clusters are the universe's largest objects in the universe kept
together by gravity. Most of their baryonic content is made of a magnetized
diffuse plasma. We investigate the impact of such magnetized environment on
ultra-high-energy-cosmic-ray (UHECR) propagation. The intracluster medium is
described according to the self-similar assumption, in which the gas density
and pressure profiles are fully determined by the cluster mass and redshift.
The magnetic field is scaled to the thermal components of the intracluster
medium under different assumptions. We model the propagation of UHECRs in the
intracluster medium using a modified version of the Monte Carlo code {\it
SimProp}, where hadronic processes and diffusion in the turbulent magnetic
field are implemented. We provide a universal parametrization that approximates
the UHECR fluxes escaping from the environment as a function of the most
relevant quantities, such as the mass of the cluster, the position of the
source with respect to the center of the cluster and the nature of the
accelerated particles. We show that galaxy clusters are an opaque environment
especially for UHECR nuclei. The role of the most massive nearby clusters in
the context of the emerging UHECR astronomy is finally discussed. | Antonio Condorelli, Jonathan Biteau, Remi Adam | 2023-09-08T15:22:30Z | http://arxiv.org/abs/2309.04380v1 | # Impact of Galaxy Clusters on Propagation of Ultra-High-Energy Cosmic Rays
###### Abstract
Galaxy clusters are the largest objects in the universe kept together by gravity. Most of their baryonic content is made of a magnetized diffuse plasma. We investigate the impact of such a magnetized environment on ultra-high-energy-cosmic-ray (UHECR) propagation. The intracluster medium is described according to the self-similar assumption, in which the gas density and pressure profiles are fully determined by the cluster mass and redshift. The magnetic field is scaled to the thermal components of the intracluster medium under different assumptions. We model the propagation of UHECRs in the intracluster medium using a modified version of the Monte Carlo code _SimProp_, where hadronic processes and diffusion in the turbulent magnetic field are implemented. We provide a universal parametrization that approximates the UHECR fluxes escaping from the environment as a function of the most relevant quantities, such as the mass of the cluster, the position of the source with respect to the center of the cluster and the nature of the accelerated particles. We show that galaxy clusters are an opaque environment, especially for UHECR nuclei. The role of the most massive nearby clusters in the context of the emerging UHECR astronomy is finally discussed.
cosmic ray astronomy (324) -- galaxy clusters (584) -- ultra high energy cosmic rays (1733)
## 1 Introduction
Even though their existence has been known for more than a century, the nature and origin of cosmic rays at the highest energies remain elusive. Observations have allowed us to explore their spectral behavior and composition in terms of atomic mass on Earth (Coleman et al., 2023), but the sources of ultra-high-energy cosmic rays (UHECRs), i.e. cosmic rays above \(10^{18}\) eV, still remain unknown.
Rapid progress in computational high-energy astrophysics has dramatically advanced the study of acceleration mechanisms in systems ranging from the jets of stellar-sized objects such as gamma-ray bursts (GRBs, Sudilovsky et al., 2013) to the large-scale shocks surrounding galaxy clusters (Norman et al., 1995; Ryu et al., 2003; Kang et al., 1997). Galaxy clusters are the largest virialized structures, having typical radii \(R_{\rm cl}=1-2\,{\rm Mpc}\) and total masses \(M\simeq 10^{14}-10^{15}M_{\odot}\), including both baryonic and dark matter. Strong turbulent magnetic fields, with root mean square values \(B\simeq\) few \(\mu\)G, are present inside clusters, having typical coherence lengths of 5-30 kpc (Bonafede et al., 2010; Donnert et al., 2018). This implies that cosmic rays accelerated in candidate sources inside the clusters, e.g. by hypernovae or GRBs in star-forming galaxies, or in the accretion shocks, jets and radiolobes of active galactic nuclei (AGNs), can be confined for long times within clusters. They can undergo interactions with the enhanced baryonic content of the intracluster medium, whose profile is determined by Bremsstrahlung emission in X rays (Sarazin, 1986).
Upper limits on the flux of neutrinos and gamma rays at ultra-high energies rule out a dominant origin of UHECRs from exotic particles (Abreu et al., 2023, 2022), which should then originate from extragalactic astrophysical sources. An extragalactic origin is corroborated by the observation of a dipolar anisotropy above 8 EeV (Aab et al., 2018) and an evidenced correlation of UHECRs above 40 EeV with extragalactic objects in the nearby universe (Aab et al., 2018; Abreu et al., 2022). Some of these extragalactic sources could be hosted or shadowed by clusters.
UHECR propagation in a specific cluster (e.g. the Virgo cluster) has been already treated in different works (Dolag et al., 2009; Kotera et al., 2009; Harari et al., 2016; Fang and Murase, 2018). Although some of these theoretical works suggested that galaxy clusters are efficient UHECR calorimeters, some authors recently claimed excesses of UHECRs from these structures (Ding et al., 2021; Abbasi et al., 2021). Revisiting the propagation of UHECRs in galaxy clusters is thus a timely topic. In the following, we evaluate whether UHECRs can escape from such environments and how clusters should be accounted for in UHECR astronomy. We provide in particular a single parametrization of the escaping flux, which depends on the mass of the cluster and on the UHECR features, such as energy and atomic mass.
The paper is organized as follows: we introduce the relevant properties of galaxy clusters and detail the way we compute the most important macroscopic quantities for our study in Section 2; the microphysics of UHECR propagation in such environments is detailed in Section 3; we present our results and discuss the impact of our assumptions in Section 4. We finally draw our conclusions in Section 5.
## 2 Intracluster Medium Modeling
Clusters of galaxies and the filaments that connect them are the largest structures in the present universe in which the gravitational force due to the matter overdensity overcomes the expansion of the universe. Massive clusters have typical total masses of the order of \(10^{15}\)\(M_{\odot}\), mostly in the form of dark matter (\(70-80\,\%\) of the total mass), while baryonic matter is harbored by galaxies (few %) and composes the hot (\(T\sim 10^{8}\,\mathrm{K}\)) and tenuous (\(n_{\mathrm{gas}}\approx 10^{-1}-10^{-4}\,\mathrm{cm}^{-3}\)) gas (\(15-20\,\%\)) that forms the intracluster medium (ICM, Voit, 2005). To model UHECR propagation in this environment, we need estimates of the gas density profile, of the magnetic field profile and of the coherence length. While the gas density is well understood and routinely derived from X-ray observations, this is not the case for the magnetic field, for which only a handful of measurements - sometimes model-dependent - are available in the literature (Vacca et al., 2018). From theoretical arguments, however, the magnetic-field strength is often assumed to scale with the ICM thermal density or pressure.
### Density profile
An interesting feature of galaxy clusters is that they are self-similar objects at first order, so that their physical properties can be fully described given their mass and redshift (Kaiser, 1986). For instance, their universal pressure profiles (UPP) and universal density profiles (UDP) are now well constrained from observations (e.g., Arnaud et al., 2010; Pratt et al., 2022). Following Arnaud et al. (2010), we use the UPP expressed as
\[P(x)=\frac{P_{0}\times P_{500}(M_{500},z)\times f(M_{500},z)}{(c_{500}x)^{\gamma_{\mathrm{UPP}}}\left(1+(c_{500}x)^{\alpha_{\mathrm{UPP}}}\right)^{\frac{\beta_{\mathrm{UPP}}-\gamma_{\mathrm{UPP}}}{\alpha_{\mathrm{UPP}}}}}, \tag{1}\]
with \(P_{500}(M_{500},z)\) the self-similar normalization (Nagai et al., 2007), \(f(M_{500},z)\) a small mass-dependence correction, and where \(P_{0}\), \(c_{500}\), \(\alpha_{\mathrm{UPP}}\), \(\beta_{\mathrm{UPP}}\), \(\gamma_{\mathrm{UPP}}\) are parameters that describe the shape of the profile as a function of the scaled radius \(x=r/R_{500}\).1
Footnote 1: The mass \(M_{500}\) is defined within \(R_{500}\), the radius within which the cluster density is 500 times the critical density of the universe at the cluster redshift.
Similarly, we use the UDP as measured by Pratt et al. (2022), which can be expressed as
\[n(x)=\frac{A(M_{500},z)\times f_{0}}{(x/x_{s})^{\alpha_{\mathrm{UDP}}}\left(1+(x/x_{s})^{\gamma_{\mathrm{UDP}}}\right)^{\frac{3\beta_{\mathrm{UDP}}-\alpha_{\mathrm{UDP}}}{\gamma_{\mathrm{UDP}}}}}\,. \tag{2}\]
The quantity \(A(M_{500},z)\) describes the normalization as a function of mass and redshift, and the parameters \(f_{0}\), \(x_{s}\), \(\alpha_{\mathrm{UDP}}\), \(\beta_{\mathrm{UDP}}\), \(\gamma_{\mathrm{UDP}}\) describe the shape (see also Ghirardini et al., 2019, for another calibration of the UDP).
The gas density and pressure profile are expected to be connected. This provides us with an alternative way to describe the ICM thermal density given the pressure profile. Assuming a polytropic relation between gas density and pressure, using a sample of massive nearby clusters, Ghirardini et al. (2019) measured
\[P(x)=C\times n(x)^{k}, \tag{3}\]
where \(k=1.19\) and \(C\) is a normalization constant.
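As an illustration, the minimal Python sketch below (assuming NumPy) evaluates the dimensionless shape of the universal pressure profile of Equation 1 and converts it to a density shape through the polytropic relation of Equation 3; the shape parameters and normalizations used here are illustrative placeholders, not the published best-fit values.

```python
import numpy as np

def gnfw_pressure_shape(x, P0=8.4, c500=1.18, gamma=0.31, alpha=1.05, beta=5.49):
    """Shape of the universal pressure profile, Eq. (1), without the
    P500(M500, z) and f(M500, z) normalization factors; x = r / R500.
    The parameter values are placeholders for illustration only."""
    cx = c500 * x
    return P0 / (cx**gamma * (1.0 + cx**alpha) ** ((beta - gamma) / alpha))

def density_from_pressure(P, C=1.0, k=1.19):
    """Invert the polytropic relation of Eq. (3): n = (P / C)**(1 / k)."""
    return (P / C) ** (1.0 / k)

# Scaled radii from 0.01 R500 to 3 R500 (the outer boundary used in Section 3.2).
x = np.logspace(-2.0, np.log10(3.0), 200)
pressure = gnfw_pressure_shape(x)
density = density_from_pressure(pressure)
```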
With the gas density in hand, we can derive the electron-, proton- and helium-density profiles by scaling through the mean molecular weights \(\mu_{\mathrm{gas}}=0.61\), \(\mu_{e}=1.16\), \(\mu_{\mathrm{p}}=1.39\) and \(\mu_{\mathrm{He}}=14.6\)(see, e.g., Adam et al., 2020). The proton density profile of the Coma cluster, as obtained from the best-fit model describing the ROSAT data (Briel et al., 1992), is shown in Figure 1. It is compared to the model derived from our methodology, using the mass and redshift from the MCXC catalog (Piffaretti et al., 2011). The red line gives our reference model, i.e., the one obtained using the UPP profile combined with the polytropic relation. For further comparison, the UDP profile, as calibrated by Pratt et al. (2022) and Ghirardini et al. (2019), are given in green and orange, respectively. We can observe that the main differences between the data and the models, and among the models themselves, arise in the central part of
the cluster. This reflects the increased intrinsic scatter among the cluster population relative to the self-similar approximation in the cluster cores, while the consistency significantly improves at \(r\in[0.2R_{500},R_{500}]\)(see, e.g., Ghirardini et al., 2019, for details). More specifically in Figure 1, the Coma cluster is a merging system with a very flat core, thus presenting a smaller central density than that given in our mean model (we also refer to the appendix for further examples). The impact of the choice of the reference density model on our final results is discussed in Section 4.2.
### Magnetic-field profile
The profile of magnetic-field strength can be scaled to the thermal-gas density under several assumptions. Assuming the magnetic energy density to be proportional to the thermal energy, we have
\[\langle B^{2}(r)\rangle=2\mu_{0}P(r)/\beta_{\rm pl}, \tag{4}\]
with \(\mu_{0}\) the vacuum permeability. For the plasma, we set \(\beta_{\rm pl}=200\) following the results by Walker et al. (2017) on the Perseus cluster. By combining the central magnetic field of the Coma cluster measured by Bonafede et al. (2010) and the central pressure obtained by Planck Collaboration et al. (2013), we would instead estimate \(\beta_{\rm pl}=77\). Alternatively, assuming that the magnetic field is frozen into the plasma and amplifies under pure adiabatic compression with magnetic flux conservation, we have
\[\langle B^{2}(r)\rangle=B_{\rm ref}^{2}\left(\frac{n_{\rm gas}(r)}{n_{\rm gas} (r_{\rm ref})}\right)^{4/3}. \tag{5}\]
The normalization \(B_{\rm ref}\), taken at the radius \(r_{\rm ref}\), is defined using the reference Coma cluster, for which detailed measurements are available in Bonafede et al. (2010).
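The two magnetic-field scalings of Equations 4 and 5 can be written compactly as follows; this is a minimal sketch in SI units, with the default \(\beta_{\rm pl}\) taken from the text and the input pressure and density profiles assumed to be provided by the density model above.

```python
import numpy as np

MU_0 = 4.0e-7 * np.pi  # vacuum permeability [T m / A]

def b_from_pressure(P_pa, beta_pl=200.0):
    """Eq. (4): <B^2> = 2 mu_0 P / beta_pl, with P the thermal pressure in Pa.
    Returns the field strength in Tesla (1 T = 1e10 microGauss)."""
    return np.sqrt(2.0 * MU_0 * P_pa / beta_pl)

def b_from_density(n_gas, n_ref, B_ref):
    """Eq. (5): flux freezing under adiabatic compression,
    B = B_ref (n / n_ref)^(2/3), normalized at a reference radius."""
    return B_ref * (n_gas / n_ref) ** (2.0 / 3.0)
```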
In Figure 2, we compare the magnetic-field profile of the Coma cluster estimated from Faraday rotation measures (Bonafede et al., 2010) to our models. The red line gives the profile estimated using Equation 4, with \(\beta_{\rm pl}=200\), combined with the UPP from Arnaud et al. (2010). The orange line is based instead on \(\beta_{\rm pl}=77\). The green line uses Equation 5 with the density estimated from the UDP calibrated by Pratt et al. (2022). We observe that, despite the strong assumptions involved in our modeling, the predictions follow the measurements relatively well. This is also the case in the inner region of the cluster, where the environment is expected to play a major role for UHECR propagation.
In the following work, we use as a reference the UPP in Equation 1 to derive the density through the polytropic relation in Equation 3. The reference magnetic field is derived assuming constant magnetic to thermal energy density (Equation 4) with \(\beta_{\rm pl}=200\). We compare the different assumptions in appendix for a set of clusters with different morphologies and discuss the impact of these assumptions on UHECR propagation in the following section.
Figure 1: Thermal proton density profiles for the Coma cluster: the blue line gives the best-fit model to the ROSAT data (Briel et al., 1992), the red line gives the density obtained from the Ghirardini et al. (2019) polytropic relation combined with the Arnaud et al. (2010) UPP profile, the green and orange lines give the UDP from Pratt et al. (2022) and Ghirardini et al. (2019), respectively.
Figure 2: Magnetic field strength profiles for the Coma cluster: the blue line (and shaded region) shows the best-fit model (and the constrained range) as obtained from Faraday rotation measures (Bonafede et al., 2010), the red and orange lines give the models obtained when scaling the magnetic-field strength to the UPP and using \(\beta_{\rm pl}=200\) and \(\beta_{\rm pl}=77\), respectively, and the green line gives our model when scaling the magnetic-field strength to the gas density, with the UDP from Pratt et al. (2022).
## 3 Uhecr propagation in galaxy clusters
### Interactions and diffusion in a cluster
We compute the typical timescales for photo-hadronic and hadronic interactions of UHECRs in the cluster environment from a modified version of the Monte Carlo code _SimProp_(see Aloisio et al., 2012, 2015, 2016). We account for interactions with photons of the cosmic microwave and infrared backgrounds (CMB, CIB), as well as for hadronic interactions within the ICM.
Under the assumption of a monochromatic photon field of number density \(n_{\gamma}\), the typical interaction rate between a relativistic atomic nucleus (\(A\)) and a low energy photon is approximately \(\tau_{A\gamma}^{-1}\simeq c\sigma_{A\gamma}n_{\gamma}\), where \(\sigma_{A\gamma}\) represents the cross section of the process and \(c\) is the speed of light in vacuum. If a more realistic spectral energy distribution for the photon field is considered and the dependence of the cross section on the energy is taken into account, the interaction rate reads (Aloisio et al., 2013):
\[\frac{dN_{\rm int}}{dt}=\frac{c}{2\Gamma^{2}}\int_{\epsilon_{\rm th}^{\prime}}^{\infty}\sigma_{A\gamma}(\epsilon^{\prime})\epsilon^{\prime}\int_{\epsilon^{\prime}/2\Gamma}^{\infty}\frac{n_{\gamma}(\epsilon)}{\epsilon^{2}}\,d\epsilon\,d\epsilon^{\prime}, \tag{6}\]
where \(\Gamma\) is the Lorentz factor of the interacting nucleus. Note that primed symbols (e.g. \(\epsilon^{\prime}\)) refer to quantities in the nucleus rest frame, whereas unmarked symbols refer to quantities in the laboratory frame.
Though spallation processes between UHECRs and gas have negligible impact in the extragalactic medium, their role can be substantial in the ICM considering the effective time that relativistic particles spend in this environment. The timescale for the spallation process reads:
\[\tau_{\rm spal}=\left(n_{\rm ICM}\,\sigma_{\rm sp}\,c\right)^{-1}, \tag{7}\]
where \(n_{\rm ICM}\) is the ICM gas density and \(\sigma_{\rm sp}\) is the cross section for proton-proton or proton-nucleus interactions. This process has been implemented in _SimProp_ making use of the most recent hadronic model, Sibyll 2.3d (Riehn et al., 2020), a hadronic event generator. Details on the interface between the hadronic interaction model (HIM) and the in-source version of SimProp can be found in Condorelli et al. (2023).
In addition to interactions, diffusion in the magnetic field has to be taken into account. In fact, charged particles populating an astrophysical environment can be confined for a long time before escaping. The diffusion timescale reads: \(t_{\rm D}=R^{2}/D\), where \(R\) is the radius of the environment, and where \(D\) is the UHECR diffusion coefficient computed in the context of quasi-linear theory (Lee et al., 2017). The expression of the diffusion coefficient is: \(D\simeq cr_{L}^{2-\delta}\,l_{c}^{\delta-1}/3\), where \(r_{L}=E/qB\) is the particle Larmor radius, \(l_{c}\) is the coherence length of the magnetic field and \(\delta\) is the slope of the turbulence power-spectrum, while \(B\) is the strength of the turbulent magnetic field. We assume \(\delta=5/3\) as prescribed for a Kolmogorov turbulence cascade. Following Subedi (2017) and Reichherzer et al. (2022), we additionally consider the transition in the diffusion regime taking place when \(r_{L}\gtrsim l_{c}\). In this energy range, the diffusion coefficient is estimated as \(D=D_{0}(r_{L}/l_{c})^{2}\), where \(D_{0}\) is the value of the diffusion coefficient computed at the energy \(E_{0}\) such that \(r_{L}(E_{0})=l_{c}\). At the highest energies, the particle propagates ballistically so that the diffusion time tends to \(R/c\).
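The timescales above can be compared directly; the following minimal sketch (pure Python, CGS units) implements the spallation time of Equation 7 and the diffusion time with the quasi-linear scaling below \(l_c\), the \((r_L/l_c)^2\) scaling above it, and the ballistic floor \(R/c\). The spallation cross section used here is a rough placeholder, not the Sibyll 2.3d value.

```python
C_LIGHT = 2.998e10   # cm / s
KPC = 3.086e21       # cm
MPC = 3.086e24       # cm

def larmor_radius(E_eV, Z, B_uG):
    """r_L ~ 1.08 kpc x (E / 1 EeV) / (Z B / 1 uG) for an ultra-relativistic nucleus [cm]."""
    return 1.08 * KPC * (E_eV / 1.0e18) / (Z * B_uG)

def spallation_time(n_icm, sigma_sp=3.0e-26):
    """Eq. (7): tau_spal = (n_ICM sigma_sp c)^-1, with n_icm in cm^-3.
    sigma_sp ~ 30 mb is only a placeholder for the p-p / p-A cross section."""
    return 1.0 / (n_icm * sigma_sp * C_LIGHT)

def diffusion_time(E_eV, Z, B_uG, l_c_kpc=10.0, R_Mpc=1.0, delta=5.0 / 3.0):
    """t_D = R^2 / D, with D from quasi-linear theory below r_L = l_c,
    D = D0 (r_L / l_c)^2 above it, and a ballistic floor of R / c."""
    r_L, l_c, R = larmor_radius(E_eV, Z, B_uG), l_c_kpc * KPC, R_Mpc * MPC
    if r_L < l_c:
        D = C_LIGHT * r_L ** (2.0 - delta) * l_c ** (delta - 1.0) / 3.0
    else:
        D = (C_LIGHT * l_c / 3.0) * (r_L / l_c) ** 2
    return max(R ** 2 / D, R / C_LIGHT)
```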
Figure 3 summarizes the typical length-scales for interactions and escape in the source environment for a prototype cluster (see caption). The interplay between length-scales governs the shape of the UHECR flux as well as the nuclear composition at the escape from the cluster. The shortest length-scale for protons is always dictated by diffusion; this means that some protons can escape from the environment. For nuclei (e.g. nitrogen in Figure 3), photo-interaction lengths are the shortest at high rigidities for the chosen parameters of the cluster (see caption). Clusters with larger magnetic fields also present higher target densities, which reduces the hadronic interaction length and makes hadronic interactions predominant at lower rigidities.
Figure 3: Interaction and escaping lengths as a function of magnetic rigidity at the center of a prototypical galaxy cluster: photo-hadronic interaction times (dashed-dot lines), spallation times (dashed lines) and diffusion times (solid lines) for protons (red) and nitrogen nuclei (green). The Hubble radius (corresponding to the age of the universe) is shown as a long-dashed line. Length scales have been calculated assuming the following parameters: \(R_{500}=1\) Mpc, \(B=1\)\(\mu\)G, \(l_{c}=10\) kpc, \(n_{\rm ICM}=1\cdot 10^{-4}\) cm\({}^{-3}\).
### Implementation of ICM propagation in SimProp
In order to model the UHECR transport in clusters, we have developed an extension of _SimProp_. This software has been used so far in the context of the extragalactic propagation of UHECRs (see for instance Aab, 2017; Halim et al., 2023; Luce et al., 2022). SimProp implements different photo-disintegration cross sections and different models for the CIB. In this work, we adopt TALYS (Koning et al., 2005; Koning and Rochman, 2012) for the photo-disintegration cross sections and the CIB model of Gilmore et al. (2012), which are both representative of the state of the art. SimProp is a monodimensional propagator. Assuming spherical symmetry, all the particles are propagated along an axis of the cluster until they reach \(3\times R_{500}\), a distance beyond which the ICM has negligible impact with respect to the extragalactic medium.
We also consider the impact of the magnetic field on UHECR propagation. A charged particle moving through a turbulent magnetic field undergoes an angular deflection of \(\simeq\frac{l_{c}}{r_{L}}\) upon traversing a coherence length \(l_{c}\). A particle of energy \(E\) and charge \(q=Ze\) traversing a distance \(L\) suffers an overall angular deflection given by \(\theta(E,Z)\simeq\left(\frac{L}{l_{c}}\right)^{1/2}\frac{l_{c}}{r_{L}}\) (Hooper et al., 2007), which depends on the properties of the environment (\(B\), \(L\) and \(l_{c}\)) and of the particles (\(E\), \(Z\)). Such deflections result in an increase in the effective propagation length, \(L_{\rm eff}\), in the ICM given by (Armengaud et al., 2005):
\[\frac{L_{\rm eff}}{L}\simeq 65\left(\frac{E/Z}{10^{20}\,{\rm eV}/26}\right)^{- 2}\!\!\left(\frac{L}{1\;{\rm Mpc}}\right)\!\!\left(\frac{l_{c}}{10\;{\rm kpc} }\right)\!\!\left(\frac{B}{1\;\mu{\rm G}}\right)^{2} \tag{8}\]
Knowing the properties of the cluster, it is possible to compute the effective length and therefore the effective time that a particle spends in the environment.
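Equation 8 translates directly into code; a minimal sketch follows (the expression is meaningful in the regime where the resulting ratio exceeds unity).

```python
def effective_length_ratio(E_eV, Z, L_Mpc=1.0, l_c_kpc=10.0, B_uG=1.0):
    """Eq. (8): ratio L_eff / L for a nucleus of energy E (eV) and charge Z,
    traversing a distance L in a field of strength B with coherence length l_c."""
    rigidity = (E_eV / Z) / (1.0e20 / 26.0)
    return 65.0 * rigidity ** -2 * L_Mpc * (l_c_kpc / 10.0) * B_uG ** 2

# Example: an iron-like rigidity of 10^20 eV / 26 over 1 Mpc gives L_eff / L ~ 65.
```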
The propagation inside the cluster environment is determined according to the following methodology (see also the sketch after this list):

1. The propagation axis is divided into a given number of steps, \(n_{\rm steps}\geq A\), with \(A\) the atomic mass of the injected nuclei, sufficiently large to sample the interactions.
2. UHECRs are injected at a given point in the cluster and the propagation is performed only along the chosen axis.
3. The typical length-scales depend on the position, according to the magnetic-field and gas density profiles. The probability of interaction or escape changes as a function of the radius.
4. Particles are moved to the following step if the interaction probability is smaller than the escape one; otherwise they lose energy and their byproducts are accounted for in the following steps of the propagation.
5. Once a particle has reached the border, if the diffusion probability is larger than the interaction one, the particle escapes from the cluster environment and is propagated through the extragalactic medium.
6. Particles that spend a time greater than the age of the universe in the environment are considered trapped and are not propagated anymore. This is a conservative assumption: the dominant time is the minimum between the age of the cluster and the age of the oldest accelerator inside it, both smaller than the age of the universe.
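The per-step decision logic of steps 3-6 can be sketched as follows. This is a schematic illustration, not the actual SimProp implementation; the timescale callables (in seconds, as functions of the radius) are hypothetical stand-ins for the position-dependent quantities of Section 3.1.

```python
AGE_OF_UNIVERSE_S = 4.35e17

def propagate_along_axis(step_radii, interaction_time, escape_time, crossing_time):
    """Schematic sketch of steps 3)-6): at each step, compare the
    position-dependent interaction and escape timescales and track the time spent."""
    time_spent = 0.0
    for r in step_radii:
        t_int, t_esc = interaction_time(r), escape_time(r)
        if t_int < t_esc:                  # interaction more likely: lose energy,
            time_spent += t_int            # byproducts handled in later steps (step 4)
        else:                              # otherwise move on to the next step
            time_spent += crossing_time(r)
        if time_spent > AGE_OF_UNIVERSE_S:
            return "trapped"               # conservative trapping criterion (step 6)
    return "escaped"                       # border reached: extragalactic propagation (step 5)
```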
## 4 UHECR Flux escaping the ICM
Once particles escape from the magnetized environment, it is possible to evaluate what is the impact of the ICM on the UHECR spectrum as a function of the injection point. We inject \(10^{4}\) particles logarithmically distributed in the energy range \(10^{17}-10^{21}\) eV. The results are shown in Figure 4, where the escaping fluxes are represented as a function of rigidity. The spectra are normalized to the spectrum expected if interactions and diffusion in the ICM were neglected. One can notice how the closer the injection is to the nearest edge of the environment (at \(\approx+1\,\)Mpc), the more the escaping flux coincides with the injection spectrum: the gas and magnetic field densities are low and the propagating particles are less affected. If instead the UHECRs cross the center of the cluster (\(y\leq 0\)), the flux is reduced at low energies due to the trapping by the magnetic field (the so-called magnetic horizon for extragalactic propagation, e.g. Lemoine, 2005; Gonzalez et al., 2021). At the highest energies, fluctuations in the ICM transmission are an artifact of the normalization procedure, in a regime where interactions of UHECRs with the CMB are important.
More massive clusters present more intense magnetic fields at the center of the cluster, which shortens the magnetic horizon of UHECRs. The impact of the cluster magnetic field on the propagation of UHECRs is illustrated in Figure 5, where the escaping fluxes are shown as a function of rigidity, in this case assuming only protons at the injection.
### Parametrization of the UHECR escaping flux
We provide a parametrization of the escaping fluxes as a function of the mass of the cluster \(M\), of the position of injection point \(y\) and of the nature of the accelerated particles (protons or nuclei) in order to describe the escaping flux above the ankle. Four representative nuclear masses are studied: \({}^{1}\)H, \({}^{4}\)He, \({}^{14}\)N, \({}^{28}\)Si. The contribution from iron nuclei is neglected, as few, if any, are expected from simple cosmological models that describe data from the Pierre Auger Observatory (Aab, 2017; Halim et al., 2023; Luce et al., 2022).
We notice in Figure 4 that a cluster mostly affects the escaping spectrum when UHECRs cross its center. In fact, it is the place where the magnetic field is most intense and where the target density is the highest. For this reason, sources placed at \(y\leq 0\) would have escaping fluxes shaped by the propagation in the cluster environment, while the effect is weaker for host sources placed at \(y>0\), where the traversed magnetic field is milder. We therefore assume clusters to be transparent for accelerators placed at \(y>0\), while we provide a single parametrization of the transparency of the clusters for \(y\leq 0\). We define the transparency \(f(R)\) of a given cluster as the escaping flux divided by the one expected without interactions in the ICM. We approximate the transparency as a function of rigidity \(R\) by a broken power-law, with full transparency at the highest energies:
\[\log\ f(R)=\begin{cases}\Gamma\ \log(R/\rho)&R\leq\rho,\\ 0&R\geq\rho.\end{cases} \tag{9}\]
We notice that, in our equation above, the break rigidity, \(\rho\), depends on the mass of the cluster, \(M\), following to first order:
\[\log\rho=\log\rho_{0}+\xi\log(M/10^{15}M_{\odot}). \tag{10}\]
We parametrize the low-rigidity slope, \(\Gamma\), of the transparency function so that it reaches a maximum value of 2 at high cluster masses and softens at lower masses:
\[\Gamma=\frac{2}{1+\left(\frac{M}{M_{\rm free}}\right)^{-\sigma}}. \tag{11}\]
We find that \(\log(M_{\rm free}/M_{\odot})=14.4\pm 0.5\) is consistent with the transparency functions of both nuclei and protons. The parameter \(\sigma\) governs the evolution of the index with cluster mass. Also in this case, we find a common value \(\sigma=0.25\pm 0.10\) for both nuclei and protons.
The parameters are determined by fitting the model in Equation 9 to the escaping fluxes for different positions of the sources at \(y\leq 0\) and for different cluster masses, considering either protons only or nuclei only. We find best parameter values of \(\log(\rho_{0}/{\rm V})=20.0\pm 0.2\) for protons and \(\log(\rho_{0}/{\rm V})=24.3\pm 0.3\) for nuclei, while \(\xi=0.6\pm 0.1\) for protons and \(\xi=1.7\pm 0.2\) for nuclei. The comparison to simulated data is shown in Figure 6, for an injection at the center of the cluster.
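The parametrization of Equations 9-11, with the best-fit values quoted above, reduces to a few lines; a minimal sketch:

```python
import numpy as np

BEST_FIT = {"protons": {"log_rho0": 20.0, "xi": 0.6},
            "nuclei":  {"log_rho0": 24.3, "xi": 1.7}}
LOG_M_FREE, SIGMA = 14.4, 0.25   # common to both species

def transparency(R_volts, M_solar, species="protons"):
    """f(R) from Eq. (9), with the break rigidity of Eq. (10) and the
    low-rigidity slope of Eq. (11)."""
    p = BEST_FIT[species]
    log_rho = p["log_rho0"] + p["xi"] * np.log10(M_solar / 1.0e15)    # Eq. (10)
    gamma = 2.0 / (1.0 + (M_solar / 10.0**LOG_M_FREE) ** (-SIGMA))    # Eq. (11)
    log_f = np.minimum(0.0, gamma * (np.log10(R_volts) - log_rho))    # Eq. (9)
    return 10.0 ** log_f

# Example: proton transparency of a 10^15 M_sun cluster at a rigidity of 10^19 V.
f_p = transparency(1.0e19, 1.0e15, "protons")
```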
The two parameters that influence the rigidity at which the transition happens, i.e. \(\rho_{0}\) and \(\xi\), are larger for nuclei than for protons. This is due to the fact that the nuclei interact more than protons in the ICM, as shown in Figure 3; for this reason the transition to \(f(R)=1\) happens at higher rigidities for nuclei.
Our simulations show that clusters of mass \(M=10^{14}M_{\odot}\) or \(M=10^{15}M_{\odot}\), with central magnetic fields of 3 and 9 \(\mu\)G respectively, are able to trap nearly all protons up to the ankle. For lower magnetic fields, the effect of the ICM on protons is quite negligible above the ankle. Similar conclusions can be drawn for nuclei; nonetheless it is important to stress that they are fully disintegrated up to at least the ankle for clusters with central magnetic fields larger than 1 \(\mu\)G.
The proposed parametrization describes well the impact of the galaxy cluster on the escaping flux above
Figure 4: Escaping proton spectra from a cluster of \(M=10^{14}\ M_{\odot}\) as a function of the injection point. Positive (negative) \(y\) values correspond to positions closer to the nearest (furthest) edge of the cluster. The spectra are normalized to that expected without interactions in the ICM. The vertical line shows the ankle energy.
Figure 5: Escaping proton spectra from clusters with different central magnetic-field values, taken at 1 kpc (see legend), assuming injection at the center of the cluster. The spectra are normalized to that expected without interactions in the ICM. The vertical line shows the ankle energy.
the ankle. The approximation of considering the environment as transparent for sources at \(y>0\) describes well the results of the simulation for weakly magnetized clusters. For clusters with \(B\geq 3\)\(\mu\)G, the proposed parametrization for \(y>0\) overestimates the escaping fluxes on average by \(0.4\,\mathrm{dex}\).
Overall, we can conclude that only a few percent of nuclei can escape from clusters with \(B\geq 1\,\mu\)G up to energies of \(10^{19}\,\mathrm{eV}\) for He and \(10^{19.5}\,\mathrm{eV}\) for N. Protons are strongly suppressed as well in the most massive clusters: only \(40\%\) escape at \(10^{19}\) eV for a central magnetic field of \(3\,\mu\)G while practically none escape at this energy for \(B\geq 9\,\mu\)G. Galaxy clusters are thus hostile environments for UHECRs. The filtering is more intense for nuclei, which are fully disintegrated in the most massive clusters even in the outer regions of the environment.
### Impact of our assumptions
In this investigation, many assumptions have been made. This section aims to discuss their impact.
The assumption with the largest impact is the parametrization of the diffusion time, which in this work is based on the scaling laws expected from diffusion theory (see Sec. 3.1). Using instead Equation 20 of Harari et al. (2014), which is based on Monte-Carlo simulations, results in an even more opaque environment, with a transparency reduced by a factor \(\simeq 2\) above the ankle for protons injected in the center of the cluster.
The coherence length of the magnetic field in the ICM also influences our results. In this work, all the clusters are assumed to have the same coherence length, \(7\,\mathrm{kpc}\), based on observations of Coma. More detailed constraints on this quantity would be instrumental in determining the UHECR transparency of clusters on a case-by-case basis.
Another important topic to be discussed is the assumption of the magnetic-field and gas density profiles. Instead of the reference model detailed in Section 2, we performed the same analysis as in Section 4.1 using the best-fit models shown as blue lines in Figure 1 and 2 (see also appendix) for three different clusters: Virgo (Planck Collaboration et al., 2016), Coma (Adam et al., 2021) and Perseus (Churazov et al., 2004). In the three examined clusters, the differences in transparency are at maximum of the order of \(1\%\), irrespective of their morphology, therefore it is possible to affirm that the assumption of a UPP does not influence the main results of this work. The self-similar framework is largely driven by the cluster mass, which can be a difficult quantity to measure. The accuracy of cluster-mass estimates is thus expected to be a primary source of uncertainty. One should nonetheless note that, under our reference approach, the magnetic-field strength scales to first order as \(B\propto\sqrt{P_{500}}\propto M_{500}^{1/3}\), so that only an order-of-magnitude uncertainty has a strong impact on the UHECR transparency illustrated in Figure 6.
We only considered interactions with CMB and CIB photons, neglecting the contribution of stellar and dust-grain emission in the cluster. Harari et al. (2016) estimated such galactic contributions to be comparable to the CIB. Neglecting the galactic emission does not affect our results. This can be understood by looking at Figure 3, where the change of slope at around \(10^{19.6}\) eV in the curves labeled "photo-interaction" corresponds to the transition from lower-energy interactions with CIB to higher-energy interactions with CMB. For protons, the CMB is the only relevant field within the Hubble time. For nuclei, we investigated the impact of doubling the CIB density to model the galactic emission. The transparency changes only by a few percent. For this reason, as in Harari et al. (2016); Hussain et al. (2021), the galactic emission is neglected.
In this work, we adopt Sibyll 2.3d (Riehn et al., 2020) as the HIM. The systematics related to the use of this specific HIM cannot currently be explored, because no other HIMs are implemented in SimProp. This investigation is left for future work.
## 5 Discussion
In this work, we develop a detailed model to explore the extent to which galaxy clusters impact UHECR propagation. In particular, the modelling of the cluster
Figure 6: Transparency as a function of energy for protons, helium and nitrogen nuclei for different cluster magnetic fields (see legend), assuming an injection point at the center of the environment. The points show the results obtained from the simulations with errors resulting from the number of injected particles. The solid lines display the proposed parametrization. The vertical line shows the ankle energy.
environment and the use of a HIM for propagation in this environment represent novelties for UHECR propagation studies.
We work under the assumption of self-similarity. From this assumption, it is possible to derive the important quantities for UHECR propagation, namely the magnetic-field and the gas density profiles given the mass and the redshift of the clusters. We find that the cluster environment acts as a high-pass filter, allowing a fraction of UHE protons to escape while the UHE nuclei interact with the gas and photons present in the ICM.
This work presents some advances with respect to the previous literature. The use of software dedicated to the treatment of the cluster environment is new in UHECR physics; the conclusions of this work are in line with other works which predicted that galaxy clusters are hostile environments for UHE nuclei (Kotera et al., 2009; Harari et al., 2016; Fang and Murase, 2018), while they have a weaker although non-negligible effect on the propagation of UHE protons. For example, Harari et al. (2016) suggest in their Figure 6 that, for a cluster with a central magnetic field of \(B=1\)\(\mu G\) and coherence length of \(l_{\rm c}=10\) kpc, the environment is completely transparent for protons above the ankle energy (\(>5\cdot 10^{18}\) eV), while it slightly affects the escaping flux of intermediate-mass nuclei (70% transparency for carbon nuclei at 10 EeV) and heavy nuclei (9% for iron nuclei at 10 EeV). In our case, neglecting the differences in the profiles and parametrization of the diffusion coefficient, we can compare these results with those obtained for a structure of mass \(M=10^{13}\)\(M_{\odot}\), which corresponds to a central magnetic field of \(\simeq\) 1 \(\mu G\). The two results are comparable for protons, while our model predicts a significantly larger depletion of intermediate nuclei: 10% transparency in our work instead of 70% in Harari et al. (2016). We confirmed through simulations that this difference arises from the different treatment of the hadronic interactions; the use of a HIM, instead of a simple analytical model, can strongly influence the propagation of the cascade, affecting both the fraction of energy lost and the fragmentation of heavy nuclei. No direct comparison can be performed with the work of Fang and Murase (2018), which shows the cumulative spectra of UHECRs at Earth that escaped from a population of sources, nor with that of Kotera et al. (2009), where the UHECR flux escaping from a single cluster is arbitrarily normalized.
The present work leads to important conclusions for the emerging field of UHECR astronomy. Two different trends can be observed in the mass composition of UHECRs measured with the Pierre Auger Observatory (Yushkov, 2020): a transition from heavy to light mass composition is observed up to \(10^{18.3}\) eV, while data at higher energies suggest a transition to intermediate-heavy masses. Based on our simulations, we should not observe UHE nuclei coming from the inner regions of massive galaxy clusters above the ankle energy. This includes in particular the Virgo cluster, the closest galaxy cluster to us (\(d\simeq 16\) Mpc, \(M\simeq 1.2\cdot 10^{14}\)\(M_{\odot}\), from Planck Collaboration et al., 2016). The Pierre Auger Collaboration indeed does not see any indication of an excess in this direction, which could have important implications, as pointed out in Biteau et al. (2021). Assuming that the UHECR production rate follows the star-formation rate or stellar mass of nearly half a million galaxies, Biteau (2021) found that the computed skymaps should show some excess in the direction of the Virgo cluster, not present in the observed skymaps (Abreu et al., 2022). Our work confirms that this tension is lifted by magnetic trapping of UHECRs in Virgo, as was already hypothesized in Biteau et al. (2021) through a more naive argument (confinement time greater than the ballistic one). The result of our work reduces the discrepancies between the arrival direction model and the data, justifying the lack of UHE nuclei in the directions of the galaxy clusters and thus suggesting interesting pathways to investigate composition anisotropies.
Another application of our work is related to the dipole observed by the Pierre Auger Observatory above 8 EeV, whose direction is qualitatively explained from the distribution of local extragalactic matter and UHECR deflections in the Galactic magnetic field (Aab et al., 2017). The strong contribution to the dipole from the Virgo cluster inferred e.g. by Ding et al. (2021) assuming that the UHECR production rate follows the distribution of matter should be significantly lowered when accounting for magnetic trapping and shadowing in the ICM. This statement also holds for the Perseus cluster (\(d\simeq 74\) Mpc, \(M\simeq 5.8\cdot 10^{14}\)\(M_{\odot}\), from Urban et al., 2014), in the direction of which the Telescope Array collaboration claims an indication of an excess above \(5.7\cdot 10^{19}\) eV (Telescope Array Collaboration et al., 2021). From the analysis, it cannot be excluded that the UHECRs come from the vicinity or outer shocked region of the cluster. This work tends to exclude the possibility that Telescope Array and the Pierre Auger Observatory see UHE nuclei accelerated by a host source close to the center of the Perseus or Virgo cluster; either they have to come from the cluster outskirts or they have to be UHE protons, either primaries or secondaries produced by the fragmentation of heavy nuclei in the environment surrounding the accelerator.
An interesting result of this work concerns the role of filaments of the cosmic web (Kotera and Lemoine, 2008).
In these regions (Carretti et al., 2022), the turbulent component (inferred to be at the \(\simeq\) nG level) is weaker than the regular magnetic field (\(\simeq\) 30 nG), both much weaker than central magnetic fields of clusters. This means that while UHECRs are trapped in the central regions of galaxy clusters, they can escape from filaments as stated in Kim et al. (2019). If, as suggested by the authors, UHECRs are correlated with filaments connected to the Virgo cluster, they should escape from galaxies in the filaments.
A possible critical aspect beyond the scope of this work concerns the secondary production. In fact, interactions of UHECRs lead to an excess of secondaries, namely secondary cosmic rays, neutrinos and photons, which can escape from the environment and could be in tension with the current measurements. It should be noted that secondary protons produced by the fragmentation of heavy nuclei would remain trapped in the environment, so that they would not show up at lower energies. A natural step forward in this analysis would concern the multi-messenger connection, by taking into account the emission and propagation of photons and neutrinos in the environment. In this way, it would be possible to compare the escaping gamma rays with the possible excess observed by _Fermi_-LAT in the direction of the Coma cluster (\(d\simeq\) 100 Mpc, \(\textit{M}\simeq 7\cdot 10^{14}\) M\({}_{\odot}\), from Planck Collaboration et al., 2013), as well as to determine the expected sensitivity of upcoming gamma-ray and neutrino facilities at higher energies.
The authors would like to thank the reviewer for constructive suggestions, which helped to improve the quality of this work. AC and JB gratefully acknowledge funding from ANR via the grant MultI-messenger probe of Cosmic Ray Origins (MICRO), ANR-20-CE92-0052. AC acknowledges co-developers of SimProp for useful feedback in the development of the software.
In this appendix, we show how the models of the gas-density and magnetic-field profiles discussed in Section 2 match the available measurements. We use reference clusters for which magnetic field profile estimates are available in the literature (Vacca et al., 2018), using the mass and redshift taken from the MCXC catalog (Piffaretti et al., 2011). For a sufficiently large sampling of mass and dynamical state, we select the Perseus, Coma, A 194, Hydra A, A 2634 clusters. For each of them, a best-fit model (or a model that was matched to the data, in the case of the magnetic field) is reported: Churazov et al. (2004); Taylor et al. (2006); Walker et al. (2017) for Perseus, Briel et al. (1992) for Coma, Govoni et al. (2017) for A 194, Vogt & Ensslin (2003); Ensslin & Vogt (2006); Laing et al. (2008); Kuchar & Ensslin (2011) for Hydra A and Schindler & Prieto (1997) for A 2634. These are compared to experimental data from the Planck/ROSAT project (Eckert et al., 2012), the XCOP project (Eckert et al., 2017), and the ACCEPT data base (Cavagnolo et al., 2009) whenever available and to the models outlined in Section 2. The red lines correspond to the reference model of this work: UPP and polytropic assumption for determining the thermal gas density, constant magnetic to thermal energy density for the magnetic field. The other possible choices are detailed in Section 2.
In Figure 5, the best-fit models of the thermal-gas density profiles are matched to X-ray data. These models are beta models (or the sum of beta models), which present a flat core by construction and thus may be over-simplistic. Magnetic-field estimates derive from the matching of cluster simulations to radio data and/or Faraday-rotation measure, both generally assuming a scaling between the magnetic-field strength and the thermal-gas density. The uncertainties in these estimations are often not well defined. Some of the magnetic-field profiles from the literature may not be fit to the data. We conclude that our reference model is in acceptable agreement with measurements from the literature in the explored mass and redshift range, given the above-mentioned caveats in the measurement uncertainties and simplifying modeling assumptions. |
2310.00085 | PEACE: Prompt Engineering Automation for CLIPSeg Enhancement in Aerial
Robotics | Safe landing is an essential aspect of flight operations in fields ranging
from industrial to space robotics. With the growing interest in artificial
intelligence, we focus on learning-based methods for safe landing. Our previous
work, Dynamic Open-Vocabulary Enhanced SafE-Landing with Intelligence
(DOVESEI), demonstrated the feasibility of using prompt-based segmentation for
identifying safe landing zones with open vocabulary models. However, relying on
a heuristic selection of words for prompts is not reliable, as it cannot adapt
to changing environments, potentially leading to harmful outcomes if the
observed environment is not accurately represented by the chosen prompt. To
address this issue, we introduce PEACE (Prompt Engineering Automation for
CLIPSeg Enhancement), an enhancement to DOVESEI that automates prompt
engineering to adapt to shifts in data distribution. PEACE can perform safe
landings using only monocular cameras and image segmentation. PEACE shows
significant improvements in prompt generation and engineering for aerial images
compared to standard prompts used for CLIP and CLIPSeg. By combining DOVESEI
and PEACE, our system improved the success rate of safe landing zone selection
by at least 30\% in both simulations and indoor experiments. | Haechan Mark Bong, Rongge Zhang, Antoine Robillard, Ricardo de Azambuja, Giovanni Beltrame | 2023-09-29T18:44:27Z | http://arxiv.org/abs/2310.00085v4 | # PEACE: Prompt Engineering Automation for CLIPSeg Enhancement
###### Abstract
From industrial to space robotics, safe landing is an essential component for flight operations. With the growing interest in artificial intelligence, we direct our attention to learning-based safe landing approaches. This paper extends our previous work, DOVESEI, which focused on a reactive UAV system by harnessing the capabilities of open vocabulary image segmentation. Prompt-based safe landing zone segmentation using an open vocabulary based model is no longer just an idea, but was proven feasible by the work of DOVESEI. However, a heuristic selection of words for the prompt is not a reliable solution since it cannot take the changing environment into consideration, and detrimental consequences can occur if the observed environment is not well represented by the given prompt. Therefore, we introduce PEACE (Prompt Engineering Automation for CLIPSeg Enhancement), powering DOVESEI to automate the prompt generation and engineering to adapt to data distribution shifts. Our system is capable of performing safe landing operations with collision avoidance at altitudes as low as 20 meters using only monocular cameras and image segmentation. We take advantage of DOVESEI's dynamic focus to circumvent abrupt fluctuations in the terrain segmentation between frames in a video stream. PEACE shows promising improvements in prompt generation and engineering for aerial images compared to the standard prompt used for CLIP and CLIPSeg. Combining DOVESEI and PEACE, our system was able to improve successful safe landing zone selections by **58.62%** compared to using only DOVESEI. All the source code is open source and available online. 1
Footnote 1: [https://github.com/MISTLab/PEACE](https://github.com/MISTLab/PEACE)
## I Introduction
Logistics stands as a pivotal element across diverse sectors, ranging from e-commerce operations to complex military undertakings. The application of Uncrewed Aerial Vehicles (UAVs) [1, 2] is attracting growing research and industrial interest. Within this context, autonomous robots have emerged as an extensively sought-after solution. Notably, in modern urban environments, aerial robots are being explored as a compelling avenue to enhance last-mile delivery efficiency and reduce the carbon footprint. However, challenges concerning safety have significantly hindered the widespread adoption of flying robots in more densely populated areas. When not adequately designed and operated, they can represent a possible threat to structures, vehicles and the public in general, especially if problems arise with their geolocation or other sensory information in a way that could impede a safe landing. Therefore, our aim is to achieve secure emergency landings without the need for external communication, relying solely on onboard computational capabilities and the perceptual abilities of compact, lightweight cameras.
The goal of UAV safe self-landing is to identify and descend onto a designated Safe Landing Zone (SLZ), such as stable ground, relatively flat grassland, or open fields / areas / parks, while avoiding pedestrians, vehicles, and certain structures. In the event of a localization (e.g., GPS) or remote control communication failure, the operator should be able to regain control after landing. Existing common automatic landing systems employ either traditional localization and perception methods using Simultaneous Localization and Mapping (SLAM), constrained by sensor performance and computational resources, or utilize conventional deep learning-based image segmentation models, encountering domain adaptation challenges.
This work extends our previous work (DOVESEI [3]2), a system capable of running with only a monocular RGB camera that can "dynamically focus" by masking the received raw segmentation according to the system's current state. Thanks to the advancements in large language models (LLMs), DOVESEI can leverage open vocabulary models to allow it to be easily "tuned" only using language, without extensive data collection. The use of LLMs in aerial robotics is a relatively underdeveloped area of research, and the objective of DOVESEI was to introduce the possibility of using an open vocabulary model as a basis for segmentation and to dynamically focus (mask) it to improve the final system performance. However, DOVESEI is based on the open vocabulary segmentation model CLIPSeg [4], which requires appropriate prompt inputs, and the SLZ selection performance is closely related to the input prompts. CLIP [5] demonstrated that prompt engineering using "A photo of \(\{\}\)." (where \(\{\}\) represents the terms defining the class) already yields better results. Therefore, we used their standard prompt engineering as a baseline and refer to CLIP's original prompt engineering as _default_ and DOVESEI-DEF for a DOVESEI setup using that. The initial motivation for our work is the worse performance of DOVESEI's aerial prompt compared to DOVESEI-DEF. DOVESEI's original prompt engineering was created using CLIP Interrogator [6] to heuristically produce better prompts for the model. However, for the specific task of safe landing, this approach is not rigorous enough. Given the constantly changing aerial images, and therefore the data distribution shift, ill-suited prompts generated by CLIP Interrogator may lead to SLZ selection failures. To address this, we propose PEACE (Prompt Engineering Automation for CLIPSeg Enhancement), aimed at enhancing the model's adaptability and stability for the application in ever-evolving real-world scenarios. PEACE
enables the possibility of dynamically generating prompts that are specifically optimized for a given image. We believe that this is an important step towards developing more robust autonomous UAV systems.
To demonstrate the competitiveness of PEACE, our work takes into account limitations imposed by local authorities (e.g. altitude, noise levels) and focuses on safe landing zone selection starting at altitudes of 100 meters. We are aiming for 100 meters, which is an ambitious attempt given that most preceding works operate at altitudes up to 30 meters [7, 8, 9, 10, 11] to align with the capabilities of small stereo cameras. However, it is evident that our system can be easily used in parallel with other methods that provide odometry and 3D mapping, facilitating further improvement. Similar to DOVESEI, our work offloads the remaining 20m to be navigated using conventional 3D path planning methods, since stereo cameras with small baselines still have enough range to work in these situations.
## II Related Work
### _Safe Self-landing for UAV_
Many previous systems dealt with automatic UAV landing, but they would limit the maximum altitude [7, 8, 10, 11, 12] (e.g. the OAK-D Lite [13] has a small baseline, 75mm, limiting its functional maximum depth range to less than 19m), use more expensive and heavier sensors [14, 12, 15] (e.g. 3D LiDARs), or scan and create a map of the environment using Simultaneous Localization and Mapping (SLAM) [7, 8, 9] before deciding where to land. As the system in an emergency landing scenario may not be able to receive or send data, external positioning like a Global Navigation Satellite System (GNSS) may not be available. On the other hand, it is expected the UAV can be controlled by an extra onboard system through velocity commands and it is capable of rudimentary 3D position hold (e.g. barometer, range finder, optical flow, or equivalent sensor) to avoid major drifting. Consequently, our method does not use odometry or a map of the environment. An inherent advantage of the proposed methodology lies in its adaptability across diverse scenarios. By requiring only minimal parameter adjustments, this approach can cater to varying environments and operational conditions without necessitating extensive data collection or recalibration.
### _Aerial Image Segmentation_
Using aerial image segmentation to identify SLZ is a crucial aspect of a UAV's safe landing task. Past use of machine learning models [16][17] showed that those systems would not tolerate domain shifts, and were trained and tested for very specific scenarios. Thus, the emergence of Vision-Language models, exhibiting zero-shot performance, has garnered increasing attention. The key factor for the success of these models lies in accurate prompts, which require meticulous, usually humanly non-obvious, textual adjustments and proper design. In this process, fine-tuned prompts are crucial, as we observe that even a minor variation in wording can ultimately have a noticeable positive or negative impact on final segmentation performance. Inspired by prompt learning in language tasks, [18] introduced context optimization to automate prompt engineering for small-sample classification. Recent work [19] extended [18] to open vocabulary object detection, devising fine-grained automatic prompt learning and specialized background explanations to identify the necessary prompts. However, these methods [18][19] are primarily tailored for specific object detection tasks and yield sub-optimal results when applied to particular tasks such as aerial image semantic segmentation.
### _Open Vocabulary Model for Landing Zone Selection_
Safe landing zone selection through segmentation needs to work in different scenarios. In our previous work [3], we introduced an "open vocabulary" based model, DOVESEI, which allows the internal model to be fine-tuned by changing only its prompt, without needing extensive data collection. DOVESEI enabled adaptability to various scenarios to select SLZ with minimal adjustments to parameters, bypassing the necessity for extensive data accumulation for refining internal models through the use of CLIPSeg. However, as a crucial component of its potential safe landing zone segmentation, DOVESEI employs heuristically selected prompts to describe images. Our hypothesis is that the quality of safe landing zone segmentation is highly contingent on the appropriateness of the selected prompts.
## III System Design
### _Main System Architecture_
We develop our system based on DOVESEI and extend it using a ROS 2 [20] package that encompasses three discrete yet interconnected processes. These processes are the PEACE Prompt Generation, the Landing Heatmap Generation Service and the Main Processing Node.
Fig. 1: Our safe-landing system was implemented in ROS 2 and it is composed of three main blocks: UAV (flight controller, sensors), landing heatmap generation (receives a RGB image and produces a heatmap of the best places to land), and main processing node (orchestrates the data exchange with the UAV, sends velocity of landing commands).
### _PEACE Prompt Generation_
The selection of the best landing locations (in pixels) is executed through the use of the open vocabulary segmentation model CLIPSeg [4], which is based on CLIP [5], a model trained on a private dataset with an estimated 400 million image-text pairs. To provide some context on CLIP's training dataset scale, the widely recognized ImageNet [21] dataset contains _only_ 1.2 million images. Our previous work DOVESEI used the prompt modifier "aerial view, drone footage photo of \(\{\}\), shade, shadows, low resolution." with CLIPSeg. This combination of words was heuristically chosen to perform better than the baseline prompt "A photo of \(\{\}\)." used by the original CLIP and CLIPSeg, since it aligns better with aerial views from rendered simulations and real-world satellite images, with the goal of generating an aerial segmentation and a raw safe landing heatmap. PEACE enhances this process by enabling two improved features:
1. Widening the scope of words from a preset combination of words to a list of description types.
2. Automated prompt engineering to select the words that best match the description of a given image frame.
This is achievable through the PEACE system (see Fig. 2). The highlight of this system is the improvements made to the existing CLIP Interrogator [6] to specifically benefit aerial image segmentation. First of all, CLIP Interrogator is designed to describe everything observed within an image in an artistic fashion. Thus, its lists of description types were artists, mediums, movements, trending and flavors. In addition, it uses BLIP [22] to generate detailed captions to maximize the precision of the image description. For CLIPSeg, we replaced these lists of description types with: resolution of an image, positive targets (potential landing zones such as grass, open-field, sidewalk, dirt, garden, vegetation, etc.), type of _frame_ (photo, screen, image, view, etc.) and _environment_ (sunny, rainy, foggy, snow, bright, dark, etc.). We experimented with description types such as aerial (high, top, altitude, etc.) and context-aware (next to, top of, behind, etc.) but they did not generally improve the results. BLIP captions were also removed from the prompts, since our objective is to target a selection of potential safe landing zones, and describing every single detail in an image was excessive and not helpful for segmentation. The best results were obtained with the prompt "A _frame_ of \(\{\}\) in _environment._". As a result, PEACE feeds an observed image to the modified CLIP Interrogator, generating an optimal prompt specific to the image frame, and it passes this prompt alongside the image as inputs to CLIPSeg to generate a safe landing zone segmentation heatmap. The ability to dynamically generate optimized prompts per image frame in an autonomous manner is the key contribution of PEACE.
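The word-selection idea can be sketched as follows, using the Hugging Face implementation of CLIP; the word lists, checkpoint name and function names below are illustrative assumptions rather than the exact ones used in the PEACE repository.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative word lists; the actual PEACE lists (resolution, positive landing
# targets, frame type, environment) live in the open-source repository.
WORD_LISTS = {
    "frame":       ["photo", "screen", "image", "view"],
    "environment": ["sunny", "rainy", "foggy", "snow", "bright", "dark"],
}

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch16")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16")

def best_word(image, words):
    """Return the word whose CLIP text embedding has the highest cosine
    similarity with the CLIP image embedding."""
    inputs = processor(text=words, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        img = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    sims = torch.nn.functional.cosine_similarity(img, txt)
    return words[int(sims.argmax())]

def peace_prompt(image, landing_class="grass"):
    """Assemble the per-frame prompt template 'A <frame> of {} in <environment>.'"""
    frame = best_word(image, WORD_LISTS["frame"])
    environment = best_word(image, WORD_LISTS["environment"])
    return f"A {frame} of {landing_class} in {environment}."

# Example: prompt = peace_prompt(Image.open("aerial_frame.png"))
```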
### _Landing Heatmap Generation Service_
The heatmap generator receives as inputs an aerial image in RGB format and the textual prompts that describe the given image. This generator uses a pre-trained semantic segmentation model to generate a comprehensive (zero-shot) heatmap. This heatmap is the crucial component that allows DOVESEI to select the optimal (referred to throughout this work as "best") landing positions within the context of the current observable image frame.
Our segmentation model for generating the landing heatmap is based on CLIP. CLIP is a foundation model with promising performance on zero-shot learning. Just like GPT [23], CLIP is able to learn to execute a variety of operations (OCR, geo-localization, action recognition, etc.) during pre-training. CLIP takes a batch of \(N\) (image, text) input pairs:
\[(i_{input}\in\mathbb{R}^{3\times H\times W},t_{input}\in\Sigma^{*}) \tag{1}\]
where \(\Sigma\) is an alphabet [24], and generates \(N\times N\) possible (\(i_{output}\), \(t_{output}\)) pairings. The innovation behind CLIP comes from jointly training an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples using cosine similarity, while common classifiers jointly train an image feature extractor and a linear classifier to predict a label. Cosine similarity measures the cosine of the angle between two vectors projected in a multi-dimensional space. In detail, CLIP is trained with the aim of maximizing the cosine similarity of the image and text embeddings of the \(N\) real pairs in the batch while minimizing the cosine similarity of the embeddings of the \(N^{2}-N\) incorrect pairings. Although our work does not implement CLIP directly, our segmentation system takes advantage of CLIP's capabilities and consists of two tasks, which are optimized prompt engineering and zero-shot segmentation. PEACE is used to
Fig. 2: Visual description of PEACE. An observed image is passed through a modified CLIP Interrogator to generate an optimized prompt for the given image. The optimized prompt and the image are used as inputs to CLIPSeg to generate segmentation.
optimize prompts. An input image \(i_{input}\) is passed to PEACE to obtain a caption that gives a general description of the image and, in parallel, to CLIP to generate embeddings. This encoded information is compared with embeddings from lists of words heuristically selected to describe aerial images. Lists and words can be expressed as follows:
\[L=\left(\begin{array}{c}l_{1}\\ l_{2}\\ \vdots\\ l_{n-1}\\ l_{n}\end{array}\right)\quad l_{i}=\left(\begin{array}{c}w_{i,1}\\ w_{i,2}\\ \vdots\\ w_{i,n-1}\\ w_{i,n}\end{array}\right) \tag{2}\]
where \(L\) is a set of lists and each list \(l_{i}\), with \(i,j\in\mathbb{N}\), contains a set of words \(w_{i,j}\). From these lists, the four words with the highest cosine similarity are selected. The final result is used as the engineered prompt for CLIPSeg's input. CLIPSeg is a unified model capable of generating a segmentation map based on prompts in the form of text or images. This model uses the visual transformer-based (ViT-B/16) CLIP with a transformer-based decoder extension. This decoder generates the binary segmentation through a linear projection on the tokens of its last transformer layer:
\[\mathbb{R}^{(1+\frac{W}{P}\times\frac{H}{P})\times D}\rightarrow\mathbb{R}^{W \times H} \tag{3}\]
where \(P\) is the token patch size of CLIP and \(D\) is the token embedding size. CLIPSeg extended the PhraseCut dataset [4] to enable training using image-text interpolation, yielding a joint model that operates on text and visual inputs.
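For reference, obtaining a landing heatmap from CLIPSeg for a single prompt can be sketched with the Hugging Face transformers wrappers as below; the checkpoint name and the simple sigmoid/threshold post-processing are assumptions for illustration, while the actual service additionally handles ROS 2 messaging and multiple prompts.

```python
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

def landing_heatmap(image, prompt, threshold=0.5):
    """Return a binary landing mask and the raw probabilities for one prompt.
    The logits come back at the model's low working resolution (352 x 352)."""
    inputs = processor(text=[prompt], images=[image], return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits.squeeze()
    probs = torch.sigmoid(logits)
    return (probs > threshold).numpy(), probs.numpy()

# Example with a PEACE-engineered prompt:
# mask, probs = landing_heatmap(Image.open("aerial_frame.png"),
#                               "A photo of grass in sunny.")
```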
### _Main Processing Node_
The main node is responsible for the high-level control of the whole system and is directly connected to the UAV flight controller. Its functionality can be summarised within three core components: the main state machine, raw heatmap post-processing, and dynamic focus.
#### Iii-D1 State Machine
The state machine controls the dynamic behavior of our system; its main states are listed below (a minimal sketch of the transition logic follows the list):
1. Searching: coarse search for a landing spot from a safe (collision free) altitude.
2. Aiming: refined search to better align the UAV with a safe landing spot.
3. Landing: descend while checking for dynamic obstacles.
4. Waiting: stop and wait if any obstacles were detected after it started landing.
5. Climbing: climb back to the safe altitude if the waiting phase triggered a failure.
6. Restarting: restart the coarse search by moving to a new starting position.
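A minimal sketch of these states and their transitions follows; it is simplified with respect to the actual ROS 2 node, and the boolean flags are hypothetical condition checks computed elsewhere in the main node.

```python
from enum import Enum, auto

class State(Enum):
    SEARCHING = auto()
    AIMING = auto()
    LANDING = auto()
    WAITING = auto()
    CLIMBING = auto()
    RESTARTING = auto()

def next_state(state, found_spot, aligned, obstacle, wait_failed, at_safe_altitude):
    """Schematic transition function following the state descriptions above."""
    if state is State.SEARCHING and found_spot:
        return State.AIMING
    if state is State.AIMING and aligned:
        return State.LANDING
    if state is State.LANDING and obstacle:
        return State.WAITING
    if state is State.WAITING:
        return State.CLIMBING if wait_failed else State.LANDING
    if state is State.CLIMBING and at_safe_altitude:
        return State.RESTARTING
    if state is State.RESTARTING:
        return State.SEARCHING
    return state
```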
#### Iii-D2 Raw Heatmap Post-processing
The raw segmentation received from the Landing Heatmap Generation Service (III-C) is a binary mask and it alone is not enough to allow the selection of the next movement direction. To compute this direction, it's essential to choose the optimal pixel for landing, which involves passing the raw segmentation through the pipeline depicted in Fig. 3. A final "best" pixel position is found using the objective function (the higher the value, the better) below:
\[P_{best}=\frac{Area}{Perimeter}\cdot(C_{dist}+1)^{-1} \tag{4}\]
where \(P_{best}\) is the objective function value defining the best pixel to land on (see Fig. 3), \(Area\) and \(Perimeter\) refer to the continuous segmentation patch the pixel belongs to (after the Distance Map, see Fig. 3), and \(C_{dist}\) is the distance to the centre of the image (the UAV's virtual position).
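A sketch of evaluating Equation 4 on a binary segmentation (assuming OpenCV and NumPy) is given below; it only mirrors the pipeline of Fig. 3 schematically, since the real system also applies temporal averaging, the distance map and the dynamic focus mask before scoring.

```python
import cv2
import numpy as np

def best_landing_pixel(mask: np.ndarray):
    """Evaluate Eq. (4) over each connected patch of a binary mask and return
    the winning (row, col) pixel; here the candidate pixel of each patch is the
    contour point closest to the image center (a simplification)."""
    h, w = mask.shape
    center = np.array([h / 2.0, w / 2.0])
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    best_score, best_pixel = -np.inf, None
    for cnt in contours:
        area, perimeter = cv2.contourArea(cnt), cv2.arcLength(cnt, True)
        if perimeter == 0.0:
            continue
        pts = cnt.reshape(-1, 2)[:, ::-1].astype(float)    # (x, y) -> (row, col)
        dists = np.linalg.norm(pts - center, axis=1)
        score = (area / perimeter) / (dists.min() + 1.0)   # Eq. (4)
        if score > best_score:
            best_score, best_pixel = score, tuple(pts[dists.argmin()].astype(int))
    return best_pixel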
#### Iii-D3 Dynamic Focus
The dynamic focus regulates the extent to which the raw heatmap is processed by the Raw Heatmap Post-processing module (III-D2). It "focuses", i.e. applies a binary mask that covers portions of the input, on the most important areas according to the current system state (III-D1) and its operation is illustrated in Fig. 4. The dynamic
Fig. 4: The focus mask radius (**R** in the illustration above) continuously varies (Eq. 5), expanding or shrinking, according to the current state of the system. Its minimum size is limited by the UAV’s projection on the ground (multiplier factor 6X for Aiming and 2X for Landing), while its upper limit is when the image is inscribed in the circle.
Fig. 3: The visual servoing system takes as input raw segmentation heatmaps (pixels with classes considered good to land on), averages them over time (UAV’s maximum bank angle is limited, constraining its maximum horizontal speed), creates a distance map, applies our dynamical focus masking method, and finally the objective function, Eq. (4), to decide on the best pixel.
(instantaneous) radius of the mask is defined by the equation below:
\[R_{focus}=R_{focus}+(S-R_{focus})\lambda \tag{5}\]
where \(R_{focus}\) is the dynamic focus radius (the **R** in Fig. 4), \(S\) is the UAV's safe-radius (user-defined parameter for maintaining a minimum clearance from obstacles) projection on the ground multiplied by a factor depending on the current state (6X for Aiming and 2X for Landing), and \(\lambda\) is a constant (empirically set to 0.1) that controls its decay or increase speed.
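The radius update of Equation 5 is a simple exponential smoothing toward a state-dependent target; a minimal sketch (the multiplier used for states other than Aiming and Landing is a placeholder):

```python
def update_focus_radius(r_focus, safe_radius_px, state, img_w, img_h, lam=0.1):
    """Eq. (5): exponentially move the focus radius toward the state-dependent
    target S (6x the safe-radius ground projection while Aiming, 2x while
    Landing; other states use a placeholder multiplier here)."""
    multiplier = {"AIMING": 6.0, "LANDING": 2.0}.get(state, 1.0)
    target = multiplier * safe_radius_px
    r_new = r_focus + (target - r_focus) * lam
    # Clamp between the UAV ground projection and the circle inscribing the image.
    upper = 0.5 * (img_w**2 + img_h**2) ** 0.5
    return min(max(r_new, safe_radius_px), upper)
```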
## IV Experimental Setup
In order to validate our PEACE system, we tested it with the Aerial Semantic Segmentation Drone Dataset [25] and high-resolution satellite images from Paris, France, sourced from our open-source specialized ROS 2 package3. We cannot guarantee it, but we do not expect that these specific images, matched with our prompts, were ever seen by CLIP during training; therefore, we consider this setting zero-shot. The Aerial Semantic Segmentation Drone Dataset consists of 400 aerial images in a sub-urban setting, with altitudes ranging from 5 to 30 meters, and their corresponding segmentation masks. For the experiments using our aerial view simulator, a total of 50 trials were carried out for each configuration. As in [3], the simulated UAV always starts at an altitude of 100m and in a random location uniformly sampled from the rectangle formed by the latitude and longitude coordinates pairs (48.88105, 2.29263) and (48.83723, 2.38996). This bounding box is illustrated in Fig. 5.
Footnote 3: [https://github.com/ricardodeazambuja/ROS2-SatelliteAerialViewSimulator](https://github.com/ricardodeazambuja/ROS2-SatelliteAerialViewSimulator)
## V Results
Our PEACE system was compared with the default prompt engineering (used for the original CLIP and CLIPSeg) and DOVESEI's aerial prompt (Table I). We added noise to make the images slightly blurred, reflecting images observed at high altitude while flying in a real environment. As outlined in the introduction, DOVESEI performed worse than DOVESEI-DEF. As shown in Table I, the PEACE system increased mIOU by 29.17% compared to DOVESEI's aerial prompts, which is a significant improvement given that this enhancement is purely prompt engineering based.
To validate that the PEACE system can truly improve SLZ selection compared to the prompts used in DOVESEI and DOVESEI-DEF, we conducted 50 identical experiments for DOVESEI-PEACE, DOVESEI and DOVESEI-DEF. DOVESEI-PEACE increased successful SLZ selections by 58.62% compared to DOVESEI (Table II).
As outlined earlier, our success criterion is reaching an altitude of 20m over a suitable landing area, thus enabling a low-cost, lightweight stereo camera and conventional 3D path planning techniques to be applied. Hence, goal positions are not set in advance but are instead determined manually through visual inspection. Experiments exceeding the maximum allowed time (1200s) are terminated.
From a total of 50 experiments, a possible explanation for the higher average time spent by DOVESEI is its prompt engineering, which made the segmentation more brittle and led the system to be less confident (Table II). More specifically, optimized prompt engineering can make the segmentation more stable and increase the system's confidence, so it would not have to enter the waiting state to figure out a better SLZ. If our hypothesis is correct, the longer average horizontal distance for DOVESEI-PEACE is due to finding a safer SLZ that is further away, or to finding a better SLZ during descent and adjusting the path, beyond the SLZ selections of DOVESEI and DOVESEI-DEF. This reasoning can explain the higher total of successful SLZ selections for DOVESEI-PEACE.
For a clearer depiction of a typical successful SLZ selection, and due to space constraints in this publication, Fig. 6 displays the satellite images alongside the UAV's path for one experiment using DOVESEI-PEACE. The paths for all systems tested, from the same starting location, are presented in Fig. 7. While the UAV is navigating in the Searching state (III-D1), the behaviour of all systems depends on their segmentation, as expected.
Fig. 5: Satellite image of Paris showing the latitude and longitude bounding box used to uniformly sample the 50 starting positions for our experiments (red box, dashed) [26]
## VI Discussions and Conclusions
In this paper, we propose PEACE, a novel Prompt Engineering Automation for CLIPSeg Enhancement, extending our previous Dynamic Open Vocabulary Enhanced Safe-landing with Intelligence (DOVESEI) work by introducing an optimization method for automating prompt generation. Combining PEACE with DOVESEI improved the total successful runs by 58.62% compared to using only DOVESEI. Differing from traditional methods that rely fully on conventional solutions rooted in 3D SLAM and path planning, this study investigates the feasibility of transferring much of the system's intelligence to an open-vocabulary semantic segmentation model that uses only an RGB monocular camera, leaving only the final meters to be handled by a small stereo camera and traditional 3D path planning. The reliance on semantic segmentation, and the observed susceptibility of such systems to fluctuations even with slight alterations in input prompts, prompted the development of our prompt engineering automation mechanism.
We assemble PEACE with DOVESEI and conduct a series of experiments to demonstrate the importance of optimizing the prompts in open-vocabulary model-based image segmentation for UAV safe landing. Experimental results show the better performance of our system in enhancing the UAV landing success rate. Our automated prompt generation and dynamic focus significantly enhance the UAV's ability to select suitable landing sites for safe landing. It is worth noting that our automated prompt engineering method does not have to run on every frame of the video stream, since the maximum speed is limited and the frame rates are high.
The fact that most previous works focused on low initial altitudes [10, 11, 12, 7, 8] or equipment with higher complexity [15, 12] and weight that would not fit in a small UAV [14], makes us confident in the potential of our approach, underscoring the significance of further exploration and expansion of this research in the future.
Since the objective of our work is safe landing, robustness on segmentation is paramount. Such systems achieve high scores in zero-shot tasks and, therefore, offer improved generalisation capabilities over models that use a closed vocabulary (e.g. a model trained only on MS COCO [27] classes). More efforts are needed to advance prompt engineering optimized for aerial images together with the understanding of what is considered safe given an environment. Enhancing our PEACE system with few-shot learning from CLIPSeg could be an interesting attempt to improve on aerial image segmentation.
Furthermore, our simulated UAV stopped at an altitude of 20m without exploring the final meters where small stereo vision sensors would have enough range to allow the use of 3D based path planning. Validating our work with rigorous indoor and outdoor testing would be one of the next logical milestones.
Such efforts would be advantageous in the context of detecting secure landing zones.
|
2309.14187 | Two tricks to trivialize higher-indexed families | The conventional general syntax of indexed families in dependent type
theories follow the style of "constructors returning a special case", as in
Agda, Lean, Idris, Coq, and probably many other systems. Fording is a method to
encode indexed families of this style with index-free inductive types and an
identity type. There is another trick that merges interleaved higher
inductive-inductive types into a single big family of types. It makes use of a
small universe as the index to distinguish the original types. In this paper,
we show that these two methods can trivialize some very fancy-looking indexed
families with higher inductive indices (which we refer to as higher indexed
families). | Tesla Zhang | 2023-09-25T14:47:47Z | http://arxiv.org/abs/2309.14187v2 | # Two tricks to trivialize higher-indexed families
###### Abstract.
The conventional general syntax of indexed families in dependent type theories follows the style of "constructors returning a special case", as in Agda, Lean, Idris, Coq, and probably many other systems. _Fording_ is a method to encode indexed families of this style with index-free inductive types and an identity type.
There is another trick that merges interleaved higher inductive-inductive types into a single big family of types. It makes use of a small universe as the index to distinguish the original types.
In this paper, we show that these two methods can trivialize some very fancy-looking indexed families with higher inductive indices (which we refer to as _higher indexed families_).
###### Contents
* 1 Motivation
* 2 The peculiar definition of integers
* 3 Relation to circle
* 4 Fording indexed families
* 5 Further discussion on syntax
In this article, we show that the idea of _Fording_ proposed by Conor McBride in [1] makes the "mini-universe" definition of integers \(\mathbb{Z}_{U}\) in [2] trivial.
**Warning 0.1**.: This is a very syntax-minded article, so there will be pseudocode snippets to provide intuition based on imaginary syntax, but those will be mostly analogous to the style in the HoTT book [3] and Cubical Agda [4].
We will also use the Aya language, which adapts a "pattern matching" syntax [5] for indexed families. For instance, the traditional definition of sized vectors looks like the following:
\[\begin{array}{l}\text{open data }\text{Vec }(A:\text{Type})\ (n:\text{ Nat})\\ \mid A,\text{ zero}\Rightarrow\text{nil}\\ \mid A,\text{ suc }n\Rightarrow\text{cons }A\ (\text{Vec }A\ n)\end{array}\]
It says that "the type Vec has two constructors: nil is only available when the second argument of Vec is zero, and similarly for cons, but only when it is suc \(n\)." The arguments of the constructors are listed like in Haskell, and unlike the GADT syntax adopted in Haskell [6], Agda [7], and Idris, we do not write out the return type of constructors.

This _pattern matching_ does not need to be exhaustive and can have _redundant_ cases, because the clauses are not actually pattern matching, but instead describe the _availability_ of constructors. This syntax enjoys many benefits compared to the traditional one, discussed in [5].
## 1. Motivation
The definition of \(\mathbb{Z}_{U}\) is said to be inspired by a trick due to Szumi Xie1, which is also seen in the Haskell parser generator BNFC2, where the --haskell-gadt option generates a big GADT that combines several data types in the following way:
Footnote 1: Used in [https://bitbucket.org/szumixie/tt-in-tt/src/master/Cubical/Syntax.agda](https://bitbucket.org/szumixie/tt-in-tt/src/master/Cubical/Syntax.agda), a formalization of type theory syntax in Cubical Agda [4]
Footnote 2: Website: [https://bnfc.digitalgrammars.com](https://bnfc.digitalgrammars.com), GitHub: [https://github.com/BNFC/bnfc](https://github.com/BNFC/bnfc)
1. Define an "enumeration" \(U\), say, a data type with all constructors having no argument,
2. For every data type \(D\), add a constructor \(D_{U}\) to \(U\),
3. Define a GADT \(T\) indexed by \(U\), say, \(T:U\rightarrow\mathcal{U}\),
4. For every constructor \(c:X\to D\), add the following constructor to \(T\): \[c_{T}:X\to T\;D_{U}\]
**Demonstration 1.1**.: Suppose we have the following data types, written in Haskell:
data D1 = con\({}_{1}\) Int \(|\) con\({}_{2}\) Bool
data D2 = con\({}_{3}\) Bool \(|\) con\({}_{4}\) Int
By applying the trick, we get the following GADT:
data U = D1_U \(|\) D2_U
data T : (\(u:\) U) \(\rightarrow\) Type where
con\({}_{1}\)_T : Int \(\rightarrow\) T D1_U
con\({}_{2}\)_T : Bool \(\rightarrow\) T D1_U
con\({}_{3}\)_T : Bool \(\rightarrow\) T D2_U
con\({}_{4}\)_T : Int \(\rightarrow\) T D2_U
\(\blacktriangle\)
Szumi Xie's trick is essentially the same thing but applied to higher inductive-inductive types, so we can encode interleaved recursion among constructors with an indexed higher inductive type. This pattern is very common in the formalization of dependent type theory itself in dependent type theory.
So, relating this inspired definition with something else could potentially open up a broader view of the trick, especially in homotopy type theory or dependent type theories with quotient inductive types [8].
We start from an explanation of \(\mathbb{Z}_{U}\) assuming familiarity with basic ideas in homotopy type theory [3], including univalence, the notion of equivalence (a function with the property of being an equivalence), and higher inductive types (chapter 6 of [3]). These will be used as the context of discussion, but we will not go into details, so even a vague impression should suffice. Then we explain the Fording technique, and show how to Ford \(\mathbb{Z}_{U}\).
## 2. The peculiar definition of integers
In [2], Thorsten Altenkirch and Luis Scoccola proposed the following definition of integers using a "mini-universe":
open data U
| T
| path : T = T
open data PreInt U
| T \(\Rightarrow\) zero
def Int \(\Rightarrow\) PreInt T
They call it \(\mathbb{Z}_{U}\), but we stick to an ASCII name for a controlled use of Unicode characters in program source code. This definition is not expected to be understood just by looking at it - the rest of this section is devoted to providing the relevant background.
**Remark 2.1**.: In the above snippet, PreInt is a _simpler indexed type_[5]. It is equivalent to the following Agda-style constructor:
\[\text{zero}:\text{PreInt T}\]
To decipher this definition, we need to understand what integers are. Instead of just defining the sets (i.e. list its elements) or the _normal form_ of integers, we look at the algebraic structure.
We think of them as a modified version of natural numbers, where the successor function is not just a constructor (which is an injective map), but is also equipped with a "canonical inverse" (so it's now bijective), so it becomes an equivalence rather than a function.
So, it is tempting to write down the following definition:
data Int where
zero : Int
succ : Int \(\simeq\) Int
How do we deal with elimination of this type?
**Observation 2.2**.: Maps from a natural number to \(X\) always provide the following information:
* An instance \(x:X\),
* A function \(f:X\to X\).
This is because every map \(h\) from a natural number to \(X\) consists of:
* \(h(0)\), which is an instance of \(X\),
* \(h(n+1)\), which can use \(h(n):X\) and constructs an instance of \(X\), which is essentially a map from \(X\) to \(X\).
Maps from an integer, analogously, always provide the following information:
* An instance \(x:X\),
* An equivalence \(e:X\simeq X\).
This is what we need to provide in the eliminator!
But the above imaginary syntax is not supported by any known type theory (which may also worth exploration!). One way to make sense of this is to unfold the definition of equivalence, which is discussed in detail in [2].
**Observation 2.3**.: For the definition of \(\mathbb{Z}_{U}\), we may apply PreInt to path to get the analogous thing of succ as in observation 2.2:
def succPath : Int \(\uparrow=\) Int \(\Rightarrow\) ap PreInt path
The \(\uparrow\) symbol lifts the = to a higher universe level, so we can put types on the two sides of a path.3 It is harmless in this particular case to just ignore it. The design follows Conor McBride's proposal of implementing universe polymorphism, studied in [9].
Footnote 3: Ideally, this should be inferred automatically, but we haven’t implemented it yet.
**Observation 2.4**.: The equivalence in observation 2.3 gives the successor and predecessor function on integers, as well as the fact that they are inverse to each other.
The other way is to think about Szumi Xie's trick. When we want to define \(n\) separate types, we can define one type _indexed_ by an enumeration of \(n\) constructors. The enumeration is considered a "mini-universe", as it _governs_ several types.
Now we want to define a type Int with an equivalence on itself, so instead we define a type indexed by a "higher-enumeration" U with two constructors:
* Int for the type we want to define,
* path of type Int = Int for the equivalence on the type itself. We work with path instead of the actual equivalence definition because only the former is supported by higher inductive types.
Then, the type indexed by U (we call it PreInt) will have some constructors that correspond to the constructors of Int. For zero, we add
\[\text{zero}_{T}:\text{PreInt}\;T\]
And the good news is that we don't need to do anything for succ, because we can directly derive it from path induction on path. Assuming the following operator for substitution and path reversal (implementation omitted to avoid distraction):
def subst (\(P:\text{A}\rightarrow\text{Type}\)) {\(a\;b:\text{A}\)} (\(p:a=b\)) : \(P(a)\to P(b)\)
def sym {\(a\;b:\text{A}\)} (\(p:a=b\)) : \(b=a\)
We can define the functions succ and even pred as follows:
def succ : Int \(\rightarrow\) Int \(\Rightarrow\) subst PreInt path
def pred : Int \(\rightarrow\) Int \(\Rightarrow\) subst PreInt (sym path)
Note that succ is just the underlying function of the map in observation 2.3. We may also show that they're inverse to each other, because substitution commutes with path composition, say, we have the following path equivalences:
subst \(A\;p\) (subst \(A\;q\;u\)) = subst \(A\;(p\cdot q)\;u\)
subst \(A\;\text{idp}\;u\) = \(u\)
These lemmas directly lead to the desired results.
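Spelling this out (and additionally using the standard fact that a path composed with its inverse is propositionally idp), we get, for any \(x\) : Int:

\[\text{pred (succ }x)=\text{subst PreInt (sym path) (subst PreInt path }x)=\text{subst PreInt (sym path}\cdot\text{path) }x=\text{subst PreInt idp }x=x\,,\]

and symmetrically for succ (pred \(x\)).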
**Remark 2.5**.: In cubical type theory one can try to write this:
\[\text{succ}_{\text{CT}}:(i:\mathbb{I})\rightarrow\text{PreInt}\;(\text{path }i)\]
It is difficult to make sense of this thing at this point, because the boundary information needs to be taken care of, and we do not yet know where this should happen. This will be addressed later.
## 3. Relation to circle
Observe that the definition T is identical to the circle \(S^{1}\), so we can rewrite the above definition with some renamings:
open data \(S^{1}\)
| base
| loop : base = base
open data Helix \(S^{1}\)
| base \(\Rightarrow\) zero
def Int \(\Rightarrow\) Helix base
In essence, this definition of integers is directly related to the path space of \(S^{1}\). To make sense of this, we first refer to the famous theorem in homotopy type theory [10]:
\[(\text{base}=\text{base})\simeq\mathbb{Z}\]
The proof is by showing that, for \(x:S^{1}\), the following type applied by \(x\) is equivalent to the path space of \(S^{1}\) based at \(x\):
def Code \(S^{1}\) : Type
| base \(\Rightarrow\) Z
| loop \(i\) \(\Rightarrow\) univalence succEquiv \(i\)
And the two directions of the theorem are called _encode_ and _decode_ respectively:
\[\text{encode}:(x:S^{1})\rightarrow(\text{base}=x)\rightarrow\text{Code}(x)\]
\[\text{decode}:(x:S^{1})\rightarrow\text{Code}(x)\rightarrow(\text{base}=x)\]
The fact that encode and decode are inverses of each other can be illustrated in the following way:
* The base point corresponds to Code(base) \(\equiv\) Z,
* The loop \(i\) corresponds to univalence succEquiv \(i\), the path obtained from applying succEquiv to univalence.
**Remark 3.1**.: We have omitted the following definitions for spatial considerations4:
Footnote 4: In fact, we haven’t finish implementing univalence in Aya yet.
* \(Z\) is the type of integers. This is not to be confused with Int, which is a concrete definition given above. Here \(Z\) can be any definition of integers, preferably a simple, intuitive one.
* succ is the successor function on \(Z\). Every integer type should have such a function.
* succEquiv is an equivalence between \(Z\) and itself, given by succ.
* univalence takes an equivalence and returns a path.
Another purpose of this omission is to encourage readers to think about integers abstractly.
Also, Aya uses the cubical definition of path, which is similar to a function from the interval type. That's also why the pattern for loop has a parameter. Curious readers can refer to [11] for more details.
**Observation 3.2**.: In the definition of Code, we put \(Z\), and an equivalence on it that corresponds to succ, in the function bodies. As observed in observation 2.2, this is the essential information we want on integers.
By encode-decode of \(S^{1}\), we know that Code is equivalent to the path space of \(S^{1}\). So, if somehow we can make the path space of \(S^{1}\) as the _definition_ of integers, then it will be really nice.
## 4. Fording indexed families
Henry Ford is quoted as having said: "Any color the customer wants, as long as it's black."
An analogous method is used to encode indexed families with a built-in identity type: suppose we have a family \(F:A\to\mathcal{U}\) with a constructor \(c:X\to F\)_a_ for some expression \(a:A\) (usually in neutral form), Fording turns it into the following:
\[c_{\text{Ford}}:(a^{\prime}:A)\to(a=a^{\prime})\to X\to F\;a^{\prime}\]
And now \(F\) is no longer _indexed_.
**Demonstration 4.1**.: Consider the following indexed family:
open data Bool
| true
| false
open data So Bool
| true \(\Rightarrow\) oh
We have:
* oh : So true is the unique inhabitant,
* So false is the empty type.
This type is extensively used in the Idris language (as Data.So5), where it lifts any Boolean-valued test to a type-level proposition. Fording it gives us the following type:
open data So (\(b\) : Bool)
| oh (\(b\) = true)
Then we can see directly that So \(b\) is nothing but a wrapper of \(b=\text{true}\).
Footnote 5: [https://www.idris-lang.org/docs/idris2/current/base_docs/docs/Data.So.html](https://www.idris-lang.org/docs/idris2/current/base_docs/docs/Data.So.html)
**Exercise 4.2**.: Ford the standard definition of finite sets (called \(\mathsf{Fin}\) in Idris) and the sized vector type.
Fording helps us get rid of index unification in pattern matching. Suppose we want to split on the type \(F\)\(b\) for some \(b:A\), then if we write the pattern \(c\)\(x\), it must be the case that \(b\) and \(a\) can be _unified_, otherwise this pattern matching will
be _rejected_. In case \(b\) is a function call that stuck on some argument, this unification is very unlikely to succeed.
**Demonstration 4.3**.: We cannot perform pattern matching on a type So (isEmpty \(x\)) where \(x\) : List \(A\) is a variable. \(\clubsuit\)
**Remark 4.4**.: In traditional dependent pattern matching [12], writing oh as a pattern for the type So \(x\) (when \(x\) is a variable) will _rewrite_\(x\) as true in the right-hand side of the clause:
\(\begin{array}{l}\mbox{f : }(x:\mbox{Bool})\to\mbox{So }x\to x=\mbox{true}\\ \mbox{f \_ oh = refl}\end{array}\)
We assume refl to be the constructor of the inductively-defined identity type.
This is a desirable feature because it matches the intuition that "only when \(x\) is true can we construct oh as a proof of So \(x\)".
After Fording, we can do the same by writing oh refl:
\(\begin{array}{l}\mbox{f : }(x:\mbox{Bool})\to\mbox{So }x\to x=\mbox{true}\\ \mbox{f \_ (oh refl) = refl}\end{array}\)
But what's good about Fording is that we can _delay_ this rewriting to the moment when we actually need it, especially in case \(x\) is not a variable. In fact, in the above case, we don't even do any dependent pattern matching:
\(\begin{array}{l}\mbox{f : }(x:\mbox{Bool})\to\mbox{So }x\to x=\mbox{true}\\ \mbox{f \_ (oh }p)=p\end{array}\)
This also prevents over-eager rewriting of variables, which can potentially lead to unreadable compiler output when doing type-directed development, or slow down proof search. \(\clubsuit\)
Now, let's apply this to the definition of integers:
\(\begin{array}{l}\mbox{open data }S^{1}\\ \mbox{| base}\\ \mbox{| loop : base = base}\\ \mbox{open data }\mbox{Helix }(x:S^{1})\\ \mbox{| zero }(x=\mbox{base})\\ \mbox{def }\mbox{Int}\Rightarrow\mbox{Helix }\mbox{base}\end{array}\)
**Observation 4.5**.: The type Helix is simply a wrapper of the path space of \(S^{1}\) based at a point given by the parameter \(x:S^{1}\).
**Observation 4.6**.: The type Int, when we look through the wrapper Helix, is an alias to the type base = base.
The same idea applies to any inhabited type. Consider such a case \(\langle H:\mathcal{U},x:H\rangle\). The path space of \(H\) based at \(x:H\) can be defined as \(P:H\to\mathcal{U}\) with a constructor \(c:P\ x\).
Fording also reveals the mystery behind the constructor succ\({}_{T}\) in remark 2.5 by expanding it to:
\(\begin{array}{l}\mbox{open data }\mbox{Helix' }(x:S^{1})\\ \mbox{| zero }(x=\mbox{base})\\ \mbox{| succ }(i:I)\ (x=\mbox{loop }i)\end{array}\)
And we know how to read this type in cubical type theory: it's a square!
So the constructor is just parameterized by a square in a peculiar way.
## 5. Further discussion on syntax
In observation 2.2, we have given an example of type that is _freely generated_ by a constructor and an equivalence on the type, where the equivalence is a constructor equipped with the property of being an equivalence (called isEquiv in various formalization projects).
Even though this can only be expressed by expanding the definition of equivalence and will end up giving us loads of constructors, it is a powerful notation for informal writings.
**Demonstration 5.1**.: A real-world use case is in [13, SS2], which cites a result from [14] that certain path space of pushout is freely generated by reflexivity and an equivalence. The pushout discussed in the paper is defined for a span, which includes \(A:\mathcal{U},B:\mathcal{U},R:A\to B\to\mathcal{U}\), and we write \(A+_{R}B\) for the pushout, and
\[\operatorname{inl}:A\to A+_{R}B\] \[\operatorname{inr}:B\to A+_{R}B\]
as two of its constructors. The path spaces of interest are:
\[\operatorname{inl}(a_{0})=\operatorname{inl}(a)\quad\operatorname{inl}(a_{0}) =\operatorname{inr}(b)\quad\operatorname{inr}(b)=\operatorname{inr}(b^{\prime})\]
And the constructors for these types are:
\[\begin{array}{l}\text{data PathSpaces }(A\ B:\text{Type})\ (R:A\to B\to\text{Type})\text{ where}\\ \text{ref}:(a_{0}:A)\to\operatorname{inl}(a_{0})=\operatorname{inl}(a_{0})\\ \text{eq}:(a_{0}\ a:A)\to(b:B)\to R\ a\ b\to\\ (\operatorname{inl}(a_{0})=\operatorname{inl}(a))\simeq(\operatorname{inl}(a_{ 0})=\operatorname{inr}(b))\end{array}\]
Without homotopy type theory, it should be quite difficult to describe this path space, and even if it's written by someone, it's less likely to be understood by others compared to this one. \(\blacktriangle\)
|
2309.04394 | Phenomenological study of two minor zeros in neutrino mass matrix using
trimaximal mixing | We study the phenomenological implications of two minor zeros in neutrino
mass matrix using trimaximal mixing matrix. In this context, we analyse fifteen
possible cases of two minor zeros in neutrino mass matrix and found only two
cases, namely Class $A_1$ and Class $A_2$, that are compatible with the present
neutrino oscillation data. We present correlations of several neutrino
oscillation parameters and give prediction of the total neutrino mass, the
values of effective Majorana mass, the effective electron anti-neutrino mass
and CP violating Majorana phases for these two classes. We also explore the
degree of fine tuning in the elements of neutrino mass matrix for the allowed
classes. Moreover, we propose a flavor model within the seesaw model along with
$Z_{8}$ symmetry group that can generate such classes. | Iffat Ara Mazumder, Rupak Dutta | 2023-09-08T15:47:16Z | http://arxiv.org/abs/2309.04394v2 | # Phenomenological study of two minor zeros in neutrino mass matrix using trimaximal mixing
###### Abstract
We study the phenomenological implications of two minor zeros in the neutrino mass matrix using the trimaximal mixing matrix. In this context, we analyse fifteen possible cases of two minor zeros in the neutrino mass matrix and find only two cases, namely class \(A_{1}\) and class \(A_{2}\), that are compatible with the present neutrino oscillation data. We present correlations of several neutrino oscillation parameters and give predictions of the total neutrino mass, the values of the effective Majorana mass, the effective electron anti-neutrino mass and the CP violating Majorana phases for these two classes. We also explore the degree of fine tuning in the elements of the neutrino mass matrix. Moreover, we propose a flavor model within the seesaw model along with the \(Z_{8}\) symmetry group to generate these classes.
pacs: 14.60.Pq, 14.60.St, 23.40.s
## I Introduction
Although no new particles beyond the Standard Model (SM) have been discovered in any experiment so far, the neutrino oscillation phenomena observed in several experiments [1; 2] indirectly confirm the existence of a more global theory beyond the SM. The non-zero neutrino mass confirmed by neutrino oscillation experiments cannot be explained by the SM, and its origin is one of the most fundamental questions in particle physics today. In the flavor basis where the charged lepton mass matrix is diagonal, the neutrino mass matrix \(M\) can be parametrized in terms of nine free parameters, namely three masses (\(m_{1}\), \(m_{2}\), \(m_{3}\)), three mixing angles (\(\theta_{12}\), \(\theta_{13}\), \(\theta_{23}\)), two Majorana CP violating phases (\(\alpha\), \(\beta\)) and a Dirac CP violating phase (\(\delta\)). Although the mixing angles and the mass squared differences are known to a very good accuracy from currently available experimental data, there are still many open questions in the neutrino sector. For instance, the absolute mass scale of the three neutrinos, the sign of \(\Delta m^{2}_{32}\), whether neutrinos are Dirac or Majorana particles, and the three CP violating phases are still unknown. With the experimental values of the mass squared differences and the mixing angles as input parameters, we still have four unknown parameters in the neutrino mass matrix \(M\). There exist several schemes in the literature by which one can reduce the number of free parameters in the neutrino mass matrix. The general idea is to assume that some matrix elements are dependent on other matrix elements. For instance, one can assume a certain matrix element to be zero, or a certain combination of elements of the neutrino mass matrix to be zero, which can be caused by some underlying flavor symmetry.
The exact mechanism of the origin of neutrino masses is still unknown. Out of many proposed theoretical models, the seesaw mechanism of either type-I or type-II looks more promising. The seesaw mechanism not only helps in understanding the scale of neutrino mass but also provide the necessary ingredients to realize texture zero patterns in the neutrino mass matrix. Within the framework of Type-I seesaw mechanism [3; 4], the neutrino mass matrix is written as
\[M=-M_{D}M_{R}^{-1}M_{D}^{T}\,, \tag{1}\]
where \(M_{D}\) is the \(3\times 3\) Dirac mass matrix that links (\(\nu_{e}\), \(\nu_{\mu}\), \(\nu_{\tau}\)) to their right handed singlet counterpart and \(M_{R}\) is the \(3\times 3\) Majorana mass matrix of the right handed neutrinos. It is worth mentioning that by considering the suitable position of zeros in \(M_{D}\) and \(M_{R}\), one can, in principle, get the desired zero textures of the neutrino mass matrix. It should also be noted that zeros in \(M_{D}\) and \(M_{R}\) not only give rise to zeros in the mass matrix but also give rise to zero minors in the neutrino mass matrix. It was also pointed out in Ref. [5; 6] that if \(M_{D}\) is diagonal, then zeros in \(M_{R}\) can give rise to minor zero pattern in the neutrino mass matrix \(M\).
There exist numerous works in the literature where the form and texture of the neutrino mass matrix have been explored. It was found that textures with more than two zeros in the neutrino mass matrix cannot accommodate the latest neutrino oscillation data. The phenomenology of two texture zeros [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19] has been explored extensively in the literature with the Pontecorvo Maki Nakagawa Sakata (PMNS) and trimaximal (TM) mixing matrices. It should be noted that out of fifteen possible cases of two texture zeros, only seven patterns are favored by the recent experimental data [7] in the case of the PMNS mixing matrix. With the TM mixing matrix, however, only two patterns are found to be compatible with the oscillation data [8; 17]. In Ref. [10], the authors investigated viable textures with two zeros in the inverted neutrino mass matrix and seven such patterns were shown to be allowed. However, these textures do not apply in the case of a non-invertible neutrino mass matrix \(M\). Also, in Refs. [20; 21], the authors have studied the phenomenological implications of one texture zero in the neutrino mass matrix and found all six possible patterns to be compatible with the data. Moreover, in Refs. [5; 6; 22; 23; 24; 25] and Refs. [26; 27; 28; 29; 30; 31] the authors have explored the phenomenology of vanishing minors and cofactor zeros in the neutrino mass matrix. The classes of one and two independent zero-minor textures in the neutrino mass matrix were explored in Refs. [5; 22] with the PMNS mixing matrix. It was shown that out of fifteen possible two-minor-zero patterns, only seven patterns are viable, and all six one-minor-zero patterns are compatible with the data. Similarly, for the TM mixing matrix along with the one-minor-zero condition [6], all six possible patterns are found to be allowed. In Ref. [24], the authors have used the tribimaximal (TB) mixing matrix and showed that five classes of texture zeros or vanishing minors can accommodate the neutrino oscillation data.
The most widely studied lepton flavor mixing is the tribimaximal (TB) mixing pattern [32; 33; 34; 35]. Although the TB mixing matrix correctly predicted the value of the atmospheric mixing angle \(\theta_{23}\) and the solar mixing angle \(\theta_{12}\), it failed to explain a non-zero value of the reactor mixing angle \(\theta_{13}\), which was confirmed by several experiments such as T2K [36], MINOS [37], Double Chooz [38] and Daya Bay [39]. In order to accommodate a non-zero value of \(\theta_{13}\), the TM mixing matrix was constructed by multiplying the TB mixing matrix by a unitary matrix [40; 41; 42]. If the first column of the TM mixing matrix is identical to the first column of the TB mixing matrix, it is named the TM\({}_{1}\) mixing matrix, and if the second column is identical to that of the TB mixing matrix it is called the TM\({}_{2}\) mixing matrix. In this paper we analyze the compatibility of two-minor-zero neutrino Majorana textures with the recent experimental data using the TM mixing matrix. By using two minor zero conditions one can reduce the number of free parameters in the model. We analyse fifteen possible cases of two minor zeros in the neutrino mass matrix and find only two cases, namely class \(A_{1}\) and class \(A_{2}\), that are compatible with the present neutrino oscillation data. In order to study the detailed features of the mass matrix elements we first perform a \(\chi^{2}\) analysis of the allowed two-minor-zero textures. We use five observables in our \(\chi^{2}\) analysis, namely the three mixing angles and the two mass squared differences. We do not use the Dirac CP violating phase \(\delta\) in our \(\chi^{2}\) analysis as its value has not been measured yet. We also find the degree of fine tuning in the neutrino mass matrix elements for each allowed texture. Moreover, we give our model predictions of the unknown Dirac and Majorana CP violating phases, the total neutrino mass, the effective Majorana mass and the effective electron antineutrino mass for these two classes.
We organize the paper as follows. We start with the TM\({}_{1}\) and TM\({}_{2}\) mixing matrices and find the elements of the neutrino mass matrix in Section II. We then write down \(\theta_{13}\), \(\theta_{23}\), \(\theta_{12}\), the Dirac CP violating phase \(\delta\), the effective electron anti-neutrino mass \(m_{\nu}\) and the effective Majorana mass term \(M_{ee}\) in terms of the unknown parameters \(\theta\) and \(\phi\) of the TM mixing matrix. In Section III, we describe the formalism of two minor zeros in the neutrino mass matrix and identify all the possible cases of two minor zeros in the neutrino mass matrix. In Section IV we discuss the phenomenology of the allowed two-minor-zero classes and also obtain the best fit values of all the oscillation parameters from our \(\chi^{2}\) analysis. We report the fine-tuning of the neutrino mass matrix in Section V. In Section VI, we present the symmetry realization and conclude in Section VII.
## II Neutrino mass matrix
TM\({}_{1}\) and TM\({}_{2}\) mixing matrix, constructed by multiplying the TB mixing matrix by an unitary matrix, can be written as
\[U_{TM_{1}}=\begin{pmatrix}\sqrt{\frac{2}{3}}&\frac{1}{\sqrt{3}}\cos\theta& \frac{1}{\sqrt{3}}\sin\theta\\ -\frac{1}{\sqrt{6}}&\frac{\cos\theta}{\sqrt{3}}-\frac{e^{i\phi}\sin\theta}{ \sqrt{2}}&\frac{\sin\theta}{\sqrt{3}}+\frac{e^{i\phi}\cos\theta}{\sqrt{2}}\\ -\frac{1}{\sqrt{6}}&\frac{\cos\theta}{\sqrt{3}}+\frac{e^{i\phi}\sin\theta}{ \sqrt{2}}&\frac{\sin\theta}{\sqrt{3}}-\frac{e^{i\phi}\cos\theta}{\sqrt{2}} \end{pmatrix}. \tag{2}\]
and
\[U_{TM_{2}}=\begin{pmatrix}\sqrt{\frac{2}{3}}\cos\theta&\frac{1}{\sqrt{3}}&\sqrt{\frac{2}{3}}\sin\theta\\ -\frac{\cos\theta}{\sqrt{6}}+\frac{e^{-i\phi}\sin\theta}{\sqrt{2}}&\frac{1}{\sqrt{3}}&-\frac{\sin\theta}{\sqrt{6}}-\frac{e^{-i\phi}\cos\theta}{\sqrt{2}}\\ -\frac{\cos\theta}{\sqrt{6}}-\frac{e^{-i\phi}\sin\theta}{\sqrt{2}}&\frac{1}{\sqrt{3}}&-\frac{\sin\theta}{\sqrt{6}}+\frac{e^{-i\phi}\cos\theta}{\sqrt{2}}\end{pmatrix}. \tag{3}\]
where \(\theta\) and \(\phi\) are two free parameters. In the flavor basis where the charged lepton mass matrix is diagonal, the symmetric neutrino mass matrix \(M\) can be expressed as
\[(M)_{\rho\sigma}=(V\,M_{diag}\,V^{T})_{\rho\sigma}\,\,\text{with}\,\,\rho\,, \sigma=e\,,\mu\,,\tau\,, \tag{4}\]
where \(M_{diag}=\text{diag}(m_{1},m_{2},m_{3})\) is the diagonal matrix containing three mass states. The lepton flavor mixing matrix \(V\) can be expressed as \(V=U_{TM}\,P\), where the diagonal phase matrix \(P\) can be written as
\[P=\begin{pmatrix}1&0&0\\ 0&e^{i\alpha}&0\\ 0&0&e^{i\beta}\end{pmatrix} \tag{5}\]
that contains two CP violating Majorana phases \(\alpha\) and \(\beta\).
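As a numerical illustration of Eq. 4 and Eq. 5, the mass matrix can be assembled as in the rough Python sketch below (the function name and argument layout are ours):

```python
# Sketch of Eq. (4): M = V M_diag V^T with V = U_TM P, where P carries the
# two Majorana phases of Eq. (5). U is a 3x3 (complex) trimaximal matrix.
import numpy as np

def mass_matrix(U: np.ndarray, m1: float, m2: float, m3: float,
                alpha: float, beta: float) -> np.ndarray:
    P = np.diag([1.0, np.exp(1j * alpha), np.exp(1j * beta)])
    V = U @ P
    return V @ np.diag([m1, m2, m3]) @ V.T   # transpose, not dagger (Majorana)
```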
We can express the neutrino oscillation parameters such as the three mixing angles \(\theta_{12}\), \(\theta_{23}\), \(\theta_{13}\), the Jarlskog invariant \(J\)[43] and the Dirac CP violating phase \(\delta\) in terms of the two unknown parameters \(\theta\) and \(\phi\) of the trimaximal mixing matrix. For TM\({}_{1}\) mixing matrix, we have
\[(s_{12}^{2})_{TM_{1}}=\frac{|(U_{12})_{TM_{1}}|^{2}}{1-|(U_{13}) _{TM_{1}}|^{2}}=1-\frac{2}{3-\sin^{2}\theta}\,,\] \[(s_{23}^{2})_{TM_{1}}=\frac{|(U_{23})_{TM_{1}}|^{2}}{1-|(U_{13}) _{TM_{1}}|^{2}}=\frac{1}{2}\Big{(}1+\frac{\sqrt{6}\sin 2\theta\cos\phi}{3- \sin^{2}\theta}\Big{)}\,,\] \[(s_{13}^{2})_{TM_{1}}=|(U_{13})_{TM_{1}}|^{2}=\frac{1}{3}\sin^{2} \theta\,,\] \[J_{TM_{1}}=\frac{1}{6\sqrt{6}}\sin 2\theta\sin\phi\,,\] \[\csc^{2}\delta=\csc^{2}\phi-\frac{6\sin^{2}2\theta\cot^{2}\phi}{( 3-\sin^{2}\theta)^{2}} \tag{6}\]
and for TM\({}_{2}\) mixing matrix, we have
\[(s_{12}^{2})_{TM_{2}}=\frac{|(U_{12})_{TM_{2}}|^{2}}{1-|(U_{13}) _{TM_{2}}|^{2}}=\frac{1}{3-2\sin^{2}\theta}\,,\] \[(s_{23}^{2})_{TM_{2}}=\frac{|(U_{23})_{TM_{2}}|^{2}}{1-|(U_{13}) _{TM_{2}}|^{2}}=\frac{1}{2}\Big{(}1+\frac{\sqrt{3}\sin 2\theta\cos\phi}{3- 2\sin^{2}\theta}\Big{)}\,,\] \[(s_{13}^{2})_{TM_{2}}=|(U_{13})_{TM_{2}}|^{2}=\frac{2}{3}\sin^{2} \theta\,,\] \[J_{TM_{2}}=\frac{1}{6\sqrt{3}}\sin 2\theta\sin\phi\,,\] \[\csc^{2}\delta=\csc^{2}\phi-\frac{3\sin^{2}2\theta\cot^{2}\phi}{( 3-2\sin^{2}\theta)^{2}}\,, \tag{7}\]
where \(s_{ij}=\sin\theta_{ij}\) and \(c_{ij}=\cos\theta_{ij}\) for \(i,j=1,2,3\).
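For instance, the TM\({}_{1}\) relations of Eq. 6 translate into the small Python sketch below, which returns the predicted \(\sin^{2}\theta_{12}\), \(\sin^{2}\theta_{23}\), \(\sin^{2}\theta_{13}\) and \(J\) for given \(\theta\) and \(\phi\) (in radians); the TM\({}_{2}\) case of Eq. 7 is analogous:

```python
# Sketch of Eq. (6): TM1 predictions as functions of the two free parameters.
import numpy as np

def tm1_observables(theta: float, phi: float):
    s13sq = np.sin(theta) ** 2 / 3.0
    s12sq = 1.0 - 2.0 / (3.0 - np.sin(theta) ** 2)
    s23sq = 0.5 * (1.0 + np.sqrt(6.0) * np.sin(2.0 * theta) * np.cos(phi)
                   / (3.0 - np.sin(theta) ** 2))
    jarlskog = np.sin(2.0 * theta) * np.sin(phi) / (6.0 * np.sqrt(6.0))
    return s12sq, s23sq, s13sq, jarlskog
```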
There are several experiments that can, in principle, put constraints on the neutrino mass scale. The \(\beta\) decay experiment performed at the Karlsruhe Tritium Neutrino (KATRIN) experiment can measure the effective electron antineutrino mass by studying the endpoint region of the \(\beta\) decay spectrum. The current upper bound on the effective electron antineutrino mass is reported to be \(m_{\nu}<0.8\,\text{eV}\) at \(90\%\) confidence level [44]. In the future, KATRIN is expected to reach a mass sensitivity of \(0.2\,\text{eV}\). Next generation \(\beta\) decay experiments such as the Project 8 experiment [45] are designed to reach a mass sensitivity of \(0.04\,\text{eV}\). Similarly, the effective Majorana mass term \(|M_{ee}|\) can be obtained from neutrinoless double beta decay experiments. The current upper bound on the value of \(M_{ee}\) reported by the GERDA experiment, obtained using the \({}^{76}\)Ge isotope, is \(M_{ee}<(0.079-0.18)\,\text{eV}\)[46]. The KamLAND-Zen and EXO-200 experiments [47; 48] on the \({}^{136}\)Xe isotope reported upper limits on \(M_{ee}\) of \(M_{ee}<(0.061-0.165)\,\text{eV}\) and \(M_{ee}<(0.09-0.29)\,\text{eV}\), respectively. It is expected that next generation experiments can reach a mass sensitivity of \((5-20)\,\text{meV}\). There are also several results for the total neutrino mass coming from various cosmological observations. The Planck satellite reported an upper limit on the total neutrino mass, combining BAO data with CMB data, of \(\sum\,m_{i}<0.12\,\text{eV}\) at \(95\%\) confidence level [49]. The most stringent limit on the absolute neutrino mass is obtained by combining CMB lensing and galaxy clustering data and is found to be \(\sum\,m_{i}<0.09\,\text{eV}\)[50].
We can write the effective Majorana mass \(|M_{ee}|\) for the TM\({}_{1}\) and TM\({}_{2}\) mixing matrix as
\[|M_{ee}|_{TM_{1}}=\Big{|}\frac{1}{3}(2m_{1}+m_{2}\cos^{2}\theta e^{2i\alpha}+m_{3 }\sin^{2}\theta e^{2i\beta})\Big{|}. \tag{8}\]
\[|M_{ee}|_{TM_{2}}=\Big{|}\frac{1}{3}(2m_{1}\cos^{2}\theta+m_{2}e^{2i\alpha}+2m_ {3}\sin^{2}\theta e^{2i\beta})\Big{|}. \tag{9}\]
The effective electron anti-neutrino mass \(m_{\nu}\) for both TM\({}_{1}\) and TM\({}_{2}\) can be expressed as
\[(m_{\nu}^{2})_{TM_{1}}=\sum_{i=1}^{3}U_{ie}^{2}=\frac{1}{3}(2m_{1}^{2}+m_{2}^{2 }\cos^{2}\theta+m_{3}^{2}\sin^{2}\theta). \tag{10}\]
\[(m_{\nu}^{2})_{TM_{2}}=\frac{1}{3}(2m_{1}^{2}+m_{2}^{2}\cos^{2}\theta+2m_{3}^{ 2}\sin^{2}\theta). \tag{11}\]
The most stringent constraint on the neutrino masses comes from the ratio of the absolute values of the solar and atmospheric mass squared differences characterized by
\[r\equiv\Big{|}\frac{\Delta m_{21}^{2}}{\Delta m_{32}^{2}}\Big{|}=\Big{|}\frac {1-(\frac{m_{1}}{m_{2}})^{2}}{(\frac{m_{3}}{m_{2}})^{2}-1}\Big{|}\,, \tag{12}\]
where \(\Delta m_{21}^{2}\) and \(\Delta m_{32}^{2}\) represent solar and atmospheric mass squared differences, respectively.
## III Two Minor Zeros in Neutrino Mass Matrix
The neutrino mass matrix constructed using the trimaximal mixing matrix is a \(3\times 3\) symmetric matrix and has six independent entries. Hence we have six independent minors corresponding to the independent entries in the mass matrix. There are a total of \({}^{6}C_{2}=15\) possible ways to have two minor zeros in the mass matrix. All fifteen possible patterns of two minor zeros in the neutrino mass matrix are listed in Table 1. We denote the minor corresponding to the \(ij^{th}\) element \(M_{ij}\) as \(C_{ij}\). For completeness, we report all the elements of the neutrino mass matrix for the TM\({}_{1}\) and TM\({}_{2}\) mixing matrices in Eq. 11 and Eq. 12 of appendix A.
In terms of neutrino mass matrix elements, the conditions for two minor zero can be written as
\[M_{a\,b}M_{c\,d}-M_{u\,v}M_{w\,x}=0\,,\] \[M_{a^{\prime}\,b^{\prime}}M_{c^{\prime}\,d^{\prime}}-M_{u^{\prime}\,v^{\prime}}M_{w^{\prime}\,x^{\prime}}=0\,. \tag{13}\]
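Numerically, a two-minor-zero condition such as that of class \(A_{1}\) in Table 1 can be checked with a few lines of Python (an illustrative sketch, with 0-based indices):

```python
# C_ij is the determinant of the 2x2 submatrix of M obtained by deleting
# row i and column j; Eq. (13) demands that two chosen minors vanish.
import numpy as np

def minor(M: np.ndarray, i: int, j: int) -> complex:
    sub = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def is_class_A1(M: np.ndarray, tol: float = 1e-9) -> bool:
    # Class A1: C_33 = 0 and C_32 = 0, i.e. (row, col) = (2, 2) and (2, 1).
    return abs(minor(M, 2, 2)) < tol and abs(minor(M, 2, 1)) < tol
```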
\begin{table}
\begin{tabular}{|c|c|} \hline Class & Constraining equations \\ \hline \(A_{1}\) & \(C_{33}=0\),\(C_{32}=0\) \\ \hline \(A_{2}\) & \(C_{22}=0\),\(C_{32}=0\) \\ \hline \(B_{3}\) & \(C_{33}=0\),\(C_{31}=0\) \\ \hline \(B_{4}\) & \(C_{22}=0\),\(C_{21}=0\) \\ \hline \(B_{5}\) & \(C_{33}=0\),\(C_{12}=0\) \\ \hline \(B_{6}\) & \(C_{22}=0\),\(C_{13}=0\) \\ \hline D & \(C_{33}=0\),\(C_{22}=0\) \\ \hline \(S_{1}\) & \(C_{31}=0\),\(C_{11}=0\) \\ \hline \(S_{2}\) & \(C_{21}=0\),\(C_{11}=0\) \\ \hline \(S_{3}\) & \(C_{13}=0\),\(C_{12}=0\) \\ \hline \(F_{1}\) & \(C_{33}=0\),\(C_{11}=0\) \\ \hline \(F_{2}\) & \(C_{22}=0\),\(C_{11}=0\) \\ \hline \(F_{3}\) & \(C_{32}=0\),\(C_{11}=0\) \\ \hline \(F_{4}\) & \(C_{31}=0\),\(C_{32}=0\) \\ \hline \(F_{5}\) & \(C_{21}=0\),\(C_{32}=0\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Two minor zero patterns.
We can write Eq. 13 in terms of a complex equation as
\[m_{1}m_{2}X_{3}e^{2i\alpha}+m_{2}m_{3}X_{1}e^{2i(\alpha+\beta)}+m_{3 }m_{1}X_{2}e^{2i\beta}=0\,,\] \[m_{1}m_{2}Y_{3}e^{2i\alpha}+m_{2}m_{3}Y_{1}e^{2i(\alpha+\beta)}+m _{3}m_{1}Y_{2}e^{2i\beta}=0\,. \tag{14}\]
where
\[X_{k}=(U_{ai}U_{bi}U_{cj}U_{dj}-U_{ui}U_{vi}U_{wj}U_{xj})+(i \leftrightarrow j)\,,\] \[Y_{k}=(U_{a^{\prime}i^{\prime}}U_{b^{\prime}i^{\prime}}U_{c^{ \prime}j}U_{d^{\prime}j}-U_{u^{\prime}i^{\prime}}U_{v^{\prime}i^{\prime}}U_{w ^{\prime}j}U_{x^{\prime}j})+(i\leftrightarrow j)\,. \tag{15}\]
with \((i,j,k)\) as the cyclic permutation of \((1,2,3)\). Using Eq. 14, one can write the mass ratios as
\[\frac{m_{1}}{m_{2}}e^{-2i\alpha}=\frac{X_{3}Y_{1}-X_{1}Y_{3}}{X_ {2}Y_{3}-X_{3}Y_{2}}\,,\] \[\frac{m_{1}}{m_{3}}e^{-2i\beta}=\frac{X_{1}Y_{2}-X_{2}Y_{1}}{X_{2 }Y_{3}-X_{3}Y_{2}}\,,\] \[\frac{m_{3}}{m_{2}}e^{-2i(\alpha-\beta)}=\frac{X_{3}Y_{1}-X_{1}Y_ {3}}{X_{1}Y_{2}-X_{2}Y_{1}}\,. \tag{16}\]
Similarly, the CP-violating Majorana phases can be written as
\[\alpha=-\frac{1}{2}\arg(\frac{X_{3}Y_{1}-X_{1}Y_{3}}{X_{2}Y_{3}-X _{3}Y_{2}})\,,\] \[\beta=-\frac{1}{2}\arg(\frac{X_{1}Y_{2}-X_{2}Y_{1}}{X_{2}Y_{3}-X _{3}Y_{2}})\,. \tag{17}\]
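In code, Eq. 16 and Eq. 17 amount to a few complex-number operations on the coefficients \(X_{k}\) and \(Y_{k}\) (an illustrative sketch):

```python
# Sketch of Eqs. (16)-(17): mass ratios and Majorana phases from X_k, Y_k.
import numpy as np

def ratios_and_phases(X, Y):
    x1, x2, x3 = X
    y1, y2, y3 = Y
    r12 = (x3 * y1 - x1 * y3) / (x2 * y3 - x3 * y2)  # (m1/m2) e^{-2i alpha}
    r13 = (x1 * y2 - x2 * y1) / (x2 * y3 - x3 * y2)  # (m1/m3) e^{-2i beta}
    alpha = -0.5 * np.angle(r12)
    beta = -0.5 * np.angle(r13)
    return abs(r12), abs(r13), alpha, beta
```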
The values of \(m_{1}\), \(m_{2}\) and \(m_{3}\) can be calculated using Eq. 16 and the mass squared differences \(\Delta m_{21}^{2}\) and \(\Delta m_{32}^{2}\). That is
\[m_{1}=\sqrt{\Delta m_{21}^{2}}\sqrt{\frac{\left|\frac{m_{1}}{m_{2}}\right|^{2}}{\left|1-\left|\frac{m_{1}}{m_{2}}\right|^{2}\right|}}\,,\] \[m_{2}=\sqrt{|\Delta m_{32}^{2}|}\sqrt{\frac{1}{\left|\left|\frac{m_{3}}{m_{2}}\right|^{2}-1\right|}}\,,\] \[m_{3}=\sqrt{|\Delta m_{32}^{2}|}\sqrt{\frac{1}{\left|1-\left|\frac{m_{2}}{m_{3}}\right|^{2}\right|}}\,. \tag{18}\]
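The reconstruction of Eq. 18 can then be sketched as (with \(r_{12}=|m_{1}/m_{2}|\) and \(r_{32}=|m_{3}/m_{2}|\) taken from the previous step):

```python
# Sketch of Eq. (18): absolute masses from the two ratios and the measured
# mass-squared differences (dm21sq > 0; dm32sq may carry either sign).
import numpy as np

def absolute_masses(r12: float, r32: float, dm21sq: float, dm32sq: float):
    m1 = np.sqrt(dm21sq) * np.sqrt(r12 ** 2 / abs(1.0 - r12 ** 2))
    m2 = np.sqrt(abs(dm32sq)) * np.sqrt(1.0 / abs(r32 ** 2 - 1.0))
    m3 = np.sqrt(abs(dm32sq)) * np.sqrt(1.0 / abs(1.0 - 1.0 / r32 ** 2))
    return m1, m2, m3
```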
We can now explore whether or not the chosen texture of the neutrino mass matrix is empirically acceptable. We can construct the mass matrix by using the allowed values of the experimental input parameters such as the mixing angles, mass squared differences and ratio \(r\) and test whether or not the other experimental constraints are respected. We now proceed to discuss our results.
## IV Results and discussion
Our main aim is to study the phenomenological implications of two minor zeros in the neutrino mass matrix on the total neutrino mass, the effective Majorana mass term, the effective electron anti-neutrino mass and the Majorana CP violating phases. From Eq. 17, it is clear that the CP violating Majorana phases \(\alpha\) and \(\beta\) depend on \(\theta\) and \(\phi\). Similarly, from Eq. 18, it is clear that the neutrino masses \(m_{i}\) depend not only on \(\theta\) and \(\phi\) but also on the mass squared differences \(\Delta m_{21}^{2}\) and \(\Delta m_{32}^{2}\). Moreover, all the neutrino oscillation parameters depend only on the values of \(\theta\) and \(\phi\). We first perform a \(\chi^{2}\) analysis to find the best fit values of our model parameters \(\theta\) and \(\phi\) and test the validity of our model. We define the \(\chi^{2}\) as follows:
\[\chi^{2}=\sum_{i=1}^{3}\frac{\left(\theta_{i}^{cal}-\theta_{i}^{ exp}\right)^{2}}{(\sigma_{i}^{exp})^{2}}+\sum_{j=21,32}\frac{\left(\Delta m_{j}^ {cal}-\Delta m_{j}^{exp}\right)^{2}}{(\sigma_{j}^{exp})^{2}}\,, \tag{19}\]
where \(\theta_{i}=(\theta_{12},\theta_{13},\theta_{23})\) and \(\Delta m_{j}=(\Delta m_{21}^{2}\), \(\Delta m_{32}^{2})\). Here \(\theta_{i}^{cal}\) and \(\Delta m_{j}^{cal}\) represent the calculated value of \(\theta_{i}\) and \(\Delta m_{j}\), respectively, whereas, \(\theta_{i}^{exp}\) and \(\Delta m_{j}^{exp}\) are the measured central values of \(\theta_{i}\) and \(\Delta m_{j}\), respectively. The \(\theta_{i}^{cal}\)
and \(\Delta m_{j}^{cal}\) depend on the two unknown model parameters, namely \(\theta\) and \(\phi\). The \(\sigma_{i}^{exp}\) and \(\sigma_{j}^{exp}\) are the uncertainties corresponding to the measured values of \(\theta_{i}\) and \(\Delta m_{j}\), respectively. The central values and the corresponding uncertainties of each parameter, obtained from NuFIT [51], are reported in Table 2. Besides the best fit values of \(\theta\) and \(\phi\), the \(\chi^{2}\) analysis will also return the best fit values of the neutrino oscillation parameters, i.e. the three mixing angles and the two mass squared differences, for each class of two minor zero patterns. Moreover, we use the \(3\sigma\) allowed range of \(r\) to constrain the values of the neutrino masses.
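Schematically, the fit of Eq. 19 reduces to minimizing a five-term sum of squares over \((\theta,\phi)\); a rough Python sketch is given below (the prediction function and the experimental inputs from Table 2 are to be supplied by the user, and the names are ours):

```python
# Sketch of the chi-square of Eq. (19) and its minimization over (theta, phi).
import numpy as np
from scipy.optimize import minimize

def chi_square(params, predict, exp_central, exp_sigma):
    theta, phi = params
    calc = np.asarray(predict(theta, phi))   # (th12, th23, th13, dm21, dm32)
    return np.sum(((calc - np.asarray(exp_central)) / np.asarray(exp_sigma)) ** 2)

def best_fit(predict, exp_central, exp_sigma):
    res = minimize(chi_square, x0=np.array([0.26, 1.0]),  # initial guess, radians
                   args=(predict, exp_central, exp_sigma),
                   method="Nelder-Mead")
    return res.x, res.fun
```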
### Class: \(A_{1}\)
Class \(A_{1}\) corresponds to the minor zero for the \((3,3)\) and the \((3,2)\) elements of the neutrino mass matrix. The corresponding equations satisfying two minor zero conditions can be written as
\[(M)_{ee}(M)_{\mu\mu}-(M)_{e\mu}(M)_{\mu e}=0\,,\] \[(M)_{ee}(M)_{\mu\tau}-(M)_{\mu e}(M)_{e\tau}=0\,. \tag{20}\]
Using Eq. 16, the mass ratios for \(\text{TM}_{1}\) can be expressed as
\[\frac{m_{1}}{m_{2}}e^{-2i\alpha}=\frac{-\cos\theta}{2\sqrt{3}( \frac{1}{\sqrt{3}}\cos\theta+\frac{1}{\sqrt{2}}\sin\theta e^{-i\phi})}\,,\] \[\frac{m_{1}}{m_{3}}e^{-2i\beta}=\frac{-\sin\theta}{2\sqrt{3}( \frac{1}{\sqrt{3}}\sin\theta-\frac{1}{\sqrt{2}}\cos\theta e^{-i\phi})}\,,\] \[\frac{m_{3}}{m_{2}}e^{-2i(\alpha+\beta)}=\frac{\cos\theta(\frac{1 }{\sqrt{3}}\sin\theta-\frac{1}{\sqrt{2}}\cos\theta e^{-i\phi})}{\sin\theta( \frac{1}{\sqrt{3}}\cos\theta+\frac{1}{\sqrt{2}}\sin\theta e^{-i\phi})}\,. \tag{21}\]
Similarly, for \(\text{TM}_{2}\) mixing matrix, the mass ratios can be expressed as
\[\frac{m_{1}}{m_{2}}e^{-2i\alpha}=\frac{-(\cos\theta+\sqrt{3}\sin \theta e^{i\phi})}{2\cos\theta}\,,\] \[\frac{m_{1}}{m_{3}}e^{-2i\beta}=\frac{\sin\theta(\frac{1}{\sqrt{ 6}}\cos\theta+\frac{1}{\sqrt{2}}\sin\theta e^{i\phi})}{\cos\theta(\frac{1}{ \sqrt{6}}\sin\theta-\frac{1}{\sqrt{2}}\cos\theta e^{i\phi})}\,,\] \[\frac{m_{3}}{m_{2}}e^{-2i(\alpha+\beta)}=\frac{-\sin\theta+\sqrt{ 3}\cos\theta e^{i\phi}}{2\sin\theta}\,. \tag{22}\]
The best fit values of our model parameters \(\theta\) and \(\phi\), along with all the neutrino oscillation parameters, namely the three mixing angles (\(\theta_{13}\), \(\theta_{23}\), \(\theta_{12}\)), the two mass squared differences (\(\Delta m^{2}_{21}\), \(\Delta m^{2}_{32}\)), the Jarlskog invariant \(J\) and the Dirac CP violating phase \(\delta\), obtained for this class are reported in Table 3. It is evident from Eq. 6 and Eq. 7 that \(\theta_{23}\) is invariant under the transformation \(\phi\rightarrow(2\pi-\phi)\), hence we get two best fit values of \(\phi\). Similarly, we get two values of \(J\) and \(\delta\) corresponding to the two best fit values of \(\phi\). The best fit values of the mixing angles \(\theta_{12}\), \(\theta_{13}\) and the mass squared differences \(\Delta m_{21}^{2}\), \(\Delta m_{32}^{2}\) obtained for this class are compatible with the experimentally measured values reported in Table 2. It is observed that, in case of the TM\({}_{2}\) mixing matrix, the best fit value of \(\theta_{23}\) is compatible with the experimentally measured value, but for the TM\({}_{1}\) mixing matrix the best fit value deviates significantly from the
\begin{table}
\begin{tabular}{|c|c c|c c|} \hline \hline parameter & \multicolumn{2}{c|}{Normal ordering (best fit)} & \multicolumn{2}{c|}{Inverted ordering (\(\Delta\chi^{2}=7.1\))} \\ & bfp\(\pm 1\sigma\) & \(3\sigma\) ranges & bfp\(\pm 1\sigma\) & \(3\sigma\) ranges \\ \hline \(\theta_{12}^{\circ}\) & \(33.44^{+0.77}_{-0.74}\) & \(31.27\rightarrow 35.86\) & \(33.45^{+0.77}_{-0.74}\) & \(31.27\rightarrow 35.87\) \\ \hline \(\theta_{23}^{\circ}\) & \(49.2^{+1.0}_{-1.3}\) & \(39.5\rightarrow 52.0\) & \(49.5^{+1.0}_{-1.2}\) & \(39.8\rightarrow 52.1\) \\ \hline \(\theta_{13}^{\circ}\) & \(8.57^{+0.13}_{-0.12}\) & \(8.20\rightarrow 8.97\) & \(8.60^{+0.12}_{-0.12}\) & \(8.24\rightarrow 8.98\) \\ \hline \(\delta^{\circ}\) & \(194^{+25}_{-25}\) & \(105\rightarrow 405\) & \(287^{+27}_{-32}\) & \(192\rightarrow 361\) \\ \hline \(\frac{\Delta m_{21}^{2}}{10^{-5}\,\text{eV}^{2}}\) & \(7.42^{+0.21}_{-0.20}\) & \(6.82\rightarrow 8.04\) & \(7.42^{+0.21}_{-0.20}\) & \(6.82\rightarrow 8.04\) \\ \hline \(\frac{\Delta m_{32}^{2}}{10^{-3}\,\text{eV}^{2}}\) & \(+2.515^{+0.028}_{-0.028}\) & \(+2.431\rightarrow+2.599\) & \(-2.498^{+0.028}_{-0.029}\) & \(-2.584\rightarrow-2.413\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Neutrino oscillation parameters from NuFIT [51].
experimentally measured value. Moreover, in case of TM\({}_{2}\) mixing matrix, the best fit value of \(\theta_{12}\) deviates from the measured value of \(\theta_{12}\) at more than \(2\sigma\) significance.
For the TM\({}_{1}\) mixing matrix, we use Eq. 6 and vary \(\theta_{13}\) within \(3\sigma\) from the central value and obtain the \(3\sigma\) allowed range of \(\theta\) to be \((14.26^{\circ}-15.64^{\circ})\). Using the allowed range of \(\theta\) and imposing the additional constraint coming from \(r\), we obtain the allowed ranges of \(\theta_{12}\) and \(\theta_{23}\) to be \((34.25^{\circ}-34.42^{\circ})\) and \((40.01^{\circ}-44.02^{\circ})\), respectively. It is clear that the value of \(\theta_{23}\) obtained in this case lies in the lower octant, i.e, for the TM\({}_{1}\) mixing matrix, this pattern prefers the atmospheric mixing angle to be smaller than \(\pi/4\). We show the variation of \(\theta_{23}\) as a function of the unknown parameter \(\phi\) in Fig. 1a. The corresponding best fit value of \(\theta_{23}\) is shown with '*' mark in Fig. 1a. We show the variation of \(J\) and \(\delta\) as a function of \(\phi\) in Fig. 1b and Fig. 1c, respectively. It is observed that the Jarlskog rephasing invariant \(J\) and the Dirac CP violating phase \(\delta\) are restricted to two regions. We obtain the \(3\sigma\) allowed ranges of \(J\) and \(\delta\) to be \([(-3.12\times 10^{-2},-3.43\times 10^{-2})\), \((3.12\times 10^{-2},3.43\times 10^{-2})]\) and \([(68.66,85.48)^{\circ}\), \((274.51,291.33)^{\circ}]\), respectively.
For the TM\({}_{2}\) mixing matrix, we use Eq. 7 and obtain the \(3\sigma\) allowed range of \(\theta\) to be \((10.03^{\circ}-10.99^{\circ})\). The corresponding allowed ranges of \(\theta_{12}\) and \(\theta_{23}\) are found to be \((35.68^{\circ}-35.75^{\circ})\) and \((39.00^{\circ}-50.99^{\circ})\), respectively. We show the variation of \(\theta_{23}\) as a function of the unknown parameter \(\phi\) in Fig. 2a. We also show the variation of \(J\) and \(\delta\) as a function of \(\phi\) in Fig. 2b and Fig. 2c, respectively. The \(3\sigma\) allowed ranges of \(J\) and \(\delta\) are found to be \([0,\pm 3.39\times 10^{-2}]\) and \([(0,90)^{\circ}\), \((270,360)^{\circ}]\), respectively.
Figure 1: (a) Variation of \(\theta_{23}\) as a function of \(\phi\), (b) variation of \(J\) as a function of \(\phi\), and (c) variation of \(\delta\) as a function of \(\phi\) for TM\({}_{1}\) mixing matrix. The ’\(*\)’ mark in the figures represents the best fit value.
We show the variation of the neutrino masses \(m_{1}\), \(m_{2}\) and \(m_{3}\) as a function of \(\phi\) in Fig. 3a and Fig. 4a for the TM\({}_{1}\) and TM\({}_{2}\) mixing matrix, respectively. It shows normal mass ordering for both the TM\({}_{1}\) and TM\({}_{2}\) mixing matrix. In Fig. 3b and Fig. 4b, we show the variation of \(\sum m_{i}\) as a function of \(\phi\). The correlations of \(M_{ee}\) with \(\sum m_{i}\) for the TM\({}_{1}\) and TM\({}_{2}\) mixing matrix are shown in Fig. 3c and Fig. 4c, respectively. In Fig. 3d and Fig. 4d, we show the correlation of \(m_{\nu}\) with \(\sum m_{i}\) for the TM\({}_{1}\) and TM\({}_{2}\) mixing matrix, respectively. The variation of the Majorana phases \(\alpha\) and \(\beta\) as a function of \(\phi\) is shown in Fig. 3e and Fig. 3f for the TM\({}_{1}\) mixing matrix and in Fig. 4e and Fig. 4f for the TM\({}_{2}\) mixing matrix, respectively.
The best fit values and the corresponding \(3\sigma\) allowed ranges of the absolute neutrino mass scale, the effective Majorana neutrino mass, the effective electron anti-neutrino mass and the CP violating phases \(\alpha\) and \(\beta\) are listed in Table 4. It is observed that the CP violating Majorana phases \(\alpha\) and \(\beta\) are restricted to two regions. The best fit values of \(\alpha\) and \(\beta\) are obtained to be \(\pm 9.28^{\circ}\) and \(\pm 33.48^{\circ}\), respectively, for the TM\({}_{1}\) mixing matrix, and for the TM\({}_{2}\) mixing matrix the best fit values of \(\alpha\) and \(\beta\) are \(\pm 6.62^{\circ}\) and \(\pm 16.46^{\circ}\), respectively. The upper bound on \(M_{ee}\) obtained for this class is of \(\mathcal{O}(10^{-2})\,\text{eV}\) and is within the sensitivity reach of neutrinoless double beta decay experiments. The upper bound on the effective electron anti-neutrino mass, \(m_{\nu}<0.012\,\mathrm{eV}\), is beyond the reach of current \(\beta\) decay experiments.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Mixing matrix & Values & \(\sum m_{i}\,\mathrm{(eV)}\) & \(M_{ee}\,\mathrm{(eV)}\) & \(m_{\nu}\,\mathrm{(eV)}\) & \(\alpha^{\circ}\) & \(\beta^{\circ}\) \\ \hline \multirow{2}{*}{TM\({}_{1}\)} & Best fit & 0.066 & 0.006 & 0.010 & \(\pm 9.28\) & \(\pm 33.48\) \\ \cline{2-7} & \(3\sigma\) Range & \([0.060,0.073]\) & \([0.001,0.007]\) & \([0.009,0.011]\) & \([(-10.03,-8.84),(8.84,10.03)]\) & \([(-36.79,-27.82),(27.82,36.79)]\) \\ \hline \multirow{2}{*}{TM\({}_{2}\)} & Best fit & 0.069 & 0.009 & 0.011 & \(\pm 6.62\) & \(\pm 16.46\) \\ \cline{2-7} & \(3\sigma\) Range & \([0.058,0.071]\) & \([0.005,0.010]\) & \([0.009,0.012]\) & \([0,\pm 9.18]\) & \([0,\pm 44.99]\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Best fit and \(3\sigma\) allowed range of \(\sum m_{i}\,\mathrm{(eV)}\), \(M_{ee}\,\mathrm{(eV)}\), \(m_{\nu}\,\mathrm{(eV)}\), \(\alpha^{\circ}\) and \(\beta^{\circ}\) for Class \(A_{1}\).
Figure 3: (a) Variation of \(m_{1}\), \(m_{2}\), \(m_{3}\) as a function of \(\phi\), (b) variation of \(\sum m_{i}\) as a function of \(\phi\), (c) correlation between \(\sum m_{i}\) and \(M_{ee}\), (d) correlation between \(\sum m_{i}\) and \(m_{\nu}\), (e) variation of \(\alpha\) as a function of \(\phi\), and (f) variation of \(\beta\) as a function of \(\phi\) for TM\({}_{1}\) mixing matrix.
### Class: \(A_{2}\)
For Class \(A_{2}\), the minors corresponding to the \((2,2)\) and \((3,2)\) elements of the neutrino mass matrix are zero. The minor zero conditions for this class can be written in terms of the neutrino mass matrix elements as
\[(M)_{ee}(M)_{\tau\tau}-(M)_{e\tau}(M)_{\tau e}=0\,,\] \[(M)_{ee}(M)_{\mu\tau}-(M)_{\mu e}(M)_{e\tau}=0\,. \tag{23}\]
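Equivalently, these two conditions state that the \((2,2)\) and \((3,2)\) minors of the symmetric matrix \(M\) vanish. As an illustration only (not part of the original analysis), a minimal Python check of these conditions for a complex \(3\times 3\) mass matrix supplied in the \((e,\mu,\tau)\) basis could read:

```python
import numpy as np

def minor(M, i, j):
    """Determinant of M with row i and column j removed (0-indexed)."""
    sub = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def is_class_A2(M, tol=1e-10):
    # Minor of the (2,2) element: rows/cols {e, tau}; minor of the (3,2) element: rows {e, mu}, cols {e, tau}
    return abs(minor(M, 1, 1)) < tol and abs(minor(M, 2, 1)) < tol
```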
The mass ratios for the TM\({}_{1}\) and TM\({}_{2}\) mixing matrices can be expressed as
\[\frac{m_{1}}{m_{2}}e^{-2i\alpha}=\frac{-\cos\theta}{2\sqrt{3}( \frac{1}{\sqrt{3}}\cos\theta-\frac{1}{\sqrt{2}}\sin\theta e^{-i\phi})}\,,\] \[\frac{m_{1}}{m_{3}}e^{-2i\beta}=\frac{-\sin\theta}{2\sqrt{3}( \frac{1}{\sqrt{3}}\sin\theta+\frac{1}{\sqrt{2}}\cos\theta e^{-i\phi})}\,,\] \[\frac{m_{3}}{m_{2}}e^{-2i(\alpha+\beta)}=\frac{\cos\theta(\frac{ 1}{\sqrt{3}}\sin\theta+\frac{1}{\sqrt{2}}\cos\theta e^{-i\phi})}{\sin\theta( \frac{1}{\sqrt{3}}\cos\theta-\frac{1}{\sqrt{2}}\sin\theta e^{-i\phi})}\,. \tag{24}\]
and
\[\frac{m_{1}}{m_{2}}e^{-2i\alpha}=\frac{-\cos\theta+\sqrt{3}\sin \theta e^{i\phi}}{2\cos\theta}\,,\] \[\frac{m_{1}}{m_{3}}e^{-2i\beta}=\frac{\sin\theta(\frac{1}{\sqrt{6 }}\cos\theta-\frac{1}{\sqrt{2}}\sin\theta e^{i\phi})}{\cos\theta(\frac{1}{ \sqrt{6}}\sin\theta+\frac{1}{\sqrt{2}}\cos\theta e^{i\phi})}\,,\] \[\frac{m_{3}}{m_{2}}e^{-2i(\alpha+\beta)}=\frac{-(\sin\theta+ \sqrt{3}\cos\theta e^{i\phi})}{2\sin\theta}\,. \tag{25}\]
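For illustration, the TM\({}_{1}\) mass ratios of Eq. (24) can be evaluated numerically once \((\theta,\phi)\) are specified. The following Python sketch is not part of the original analysis; the sample point is hypothetical (chosen inside the TM\({}_{1}\) range quoted below) and only shows how the magnitudes of the mass ratios and the Majorana phases \(\alpha,\beta\) follow from the complex ratios:

```python
import numpy as np

def mass_ratios_TM1_A2(theta, phi):
    """Complex ratios of Eq. (24): (m1/m2) e^{-2i alpha}, (m1/m3) e^{-2i beta},
    (m3/m2) e^{-2i(alpha+beta)}; angles in radians."""
    c, s, ph = np.cos(theta), np.sin(theta), np.exp(-1j * phi)
    r12 = -c / (2 * np.sqrt(3) * (c / np.sqrt(3) - s * ph / np.sqrt(2)))
    r13 = -s / (2 * np.sqrt(3) * (s / np.sqrt(3) + c * ph / np.sqrt(2)))
    r32 = (c * (s / np.sqrt(3) + c * ph / np.sqrt(2))
           / (s * (c / np.sqrt(3) - s * ph / np.sqrt(2))))
    return r12, r13, r32

# hypothetical sample point, for illustration only
theta, phi = np.radians(15.0), np.radians(100.0)
r12, r13, _ = mass_ratios_TM1_A2(theta, phi)
# taking m_i real and positive, |r| gives the mass ratio and -arg(r)/2 the phase
print("|m1/m2| =", abs(r12), " alpha [deg] =", -np.degrees(np.angle(r12)) / 2)
print("|m1/m3| =", abs(r13), " beta  [deg] =", -np.degrees(np.angle(r13)) / 2)
```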
Figure 4: (a) Variation of \(m_{1}\), \(m_{2}\), \(m_{3}\) as a function of \(\phi\), (b) variation of \(\sum m_{i}\) as a function of \(\phi\), (c) correlation between \(\sum m_{i}\) and \(M_{ee}\), (d) correlation between \(\sum m_{i}\) and \(m_{\nu}\), (e) variation of \(\alpha\) as a function of \(\phi\), and (f) variation of \(\beta\) as a function of \(\phi\) for TM\({}_{2}\) mixing matrix.
The best fit values of our model parameters \(\theta\) and \(\phi\), along with all the neutrino oscillation parameters, namely the three mixing angles (\(\theta_{13}\), \(\theta_{23}\), \(\theta_{12}\)), the two mass squared differences (\(\Delta m^{2}_{21}\), \(\Delta m^{2}_{32}\)), the Jarlskog invariant \(J\) and the Dirac CP violating phase \(\delta\), obtained for this class are reported in Table 5. The best fit values of the mixing angles \(\theta_{12}\), \(\theta_{13}\), \(\theta_{23}\) and the mass squared differences \(\Delta m^{2}_{21}\), \(\Delta m^{2}_{32}\) are compatible with the experimentally measured values. It should, however, be noted that the best fit value of \(\theta_{12}\) for the TM\({}_{2}\) mixing matrix differs from the experimentally measured central value at more than \(2\sigma\) significance. This is quite a generic feature of the TM\({}_{2}\) mixing matrix.
For the TM\({}_{1}\) mixing matrix, the allowed range of \(\theta\), obtained by using the \(3\sigma\) experimental range of \(\theta_{13}\) reported in Table 2, is \((14.26^{\circ}-15.64^{\circ})\). Using the allowed range of \(\theta\), we obtain the allowed ranges of \(\theta_{12}\) and \(\theta_{23}\) to be \((34.25^{\circ}-34.42^{\circ})\) and \((45.97^{\circ}-49.98^{\circ})\), respectively. We also use the constraint coming from \(r\) to constrain the allowed parameter space. It is clear that the value of \(\theta_{23}\) obtained in this case lies in the upper octant, i.e., for the TM\({}_{1}\) mixing matrix, this pattern prefers the atmospheric mixing angle to be higher than \(\pi/4\). We show the variation of \(J\) and \(\delta\) as a function of \(\phi\) in Fig. 4(b) and Fig. 4(c), respectively. We obtain the \(3\sigma\) allowed ranges of \(J\) and \(\delta\) to be \([(-3.12\times 10^{-2},-3.43\times 10^{-2})\), \((3.12\times 10^{-2},3.43\times 10^{-2})]\) and \([(68.66,85.48)^{\circ},(274.51,291.33)^{\circ}]\), respectively. Similar to Class \(A_{1}\), it is observed that the Jarlskog rephasing invariant \(J\) and the Dirac CP violating phase \(\delta\) are restricted to two regions.
For the TM\({}_{2}\) mixing matrix, the \(3\sigma\) allowed range of \(\theta\) is found to be \((10.03^{\circ}-10.99^{\circ})\). The corresponding \(3\sigma\) allowed ranges of \(\theta_{12}\) and \(\theta_{23}\) are \((35.68^{\circ}-35.75^{\circ})\) and \((39.00^{\circ}-50.99^{\circ})\), respectively. We show the variation of \(\theta_{23}\) as a function of the unknown parameter \(\phi\) in Fig. 5(a). The \(3\sigma\) allowed ranges of \(J\) and \(\delta\) are found to be \([0,\pm 3.39\times 10^{-2}]\) and \([(0,90)^{\circ},(270,360)^{\circ}]\), respectively. We also show the variation of \(J\) and \(\delta\) as a function of \(\phi\) in Fig. 5(b) and Fig. 5(c), respectively.
Let us now proceed to discuss the phenomenological implications of the class \(A_{2}\) pattern for the neutrino masses, the effective Majorana mass, the effective electron antineutrino mass and the CP violating Majorana phases. We show the variation of the neutrino masses \(m_{1}\), \(m_{2}\) and \(m_{3}\) as a function of \(\phi\) in Fig. 6(a) and Fig. 7(a) for the TM\({}_{1}\) and the TM\({}_{2}\) mixing matrix, respectively. In each case, normal mass ordering is obtained. The variation of \(\sum m_{i}\) as a function of \(\phi\) is shown in Fig. 6(b) and Fig. 7(b) for the TM\({}_{1}\) and TM\({}_{2}\) mixing matrices, respectively. In Fig. 6(c) and Fig. 7(c), we show the correlation of \(M_{ee}\) with \(\sum m_{i}\) for the two mixing matrices, respectively. The correlation of \(m_{\nu}\) with \(\sum m_{i}\) for the TM\({}_{1}\) and TM\({}_{2}\) mixing matrices is shown in Fig. 6(d) and Fig. 7(d), respectively. In Fig. 6(e) and Fig. 7(e), we show the variation of \(\alpha\) with respect to \(\phi\) for the TM\({}_{1}\) and TM\({}_{2}\) mixing matrices, respectively, and in Fig. 6(f) and Fig. 7(f) the corresponding variation of \(\beta\). The phenomenology of this class is quite similar to that of class \(A_{1}\).
The best fit values and the corresponding allowed ranges of the absolute neutrino mass scale, the effective Majorana mass, the effective electron anti-neutrino mass and the Majorana CP violating phases for this class are listed in Table 6. The best fit values and the allowed ranges of each observable obtained for this class are quite similar to the values obtained for class \(A_{1}\).
Figure 5: (a) Variation of \(\theta_{23}\) as a function of \(\phi\), (b) variation of \(J\) as a function of \(\phi\), and (c) variation of \(\delta\) as a function of \(\phi\) for TM\({}_{1}\) mixing matrix. The ’\(*\)’ mark in the figures represents the best fit value.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Mixing & Values & \(\sum m_{i}\,\)(eV) & \(M_{ee}\,\)(eV) & \(m_{\nu}\,\)(eV) & \(\alpha^{\circ}\) & \(\beta^{\circ}\) \\ \cline{2-6} matrix & & & & & & \\ \hline \multirow{2}{*}{\(\text{TM}_{1}\)} & Best fit & \(0.066\) & \(0.006\) & \(0.010\) & \(\pm 9.58\) & \(\pm 30.73\) \\ \cline{2-6} & \(3\sigma\) Range & \([0.060,0.073]\) & \([0.006,0.007]\) & \([0.009,0.012]\) & \([(-10.03,-8.84),(8.84,10.03)]\) & \([(-36.79,-27.82),(27.82,36.79)]\) \\ \hline \multirow{2}{*}{\(\text{TM}_{2}\)} & Best fit & \(0.064\) & \(0.006\) & \(0.009\) & \(\pm 7.84\) & \(\pm 29.21\) \\ \cline{2-6} & \(3\sigma\) Range & \([0.057,0.071]\) & \([0.006,0.009]\) & \([0.008,0.012]\) & \([0,\pm 9.18]\) & \([0,\pm 44.99]\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Best fit and \(3\sigma\) allowed range of \(\sum m_{i}\)(eV), \(M_{ee}\)(eV), \(m_{\nu}\)(eV), \(\alpha^{\circ}\) and \(\beta^{\circ}\) for Class \(A_{2}\).
Figure 6: (a) Variation of \(\theta_{23}\) as a function of \(\phi\), (b) variation of \(J\) as a function of \(\phi\), and (c) variation of \(\delta\) as a function of \(\phi\) for \(\text{TM}_{2}\) mixing matrix. The ’\(*\)’ mark in the figures represents the best fit value.
Figure 7: (a) Variation of \(m_{1}\), \(m_{2}\), \(m_{3}\) as a function of \(\phi\), (b) variation of \(\sum m_{i}\) as a function of \(\phi\), (c) correlation between \(\sum m_{i}\) and \(M_{ee}\), (d) correlation between \(\sum m_{i}\) and \(m_{\nu}\), (e) variation of \(\alpha\) as a function of \(\phi\), and (f) variation of \(\beta\) as a function of \(\phi\) for \(\text{TM}_{1}\) mixing matrix.
### Other Classes
The classes \(B_{3}\), \(B_{4}\), \(B_{5}\), \(B_{6}\), \(S_{1}\), \(S_{2}\), \(F_{1}\), \(F_{2}\) and \(F_{3}\) are not acceptable for either the TM\({}_{1}\) or the TM\({}_{2}\) mixing matrix because the values of \(r\) obtained for these classes are not within the experimental range. Similarly, the classes \(S_{3}\), \(F_{4}\) and \(F_{5}\) are not allowed since they predict \(m_{1}=m_{2}\) for both the TM\({}_{1}\) and TM\({}_{2}\) mixing matrices. Moreover, the class \(D\) is also not allowed, as this class predicts \(m_{3}=m_{2}\) for the TM\({}_{1}\) mixing matrix and \(m_{1}=m_{3}\) for the TM\({}_{2}\) mixing matrix.
## V Degree of Fine Tuning in the Neutrino Mass Matrix
It should be noted that the minimum \(\chi^{2}\) alone is not sufficient to determine the best mass model because it does not provide any information regarding the degree of fine tuning in the mass matrix elements that is needed to reproduce the experimental data. In order to clarify this issue, we present a quantitative analysis of the degree of fine tuning in the elements of the neutrino mass matrix. In the case of the TM\({}_{1}\) and TM\({}_{2}\) mixing matrices, the elements of the neutrino mass matrix depend on the two unknown parameters \(\theta\) and \(\phi\). The dimensionless quantity \(d_{FT}\), which measures the amount of fine tuning in the neutrino mass matrix elements, is defined as the sum of the absolute values of the ratios between each parameter and its error [15; 52]. We define \(d_{FT}\) as
\[d_{FT}=\sum\left|\frac{par_{i}}{err_{i}}\right|, \tag{26}\]
where \(par_{i}\) are the best fit values of the parameters \(\theta\) and \(\phi\). The error \(err_{i}\) for each parameter is calculated from the shift in the best fit value that changes the \(\chi^{2}_{min}\) value by one unit while keeping the other parameters fixed at their best fit values. We also define \(d_{Data}\) as the analogous sum of the absolute values of the ratios between each measured quantity and its error. Using the data from Table 2, we obtain \(d_{Data}\) to be around 100. The \(d_{FT}\) parameter provides a rough estimate of the degree of fine tuning in the mass matrix elements because, if the \(d_{FT}\) value is large, there will be a large difference in the \(\chi^{2}\) for a small change in the corresponding parameter. Hence a large value of \(d_{FT}\) corresponds to a strong fine tuning of the mass matrix elements and vice versa. The \(\chi^{2}_{min}\) value and the corresponding value of the \(d_{FT}\) parameter for Class \(A_{1}\) and Class \(A_{2}\) for the TM\({}_{1}\) and the TM\({}_{2}\) mixing matrices are listed in Table 7.
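As a minimal bookkeeping sketch (the numbers below are placeholders, not the values used in the paper), Eq. (26) amounts to:

```python
def d_FT(best_fit, errors):
    """Eq. (26): sum of |par_i / err_i| over the model parameters."""
    return sum(abs(p / e) for p, e in zip(best_fit, errors))

# placeholder values for (theta, phi) in degrees -- purely illustrative
params = [15.0, 100.0]
errs   = [0.05, 2.0]   # shift changing chi^2_min by one unit, per parameter
print("d_FT =", d_FT(params, errs))
```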
Figure 8: (a) Variation of \(m_{1}\), \(m_{2}\), \(m_{3}\) as a function of \(\phi\), (b) variation of \(\sum m_{i}\) as a function of \(\phi\), (c) correlation between \(\sum m_{i}\) and \(M_{ee}\), (d) correlation between \(\sum m_{i}\) and \(m_{\nu}\), (e) variation of \(\alpha\) as a function of \(\phi\), and (f) variation of \(\beta\) as a function of \(\phi\) for TM\({}_{2}\) mixing matrix.
It is evident from Table 7 that the fine tuning parameter \(d_{FT}\) is quite small in the case of the TM\({}_{2}\) mixing matrix for the class \(A_{1}\) and \(A_{2}\) patterns. However, the \(d_{FT}\) value is quite large in the case of the TM\({}_{1}\) mixing matrix for both classes, although the \(\chi^{2}_{min}\) value obtained for class \(A_{2}\) is quite small. Based on the \(d_{FT}\) value, one can conclude that the degree of fine tuning in the mass matrix elements is quite strong in the case of the TM\({}_{1}\) mixing matrix.
## VI Symmetry realization
We can realize the symmetry of two minor zeros in the neutrino mass matrix through the type-I seesaw mechanism along with an Abelian symmetry. For generating the leptonic mass matrices, we need three right handed neutrinos \(\nu_{Rp}\) (\(p=1,2,3\)), three right handed charged leptons \(l_{Rp}\) and three left handed lepton doublets \(D_{Lp}\). Along with these, we need Higgs doublets for the nonzero elements \((M_{l})_{pq}\) or \((M_{D})_{pq}\), and scalar singlets for the nonzero elements \((M_{R})_{pq}\), where \(q=1,2,3\). The Higgs doublets get vacuum expectation values (vevs) at the electroweak scale and the scalar singlets get vevs at the seesaw scale. We follow the procedure discussed in Refs. [10; 20] to present the symmetry realization of class \(A_{1}\) using the \(Z_{8}\) symmetry group. Under the \(Z_{8}\) symmetry, the leptons of the first family remain invariant, the leptons of the second family change sign, and the leptons of the third family get multiplied by \(\omega\) = exp(\(\frac{i\pi}{4}\)).
The leptonic fields under \(Z_{8}\) transform as
\[\bar{l}_{R1}\to\bar{l}_{R1}\,,\quad\bar{\nu}_{R1}\to\bar{\nu}_{R1}\,,\quad D_{L1}\to D_{L1}\,,\] \[\bar{l}_{R2}\to\omega^{4}\bar{l}_{R2}\,,\quad\bar{\nu}_{R2}\to\omega^{4}\bar{\nu}_{R2}\,,\quad D_{L2}\to\omega^{4}D_{L2}\,,\] \[\bar{l}_{R3}\to\omega\bar{l}_{R3}\,,\quad\bar{\nu}_{R3}\to\omega\bar{\nu}_{R3}\,,\quad D_{L3}\to\omega^{7}D_{L3}\,. \tag{27}\]
The bilinears \(\bar{l}_{Rp}\,D_{Lq}\) and \(\bar{\nu}_{Rp}\,D_{Lq}\) relevant for \((M_{l})_{pq}\) and \((M_{D})_{pq}\) transform as
\[\bar{l}_{Rp}\,D_{Lq}\cong\bar{\nu}_{Rp}\,D_{Lq}\cong\begin{pmatrix}1&\omega^{4 }&\omega^{7}\\ \omega^{4}&1&\omega^{3}\\ \omega&\omega^{5}&1\end{pmatrix} \tag{28}\]
and the bilinears \(\bar{\nu}_{Rp}\bar{\nu}_{Rq}\) relevant for \((M_{R})_{pq}\) transform as
\[\bar{\nu}_{Rp}\bar{\nu}_{Rq}\cong\begin{pmatrix}1&\omega^{4}&\omega\\ \omega^{4}&1&\omega^{5}\\ \omega&\omega^{5}&\omega^{2}\end{pmatrix}. \tag{29}\]
Under these transformations the diagonal Dirac mass matrices are generated automatically for both the charged leptons and the neutrinos. Also, under the \(Z_{8}\) transformation the \((1,1)\) and \((2,2)\) elements of the Majorana mass matrix \(M_{R}\) remain invariant. A nonzero \((1,2)\) element of \(M_{R}\) can be obtained by introducing a real scalar field \(\chi_{12}\), which changes sign under the \(Z_{8}\) transformation, and a nonzero \((1,3)\) element by introducing a complex scalar field \(\chi_{13}\), which gets multiplied by \(\omega^{7}\) under the \(Z_{8}\) transformation. In the absence of any further scalar singlets the other elements of \(M_{R}\) remain zero. The Majorana mass matrix \(M_{R}\) can be written as
\[M_{R}=\begin{pmatrix}1&b&c\\ b&1&0\\ c&0&0\end{pmatrix}\,. \tag{30}\]
This gives the two minor zero conditions corresponding to Class \(A_{1}\) in the neutrino mass matrix. Class \(A_{2}\) can also be realised similarly for a different choice of \(M_{R}\).
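As a quick numerical cross-check (a sketch with randomly chosen complex entries, not a derivation), one can build the seesaw mass matrix from a diagonal Dirac mass matrix and the \(M_{R}\) texture of Eq. (30), and verify which \(2\times 2\) minors of the resulting neutrino mass matrix vanish:

```python
import numpy as np

def minor(M, i, j):
    """Determinant of M with row i and column j removed (0-indexed)."""
    sub = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

# hypothetical numerical inputs: a generic diagonal Dirac mass matrix and the
# M_R texture of Eq. (30) with random complex entries b, c
rng = np.random.default_rng(1)
MD = np.diag(rng.normal(size=3) + 1j * rng.normal(size=3))
b, c = rng.normal(size=2) + 1j * rng.normal(size=2)
MR = np.array([[1, b, c],
               [b, 1, 0],
               [c, 0, 0]])

# type-I seesaw: M_nu ~ M_D M_R^{-1} M_D^T (overall scale irrelevant here)
Mnu = MD @ np.linalg.inv(MR) @ MD.T

minors = {(i + 1, j + 1): abs(minor(Mnu, i, j)) for i in range(3) for j in range(3)}
thresh = 1e-9 * max(minors.values())
print("vanishing minors at positions:", sorted(pos for pos, m in minors.items() if m < thresh))
```

The symmetric positions \((2,3)\) and \((3,2)\) encode the same condition, so the texture indeed enforces two independent vanishing minors.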
## VII Conclusion
We explore the consequences of two minor zeros in the neutrino mass matrix using the trimaximal mixing matrix. There are in total fifteen possible patterns, and out of these only two patterns, namely class \(A_{1}\) and class \(A_{2}\), are found to be phenomenologically acceptable for the TM\({}_{1}\) and TM\({}_{2}\) mixing matrices. We perform a naive \(\chi^{2}\) analysis to find the best fit values of the two unknown parameters \(\theta\) and \(\phi\) of the TM mixing matrix. We include five observables in our \(\chi^{2}\) analysis, namely the three mixing angles \((\theta_{13},\theta_{12},\theta_{23})\) and the two mass squared differences \((\Delta m_{21}^{2},\Delta m_{32}^{2})\). It is found that class \(A_{2}\) with the TM\({}_{1}\) mixing matrix provides the best fit to the experimental results. All the patterns show normal mass ordering. We also give predictions for several unknown parameters such as the absolute neutrino mass scale, the effective Majorana mass, the effective electron antineutrino mass, the three CP violating phases and the Jarlskog invariant measure of CP violation. The effective Majorana mass obtained for each pattern is within the reach of neutrinoless double beta decay experiments. The upper bound on the effective electron anti-neutrino mass obtained for each pattern is beyond the reach of current \(\beta\) decay experiments. We also discuss the degree of fine tuning in the elements of the mass matrix for all the patterns by introducing a new parameter \(d_{FT}\). We observe that for the classes \(A_{1}\) and \(A_{2}\) with the TM\({}_{2}\) mixing matrix, the fine tuning among the elements of the mass matrix is small compared to the TM\({}_{1}\) mixing matrix. Moreover, we also discuss the symmetry realization of class \(A_{1}\) in the framework of the type-I seesaw model using the Abelian symmetry group \(Z_{8}\).
## Appendix A
The elements of neutrino mass matrix using TM\({}_{1}\) mixing matrix can be written as
\[(M)_{ee} = \frac{2}{3}\,m_{1}+\frac{1}{3}\cos^{2}\theta\,m_{2}\,e^{2i\alpha}+\frac{1}{3}\sin^{2}\theta\,m_{3}\,e^{2i\beta},\] \[(M)_{e\mu} = -\frac{1}{3}m_{1}+\left(\frac{1}{3}\cos^{2}\theta-\frac{1}{\sqrt{6}}\sin\theta\cos\theta e^{i\phi}\right)m_{2}e^{2i\alpha}+\left(\frac{1}{3}\sin^{2}\theta+\frac{1}{\sqrt{6}}\sin\theta\cos\theta e^{i\phi}\right)m_{3}e^{2i\beta},\] \[(M)_{e\tau} = -\frac{1}{3}m_{1}+\left(\frac{1}{3}\cos^{2}\theta+\frac{1}{\sqrt{6}}\sin\theta\cos\theta e^{i\phi}\right)m_{2}e^{2i\alpha}+\left(\frac{1}{3}\sin^{2}\theta-\frac{1}{\sqrt{6}}\sin\theta\cos\theta e^{i\phi}\right)m_{3}e^{2i\beta},\] \[(M)_{\mu\mu} = \frac{1}{6}m_{1}+\left(\frac{1}{\sqrt{3}}\cos\theta-\frac{1}{\sqrt{2}}\sin\theta e^{i\phi}\right)^{2}m_{2}e^{2i\alpha}+\left(\frac{1}{\sqrt{3}}\sin\theta+\frac{1}{\sqrt{2}}\cos\theta e^{i\phi}\right)^{2}m_{3}e^{2i\beta},\] \[(M)_{\mu\tau} = \frac{1}{6}m_{1}+\left(\frac{1}{3}\cos^{2}\theta-\frac{1}{2}\sin^{2}\theta e^{2i\phi}\right)m_{2}e^{2i\alpha}+\left(\frac{1}{3}\sin^{2}\theta-\frac{1}{2}\cos^{2}\theta e^{2i\phi}\right)m_{3}e^{2i\beta},\] \[(M)_{\tau\tau} = \frac{1}{6}m_{1}+\left(\frac{1}{\sqrt{3}}\cos\theta+\frac{1}{\sqrt{2}}\sin\theta e^{i\phi}\right)^{2}m_{2}e^{2i\alpha}+\left(\frac{1}{\sqrt{3}}\sin\theta-\frac{1}{\sqrt{2}}\cos\theta e^{i\phi}\right)^{2}m_{3}e^{2i\beta}. \tag{10}\]
Using TM\({}_{2}\) mixing matrix, we can write the elements of neutrino mass matrix as
\[(M)_{ee} = (\frac{2}{3}\cos^{2}\theta)\,m_{1}+\frac{1}{3}\,m_{2}\,e^{2i\alpha }+(\frac{2}{3}\sin^{2}\theta)\,m_{3}\,e^{2\,i\,\beta},\] \[(M)_{e\mu} = (-\frac{1}{3}\cos^{2}\theta+\frac{1}{\sqrt{3}}\sin\theta\cos \theta e^{-i\phi})m_{1}+\frac{1}{3}m_{2}e^{2i\alpha}+(-\frac{1}{3}\sin^{2} \theta-\frac{1}{\sqrt{3}}\sin\theta\cos\theta e^{-i\phi})m_{3}e^{2i\beta},\] \[(M)_{e\tau} = (-\frac{1}{3}\cos^{2}\theta-\frac{1}{\sqrt{3}}\sin\theta\cos \theta e^{-i\phi})m_{1}+\frac{1}{3}m_{2}e^{2i\alpha}+(-\frac{1}{3}\sin^{2} \theta+\frac{1}{\sqrt{3}}\sin\theta\cos\theta e^{-i\phi})m_{3}e^{2i\beta},\] \[(M)_{\mu\mu} = (-\frac{1}{\sqrt{6}}\cos\theta+\frac{1}{\sqrt{2}}\sin\theta e^{-i \phi})^{2}m_{1}+\frac{1}{3}m_{2}e^{2i\alpha}+(\frac{1}{\sqrt{6}}\sin\theta+ \frac{1}{\sqrt{2}}\cos\theta e^{-i\phi})^{2}m_{3}e^{2i\beta},\] \[(M)_{\mu\tau} = (\frac{1}{6}\cos^{2}\theta-\frac{1}{2}\sin^{2}\theta e^{-2i\phi})m _{1}+\frac{1}{3}m_{2}e^{2i\alpha}+(\frac{1}{6}\sin^{2}\theta-\frac{1}{2}\cos^{ 2}\theta e^{-2i\phi})m_{3}e^{2i\beta},\] \[(M)_{\tau\tau} = (\frac{1}{\sqrt{6}}\cos\theta+\frac{1}{\sqrt{2}}\sin\theta e^{-i \phi})^{2}m_{1}+\frac{1}{3}m_{2}e^{2i\alpha}+(-\frac{1}{\sqrt{6}}\sin\theta+ \frac{1}{\sqrt{2}}\cos\theta e^{-i\phi})^{2}m_{3}e^{2i\beta}. \tag{11}\]
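For completeness, the TM\({}_{1}\) elements above (including the \(m_{2}\) factor in \((M)_{\tau\tau}\)) can be transcribed directly into code. The sketch below is only a transcription aid, with all angles in radians; the TM\({}_{2}\) elements can be transcribed analogously:

```python
import numpy as np

def M_TM1(m1, m2, m3, theta, phi, alpha, beta):
    """Neutrino mass matrix elements for the TM1 mixing matrix, Eq. (A1)."""
    c, s = np.cos(theta), np.sin(theta)
    ea, eb, ep = np.exp(2j * alpha), np.exp(2j * beta), np.exp(1j * phi)
    Mee = 2/3 * m1 + c**2/3 * m2 * ea + s**2/3 * m3 * eb
    Mem = -m1/3 + (c**2/3 - s*c*ep/np.sqrt(6)) * m2 * ea + (s**2/3 + s*c*ep/np.sqrt(6)) * m3 * eb
    Met = -m1/3 + (c**2/3 + s*c*ep/np.sqrt(6)) * m2 * ea + (s**2/3 - s*c*ep/np.sqrt(6)) * m3 * eb
    Mmm = m1/6 + (c/np.sqrt(3) - s*ep/np.sqrt(2))**2 * m2 * ea + (s/np.sqrt(3) + c*ep/np.sqrt(2))**2 * m3 * eb
    Mmt = m1/6 + (c**2/3 - s**2*ep**2/2) * m2 * ea + (s**2/3 - c**2*ep**2/2) * m3 * eb
    Mtt = m1/6 + (c/np.sqrt(3) + s*ep/np.sqrt(2))**2 * m2 * ea + (s/np.sqrt(3) - c*ep/np.sqrt(2))**2 * m3 * eb
    return Mee, Mem, Met, Mmm, Mmt, Mtt
```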
|
2309.07845 | Conformal little Higgs models | Little Higgs models address the hierarchy problem by identifying the SM Higgs
doublet as pseudo-Nambu--Goldstone bosons (pNGB) arising from global symmetries
with collective breakings. These models are designed to address the little
hierarchy problem up to a scale of $\Lambda\!\sim\! {\cal O}(10)$ TeV.
Consequently, these models necessitate an ultraviolet (UV) completion above
this scale. On the other hand, conformal extensions of the Standard Model are
intriguing because scales emerge as a consequence of dimensional transmutation.
In this study, we present a unified framework in which the electroweak
hierarchy problem is tackled through a conformal symmetry collectively broken
around the TeV scale, offering an appealing UV completion for little Higgs
models. Notably, this framework automatically ensures the presence of the
required UV fixed points, eliminating the need for careful adjustments to the
particle content of the theory. Moreover, this framework naturally addresses
the flavor puzzles associated with composite or little Higgs models.
Furthermore, we suggest that in this framework all known little Higgs models
can be UV-completed through conformal dynamics above the scale $\Lambda$ up to
arbitrary high scales. | Aqeel Ahmed, Manfred Lindner, Philipp Saake | 2023-09-14T16:40:09Z | http://arxiv.org/abs/2309.07845v2 | # Conformal Little Higgs
###### Abstract
Little Higgs models address the hierarchy problem by identifying the SM Higgs doublet as pseudo-Nambu-Goldstone bosons (pNGB) arising from global symmetries with collective breakings. These models are designed to address the little hierarchy problem up to a scale of \(\Lambda\sim\mathcal{O}(10)\) TeV. Consequently, these models necessitate an ultraviolet (UV) completion above this scale. On the other hand, conformal extensions of the Standard Model are intriguing because scales emerge as a consequence of dimensional transmutation. In this study, we present a unified framework in which the electroweak hierarchy problem is tackled through a conformal symmetry collectively broken around the TeV scale, offering an appealing UV completion for Little Higgs models. Notably, this framework automatically ensures the presence of the required UV fixed points, eliminating the need for careful adjustments to the particle content of the theory. Moreover, this framework naturally addresses the flavor puzzles associated with composite or Little Higgs models. Furthermore, we suggest that in this framework all known Little Higgs models can be UV-completed through conformal dynamics above the scale \(\Lambda\) up to arbitrary high scales.
Beyond the Standard Model, Hierarchy Problem, Little Higgs, Conformal Field Theory
## I Introduction
The Standard Model (SM) of particle physics works extremely well despite the fact that there are a number of theoretical and experimental reasons for embedding into more fundamental underlying theories. The so-called hierarchy problem is one of the main reasons to go beyond the SM. Composite Higgs models provide an attractive solution where the SM Higgs boson is a composite state of more elementary constituents with some strong confining dynamics [1; 2; 3]. However such models naturally predict resonant states close to the electroweak scale and null results at the LHC for such states lead to the so-called little hierarchy problem. To alleviate this little hierarchy problem the Little Higgs models were invented [4; 5; 6; 7; 8; 9; 10; 11]. The idea is that scalars are eventually not fundamental, but Goldstone bosons or pseudo-Nambu-Goldstone bosons (pNGBs) related to suitable larger symmetries and their collective breaking patterns. This implies that Higgs couplings and Yukawa couplings would not be fundamental, but effective such that only gauge couplings would survive in the UV limit.
Another interesting route to address the hierarchy problem is conformal symmetry. Theoretically it appears interesting since all scales would emerge dynamically instead of putting numbers into a Lagrangian. This direction is also tempting from the fact that the SM 1 is a one-scale theory where all masses appear from dimensionless couplings times one single scale, the electroweak vacuum expectation value (VEV), \(v\simeq 246\text{ GeV}\). Moreover, the independent parameters of the SM are such that the effective potential of the Higgs sector becomes miraculously flat at very high scales, see e.g. [12]. This may point to a vacuum stability problem [12; 13], but being conservative about all theoretical and experimental errors it may also imply that a flat potential has a special meaning. Combined with a vanishing mass parameter this could indicate that conformal or shift symmetry plays a role at high scales.
Footnote 1: Without neutrino masses or with Dirac neutrino masses only.
Implementing the idea of conformal symmetry one must avoid conformal anomalies since otherwise all benefits of the symmetry would be lost by quantum effects. This implies that all couplings of the theory must eventually have a UV fixed point, i.e. vanishing beta functions which are related to the trace of the energy-momentum tensor. Obtaining UV fixed points for all Higgs, Yukawa, and gauge couplings of a theory is a non-trivial requirement and the problem may be solved by carefully selecting the gauge groups and the representations [14]. Even if one is successful the question remains if there is a principle behind the required very specific choices.
In this paper, we combine the ideas of composite Higgs models with conformal symmetry and implement them as a UV completion for the Little Higgs models. Consequently, all scalar and Yukawa couplings are not fundamental such that only gauge couplings survive in the UV. This automatically guarantees that the theory has the required UV fixed points if it is based on non-Abelian gauge groups. In particular, we consider a UV theory with fundamental fermions, called _technifermions_, in a strongly coupled non-Abelian gauge theory, and we require that the theory is in the conformal phase at high energies. The conformal symmetry is softly broken by relevant deformations of technifermion mass terms at the conformal breaking scale \(\Lambda\). Since the theory is strongly coupled at energies close to the conformal breaking scale \(\Lambda\) it confines and breaks the residual chiral symmetry spontaneously such that it gives pNGBs related to the symmetry breaking, see also [15; 16; 17]. In this framework, the Higgs boson and other pNGBs emerge as compos
ite states of the fundamental fermions. Furthermore, the symmetry structure in the model is such that the low-energy symmetry is broken collectively a la Little Higgs models [4; 5; 6; 7; 8; 9; 10; 11]. Hence, this framework provides an interesting UV completion of the Little Higgs models through conformal dynamics such that these models are consistent up to arbitrary high scales.
As an explicit example, we consider the UV completion of the 'bestest' Little Higgs model with strongly coupled conformal dynamics, based on global symmetry breaking in the coset \(SU(4)^{2}/SU(4)\). We utilize four massless and four massive bifundamental technifermions, denoted as \(\psi\) and \(\chi\) respectively, all charged under the confining gauge group \(SU(2)\). This choice places the gauge theory within the conformal window [18; 19] above the scale \(\Lambda\). At this scale, the conformal symmetry is broken due to the relevant deformation caused by the massive technifermion bilinear operator. Below this scale, the gauge theory undergoes confinement and spontaneously breaks the flavor symmetry of the four massless technifermions according to the coset \(SU(4)^{2}/SU(4)\). Notably, the SM Higgs doublet is a pNGB that incorporates electroweak custodial symmetry. We highlight that in conformal Little Higgs models, the flavor scale, at which SM flavor physics, particularly the top-quark sector, becomes strongly coupled, can be naturally decoupled from the Little Higgs breaking scale. For a related discussion regarding technicolor models, see [15]. The model can in principle be distinguished from other UV completions of the Little Higgs models by probes at the scale \(\Lambda\). We comment on the possible signatures of conformal Little Higgs models at future colliders probing energies of \(\mathcal{O}(10)\) TeV. Moreover, we propose that any known Little Higgs model can be UV-completed by incorporating strongly coupled conformal dynamics, achieved by extending the fermionic content of the original Little Higgs model.
The paper is organized as follows: In the next Sec. II we discuss the general framework of conformal Little Higgs models based on flavor symmetry breaking \(SU(N)^{2}/SU(N)\) of \(N\) flavors charged under a confining gauge group \(SU(N_{c})\). We consider the case with \(N=4\) and \(N_{c}=2\) in Sec. III which is the conformal UV completion of 'bestest' Little Higgs model. In Sec. IV we comment on the UV completion of any known Little Higgs model with strongly coupled conformal dynamics and conclude in Sec. V.
## II Conformal Little Higgs models
In this section, we embed Little Higgs models based on symmetry breaking coset \(\mathcal{G}/\mathcal{H}\) into a strongly coupled conformal field theory (CFT), thus providing an attractive UV completion framework for Little Higgs models valid up to arbitrary high scales. For concreteness, we focus on the case where \(\mathcal{G}\) is a chiral global symmetry, \(SU(N)_{L}\times SU(N)_{R}\), with \(N\) Dirac fermion flavors spontaneously broken to its diagonal subgroup, \(\mathcal{H}=SU(N)_{V}\), due to the strong confining dynamics of the gauge symmetry \(SU(N_{c})\).
The key feature of our framework is that above the spontaneous symmetry-breaking scale, the theory exhibits conformal dynamics. To achieve this conformal dynamics, we add \(N_{m}\) Dirac fermion flavors charged under an \(SU(N_{c})\) gauge symmetry, such that the theory has an enhanced global symmetry, \(SU(N_{f})_{L}\times SU(N_{f})_{R}\), with \(N_{f}\equiv N+N_{m}\) flavors. For non-supersymmetric gauge theories, it is argued that the conformal phase is achieved in the conformal window with \(N_{c}\) colors and \(N_{f}\) flavors for [18; 19]
\[\frac{7}{2}\lesssim\frac{N_{f}}{N_{c}}\lesssim\frac{11}{2}. \tag{1}\]
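Taking the boundaries of Eq. (1) at face value, one can quickly tabulate which small \((N_{c},N_{f})\) combinations fall inside this window; in particular, the choice used later in this paper, \(N_{c}=2\) with \(N_{f}=N+N_{m}=8\), satisfies it. A minimal sketch:

```python
def in_conformal_window(Nf, Nc):
    """Rough criterion of Eq. (1): 7/2 <= Nf/Nc <= 11/2."""
    return 3.5 <= Nf / Nc <= 5.5

for Nc in (2, 3, 4):
    ok = [Nf for Nf in range(2, 23) if in_conformal_window(Nf, Nc)]
    print(f"Nc = {Nc}: Nf in {ok}")
```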
Since our interest lies in chiral symmetry breaking due to confining gauge dynamics for \(N\) flavors, we consider these as massless Dirac fermions and denote them as \(\psi_{i}\) (\(i=1,\cdots,N\)). Meanwhile, \(N_{m}\) flavors are assumed to be massive Dirac fermions \(\chi_{j}\) (\(j=1,\cdots,N_{m}\)) with mass \(M\). The technifermions \(\psi_{i}\) and \(\chi_{j}\) are in the fundamental representation of the gauge group \(SU(N_{c})\), whereas we also add \(\hat{\psi}_{i}\) and \(\hat{\chi}_{j}\) fermions in the conjugate (antifundamental) representation. We assume the Lagrangian for the UV theory contains
\[\mathcal{L}\supset\mathcal{L}_{\rm CFT}+\mathcal{L}_{\rm def}, \tag{2}\]
where \(\mathcal{L}_{\rm CFT}\) is the Lagrangian for the strongly coupled CFT and \(\mathcal{L}_{\rm def}\) is the Lagrangian for a deformation to the CFT, which we consider of the following form,
\[\mathcal{L}_{\rm def}=-M\hat{\chi}\chi. \tag{3}\]
Above, the mass parameter \(M\) is a relevant deformation of the CFT for scaling dimension \(d<4\) of the \(\hat{\chi}\chi\) bilinear operator. Therefore, the above deformation breaks the conformal symmetry softly at the scale
\[\Lambda\equiv M^{1/(4-d)}. \tag{4}\]
To summarize, in this framework the strongly coupled CFT has an enhanced chiral symmetry \(SU(N_{f})_{L}\times SU(N_{f})_{R}\). However, the theory exits the conformal fixed point below the conformal breaking scale \(\Lambda\) due to deformation operator (3) and enters the confining phase with \(N\) flavors and \(N_{c}\) colors 2. This confinement forms a condensate
Footnote 2: In general the conformal breaking scale \(\Lambda\) can be different from the confining (condensation) scale of the QCD-like theory. However we assume that the CFT is strongly coupled, therefore the two scales can be identified as the same scale \(\Lambda\).
\[\langle\hat{\psi}_{i}^{\alpha}\psi_{j\alpha}\rangle\sim\Lambda^{d}\,\delta_{ij} \tag{5}\]
which spontaneously breaks the flavor symmetry \(SU(N)_{L}\times SU(N)_{R}/SU(N)_{V}\), where \(\alpha\) is the \(SU(N_{c})\)
color index and \(i,j\) label the fermions, i.e. \(i,j=1,\cdots,N\). This symmetry breaking gives \((N^{2}-1)\) Goldstone bosons, \(\hat{\psi}\psi\), corresponding to broken generators. Note the fermion bilinear scalar operator \(\hat{\psi}\psi\) has the same scaling dimension \(d\) as that of \(\hat{\chi}\chi\). This is due to the fact that above the conformal symmetry breaking, they belong to the same symmetry group, \(SU(N_{f})_{L}\times SU(N_{f})_{R}\), and conformal dynamics. The scaling dimension \(d\) of these scalar operators \(\hat{\psi}\psi\) (and \(\hat{\chi}\chi\)) is bounded from below to be greater than one, i.e. \(d\geq 1\), due to unitarity. The limiting case \(d=1\) implies free (elementary) scalar fields. Whereas, \(d=3\) for technicolor models and \(d\gtrsim 2\) can be achieved in walking technicolor models. As we discuss below for a conformal Little Higgs model we would require \(1<d<2\).
Below the conformal breaking scale \(\Lambda\), the low-energy effective theory exhibits the Little Higgs dynamics for the coset \(\mathcal{G}/\mathcal{H}\) through the collective breaking patterns. This ensures that in the low-energy theory, the SM Higgs field and other pNGB masses are protected from quadratic divergences. Therefore, low-energy observables and predictions remain similar to those of any conventional Little Higgs model. However, there are crucial differences between our conformal Little Higgs model and other Little Higgs UV completions that appear at the conformal breaking scale \(\Lambda\). For instance, a general problem with strongly coupled UV completions of Little Higgs or composite Higgs models is that flavor dynamics does not decouple from the electroweak breaking dynamics. In the strongly coupled conformal Little Higgs models, the flavor problem can be decoupled from the electroweak scale \(\Lambda_{\rm EW}\equiv 4\pi v\) up to flavor scale \(\Lambda_{t}\)[15],
\[\Lambda_{t}\equiv\Lambda\bigg{(}\frac{\Lambda_{\rm EW}}{m_{t}}\bigg{)}^{1/ \epsilon},\quad\text{for}\quad\epsilon\equiv d-1\sim 1/\text{few}, \tag{6}\]
where \(m_{t}\) is the top-quark mass and \(v\) is the electroweak VEV. Furthermore, at energies at or above the conformal breaking scale \(\Lambda\), the observable effects of strong conformal dynamics begin to appear in the form of broad resonances corresponding to \(\rho_{\psi}\)-mesons (corresponding to excited spin-1 \(\hat{\psi}\psi\) states) and other composite states of the conformal dynamics (e.g., \(\pi_{\chi}\equiv\hat{\chi}\chi\) states). In Fig. 1 a schematic depiction with different energy scales is presented for our strongly coupled conformal Little Higgs model.
## III Bestest Conformal Little Higgs
In this section, we consider the 'bestest' Little Higgs model with a global symmetry breaking coset \(SU(4)_{L}\times SU(4)_{R}/SU(4)_{V}\) with four light flavors of fundamental fermions. After flavor symmetry breaking it results in 15 Goldstone bosons. As mentioned above, the main aspect of this work is to consider the UV completion of Little Higgs models without introducing any elementary/fundamental scalars in the model. For this purpose, we employ a confining gauge symmetry \(SU(N_{c})\), where for concreteness we take \(N_{c}=2\), together with new fermions \(\psi\) and \(\chi\), labeled as _technifermions_, charged under this gauge group. In Table 1 we summarize the quantum numbers for the new fermions under the confining gauge group as well as under the SM gauge group \(SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}\), where \(\tilde{\psi}\) and \(\psi^{\prime}\) are labeled as fundamentals of the electroweak gauge group \(SU(2)_{L}\subset SU(4)_{L}\) and the custodial group \(SU(2)_{L^{\prime}}\subset SU(4)_{L}\), respectively. The \(U(1)_{Y}\) charges are associated with the diagonal generators of the custodial \(SU(2)\). Similarly, the conjugate fields \(\hat{\psi}\) are charged under the subgroups of \(SU(4)_{R}\).
We assume that the number of flavors \(N_{m}\) of \(\chi\) is such that the theory is near a strongly coupled conformal fixed point. In the following, we consider \(N_{m}=4\), which is believed to be enough flavors to make the \(SU(2)_{\rm CTC}\) gauge
Figure 1: A schematic depiction of energy scales in the conformal Little Higgs model. In the UV we expect the theory to enter a conformal fixed point at a scale \(M_{\rm UV}\) a la Banks-Zaks [18]. At an IR scale \(\Lambda\), the conformal symmetry is softly broken by a relevant deformation due to the explicit mass parameter \(M=\Lambda^{4-d}\) of the technifermion bilinear operator \(\hat{\chi}\chi\). Below the conformal breaking scale \(\Lambda\) the residual chiral symmetry with \(N\) flavors is spontaneously broken by the confinement, resulting in \((N^{2}-1)\) pNGB states \(\pi_{\psi}=\hat{\psi}\psi\), including the SM Higgs boson \(h\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & \(SU(N_{c})\) & \(SU(3)_{C}\) & \(SU(2)_{L}\) & \(U(1)_{Y}\) \\ \hline \hline \(\hat{\psi}\equiv\begin{pmatrix}\psi_{1}\\ \psi_{2}\end{pmatrix}\) & \(\Box\) & \(\bf 1\) & \(\Box\) & \(0\) \\ \(\psi^{\prime}\equiv\begin{pmatrix}\psi_{3}\\ \psi_{4}\end{pmatrix}\) & \(\Box\) & \(\bf 1\) & \(\bf 1\) & \(-\frac{1}{2}\) \\ \(\chi\times N_{m}\) & \(\Box\) & \(\bf 1\) & \(\bf 1\) & \(+\frac{1}{2}\) \\ \hline \end{tabular}
\end{table}
Table 1: Quantum numbers for technifermions under the \(SU(N_{c})\) confining gauge group and the SM gauge group \(SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}\).
theory strongly coupled and nearly conformal. The Lagrangian for the confining sector is
\[\mathcal{L}\supset-M_{\psi}\hat{\psi}\psi-M\hat{\chi}\chi, \tag{7}\]
where \(\psi=(\tilde{\psi},\psi^{\prime})\). Note that the coefficients of the "mass" terms, i.e. the fermion bilinears, denoted by \(M_{\psi}\) and \(M\), have a scaling dimension different from the canonical mass dimension 1. For a non-trivial scaling dimension \(d\) of the fermion bilinears, which are scalar operators, the couplings \(M_{\psi}\) and \(M\) have scaling dimension \(4-d\). Therefore, for scaling dimension \(1<d<4\) these mass terms are relevant deformations that take the theory out of the conformal fixed point. In other words, the conformal symmetry is broken at the scale \(\Lambda\) where these deformations become strong. Since we are interested in \(SU(4)_{L}\times SU(4)_{R}\) chiral breaking we assume \(M_{\psi}=0\), such that the conformal breaking is dominated by the \(M\) deformation of the \(\hat{\chi}\chi\) operator, and the \(\psi\)'s are the massless flavors which break the chiral symmetry only through the condensate \(\langle\hat{\psi}\psi\rangle\).
Since the technifermions \(\chi\) are heavy, with masses of the order of the conformal breaking scale, one can integrate them out at the scale \(\Lambda\). In the effective theory at/below the scale \(\Lambda\) we are left with the four flavors of fermions \(\psi\) and their conjugate fermions \(\hat{\psi}\), which have an \(SU(4)_{L}\times SU(4)_{R}\) chiral symmetry. The mass terms in the above Lagrangian softly break the \(SU(4)_{L}\times SU(4)_{R}\) global symmetry. However, we assume that the dominant global symmetry breaking is due to the condensate of \(\psi\) fermions at the scale \(\Lambda\), which coincides with the conformal symmetry breaking. The technifermions \(\psi\) and \(\hat{\psi}\) transform as \(\mathbf{4}\) and \(\mathbf{\bar{4}}\) of \(SU(4)_{L}\) and \(SU(4)_{R}\), respectively; therefore, the condensate transforms as
\[\langle\hat{\psi}\psi\rangle=(\mathbf{4},\mathbf{\bar{4}})_{SU(4)_{L}\times SU (4)_{R}}\,. \tag{8}\]
The above condensate breaks flavor symmetry \(SU(4)_{L}\times SU(4)_{R}\to SU(4)_{V}\) resulting in 15 Goldstone bosons which transform as the adjoint of the unbroken \(SU(4)_{V}\). Assuming massless technifermions \(\psi\) we can align the vacuum along the direction that preserves \(SU(4)_{V}\) and does not break the SM EW symmetry,
\[\langle\hat{\psi}\psi\rangle=\Lambda^{d}\begin{pmatrix}\mathds{1}&0\\ 0&\mathds{1}\end{pmatrix}, \tag{9}\]
where \(\Lambda\) is the scale of symmetry breaking and \(d\) is the scaling dimension of the condensate 3.
Footnote 3: Note that by starting with the \(SU(4)^{2}\) flavor symmetry, it becomes possible to achieve a conformal UV completion for the ‘minimal moose’ Little Higgs model with custodial symmetry, based on the coset \(SO(5)^{2}/SO(5)\)[9], which is isomorphic to \(Sp(4)^{2}/Sp(4)\). One can explicitly break the \(SU(4)^{2}\) flavor symmetry down to \(Sp(4)^{2}\), and subsequent chiral symmetry breaking results in the coset \(Sp(4)^{2}/Sp(4)\), as demonstrated in [20].
### Higgs sector
The 15 Goldstone bosons transform under the custodial symmetry \(SO(4)\simeq SU(2)_{L}\times SU(2)_{R}\subset SU(4)_{V}\) as
\[\mathbf{15}_{SU(4)_{V}}=(2,2)+(2,2)+(3,1)+(1,3)+(1,1), \tag{10}\]
where we have two doublets, two triplets, and a singlet under the custodial symmetry. We can write the Goldstone matrix as,
\[U=\exp\Bigl{[}i\Pi/\sqrt{2}f\Bigr{]}; \tag{11}\]
where
\[\Pi=\begin{pmatrix}\sigma^{a}\Delta_{1}^{a}+\eta/\sqrt{2}&-i\Phi_{H}\\ i\Phi_{H}^{\dagger}&\sigma^{a}\Delta_{2}^{a}-\eta/\sqrt{2}\end{pmatrix}. \tag{12}\]
Above \(\Phi_{H}\) is a bi-doublet,
\[\Phi_{H}\equiv\left(\widetilde{H}_{1}+i\widetilde{H}_{2},\quad H_{1}+iH_{2} \right), \tag{13}\]
with \(\widetilde{H}_{i}\equiv i\sigma_{2}H_{i}^{*}\) where \(H_{i}\) are Higgs doublets under \(SU(2)_{L}\) and the triplets \(\sigma_{i}\Delta_{1,2}^{i}\) under \(SU(2)_{L}\) are,
\[\sigma^{a}\Delta^{a}=\left(\begin{array}{cc}\Delta^{0}&\sqrt{2}\Delta^{+}\\ \sqrt{2}\Delta^{-}&-\Delta^{0}\end{array}\right). \tag{14}\]
Note that the matrix \(U\) transforms linearly under the \(SU(4)_{V}\) group as, \(U\to\Omega_{V}\cdot U\cdot\Omega_{V}^{\dagger}\).
The EW symmetry breaking vacuum is where both Higgs doublets acquire non-zero VEVs, i.e. \(\langle H_{1,2}\rangle\equiv v_{1,2}/\sqrt{2}\), such that the most general custodial symmetry preserving vacuum can be written as
\[\langle\Phi_{H}\rangle=v\,e^{i\beta}\mathds{1}/\sqrt{2}, \tag{15}\]
where \(v^{2}\equiv v_{1}^{2}+v_{2}^{2}\) and \(\tan\beta\equiv v_{2}/v_{1}\). The misalignment of the EW vacuum \(v\) with respect to the global symmetry breaking vacuum \(f\) can be parametrized by a transformation,
\[\Sigma=\Omega_{0}\cdot U\cdot\Omega_{0}, \tag{16}\]
such that the new vacuum is,
\[\langle\Sigma\rangle\equiv\Sigma_{0}=\Omega_{0}\cdot\Omega_{0}\equiv \begin{pmatrix}\cos\theta\,\mathds{1}&e^{i\beta}\sin\theta\,\mathds{1}\\ -e^{-i\beta}\sin\theta\,\mathds{1}&\cos\theta\,\mathds{1}\end{pmatrix}, \tag{17}\]
where the vacuum rotation matrix is
\[\Omega_{0}=\begin{pmatrix}\cos\frac{\theta}{2}\,\mathds{1}&e^{i\beta}\sin \frac{\theta}{2}\,\mathds{1}\\ -e^{-i\beta}\sin\frac{\theta}{2}\,\mathds{1}&\cos\frac{\theta}{2}\,\mathds{1} \end{pmatrix}, \tag{18}\]
with the misalignment angle \(\sin\theta\equiv v/f\).
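As a simple consistency check of Eqs. (17) and (18) (a numerical sketch, not part of the paper), one can verify that squaring \(\Omega_{0}\) reproduces \(\Sigma_{0}\):

```python
import numpy as np

def Omega0(theta, beta):
    """Vacuum rotation of Eq. (18) as an explicit 4x4 matrix."""
    I2 = np.eye(2)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.block([[c * I2, np.exp(1j * beta) * s * I2],
                     [-np.exp(-1j * beta) * s * I2, c * I2]])

theta, beta = 0.3, 0.7   # illustrative values
Sigma0 = Omega0(theta, beta) @ Omega0(theta, beta)

# target of Eq. (17): cos(theta) on the diagonal blocks, e^{+/- i beta} sin(theta) off-diagonal
target = np.block([[np.cos(theta) * np.eye(2), np.exp(1j * beta) * np.sin(theta) * np.eye(2)],
                   [-np.exp(-1j * beta) * np.sin(theta) * np.eye(2), np.cos(theta) * np.eye(2)]])
print(np.allclose(Sigma0, target))   # True
```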
### Gauge sector
To employ the collective breaking in the gauge sector, we gauge subgroups \(SU(2)_{L}\subset SO(4)_{L}\subset SU(4)_{L}\) and \(SU(2)_{R}\subset SO(4)_{R}\subset SU(4)_{R}\). The symmetry structure of the theory can be depicted as a Moose diagram Fig. 2. The gauge boson masses can be obtained as,
\[\mathcal{L} \supset\frac{1}{2}f^{2}\Big{[}(D_{\mu}\Sigma)^{\dagger}(D^{\mu} \Sigma)\Big{]}, \tag{19}\] \[\supset\frac{1}{4}g_{\rm EW}^{2}f^{2}\sin^{2}\theta W_{\mu}W^{\mu} +\frac{1}{8\cos^{2}\theta_{W}}g_{\rm EW}^{2}f^{2}\sin^{2}\theta Z_{\mu}Z^{\mu}\] \[\quad+\frac{1}{4}(g_{L}^{2}+g_{R}^{2})f^{2}\cos^{2}\theta W_{\mu} ^{\prime}W^{\prime\mu}\] \[\quad+\frac{1}{8}(g_{L}^{2}+g_{R}^{2})f^{2}\cos^{2}\theta Z_{\mu} ^{\prime}Z^{\prime\mu}+\cdots. \tag{20}\]
Above, the ellipses denote the Goldstone bosons' interactions with the gauge bosons. The covariant derivative is defined as,
\[D_{\mu}\Sigma\equiv\partial_{\mu}\Sigma+ig_{L}\mathcal{A}_{L}^{a}T^{a}\Sigma- ig_{R}\mathcal{A}_{R}^{a}T^{a}\Sigma, \tag{21}\]
where \(\mathcal{A}_{L,R}\) are the gauge bosons corresponding to the gauged \(SU(2)_{L,R}\) subgroups and \(g_{\rm EW}^{2}=g_{L}^{2}g_{R}^{2}/(g_{L}^{2}+g_{R}^{2})\). The masses of the heavy gauge bosons \(W^{\prime}\) and \(Z^{\prime}\) are,
\[m_{W^{\prime}}^{2}=m_{Z^{\prime}}^{2}=\frac{1}{4}(g_{L}^{2}+g_{R}^{2})f^{2} \cos^{2}\theta, \tag{22}\]
whereas the SM gauge bosons are,
\[m_{W}^{2}=m_{Z}^{2}\cos^{2}\theta_{W}=\frac{1}{4}g_{\rm EW}^{2}v^{2}+\mathcal{ O}(v^{4}/f^{2}), \tag{23}\]
where the electroweak VEV is \(v\equiv f\sin\theta\) and \(\theta_{W}\) is the Weinberg angle.
One of the key features of the Little Higgs models is that the Higgs boson and other uneaten pNGB masses are not quadratically sensitive to the symmetry-breaking scale \(\Lambda\). At one loop level, the radiative corrections to the Higgs and other uneaten pNGB masses due to light and heavy gauge bosons are,
\[\mathcal{L}\!\supset\!\frac{9g_{\rm EW}^{2}m_{W^{\prime}}^{2}}{128\pi^{2}}\ln \!\left(\!\frac{\Lambda^{2}}{m_{W^{\prime}}^{2}}\!\right)\!\left[|H_{1}|^{2} +|H_{2}|^{2}+\frac{8}{3}|\Delta_{1}|^{2}\right]\!, \tag{24}\]
which is only sensitive to the heavy gauge boson mass which could be much smaller than the scale \(\Lambda\).
Note that in a strongly coupled theory, the dynamics that result in the nonlinear sigma model for the gauge sector (19) with weak gauge bosons of the theory would become strong at the scale \(m_{\rho}\). In our framework, this scale is given by
\[m_{\rho}=\Lambda\!\left(\frac{4\pi v}{m_{W}}\right)^{1/(d-1)}, \tag{25}\]
which can be naturally decoupled from the strong dynamics responsible for the electroweak symmetry breaking for \(\epsilon\equiv d-1\sim 1/\text{few}\). For \(\epsilon=1/3\), the scale \(m_{\rho}\) at which the gauge-boson dynamics becomes strong can be pushed up to \(\mathcal{O}(10^{4})\) TeV for \(\Lambda\sim\mathcal{O}(1)\) TeV.
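A quick numerical estimate of Eq. (25) (a sketch; the inputs \(v\simeq 246\) GeV, \(m_{W}\simeq 80.4\) GeV and \(\Lambda=1\) TeV are illustrative) reproduces the quoted order of magnitude:

```python
from math import pi

v, mW, Lam = 0.246, 0.0804, 1.0   # TeV; illustrative inputs
for eps in (1/3, 1/2, 1.0):       # eps = d - 1
    m_rho = Lam * (4 * pi * v / mW) ** (1 / eps)   # Eq. (25)
    print(f"eps = {eps:.2f}:  m_rho ~ {m_rho:.1e} TeV")
```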
### Fermion sector
In this framework, the SM fermions are elementary and their mass is obtained through Yukawa-like interactions with the composite Higgs as,
\[\mathcal{L}\supset\frac{\lambda_{t}}{\Lambda_{t}^{d-1}}Q_{L}t_{R}\mathcal{O}_{ H}, \tag{26}\]
where \(d\) is the scaling dimension of the Higgs operator \(\mathcal{O}_{H}\sim\hat{\psi}^{\prime}\hat{\psi}\) and \(\Lambda_{t}\) is the scale where the above operator gets strong4.

Footnote 4: An alternative possibility is the partial compositeness framework, where the top-quark fields \((Q_{L},t_{R})\) mix linearly with composite fermionic operators \((\mathcal{O}_{L},\mathcal{O}_{R})\) as [21; 22],

\[\mathcal{L}\supset\frac{\lambda_{L}}{\Lambda_{t}^{d_{L}-5/2}}Q_{L}\mathcal{O}_{L}+\frac{\lambda_{R}}{\Lambda^{d_{R}-5/2}}t_{R}\mathcal{O}_{R}, \tag{27}\]

where \(d_{L,R}\) are the scaling dimensions of the fermionic operators \(\mathcal{O}_{L,R}\). The effective top Yukawa coupling is

\[y_{t}\sim\frac{\lambda_{L}\lambda_{R}}{\Lambda^{d_{L}+d_{R}-5}}, \tag{28}\]

which is relevant for \(d_{L,R}\lesssim 5/2\) [22].

The dimensionless constant \(\lambda_{t}\) is an order parameter in the UV theory, i.e. \(\lambda_{t}\sim 1\). The top mass can be written as,

\[m_{t}\equiv y_{t}v\sim 4\pi\lambda_{t}\Big{(}\frac{\Lambda}{\Lambda_{t}}\Big{)}^{d-1}\,v, \tag{29}\]

where \(\Lambda\equiv 4\pi f\) is the scale of confinement/condensation and \(v=f\sin\theta\) is the SM Higgs VEV. Above, \(\Lambda_{t}\) is the scale where the top quark (flavor) becomes strongly coupled.
In this framework, the flavor scale is naturally decoupled from the strong dynamics responsible for the electroweak symmetry breaking for \(\epsilon\equiv d-1\sim\mathcal{O}(0.3)\)[15], i.e.
\[\Lambda_{t}=\Lambda\!\left(\frac{4\pi v}{m_{t}}\right)^{1/\epsilon}\!. \tag{30}\]
For \(\epsilon=1/3\) the flavor scale can be pushed up to \(\mathcal{O}(10^{3})\Lambda\). For comparison, in a technicolor-like model the scaling dimension is \(d\sim 3\) and therefore the flavor scale is \(\mathcal{O}(5)\Lambda\), which would require \(\Lambda\gg\mathcal{O}(1)\) TeV in order to avoid the flavor constraints in the absence of a flavor symmetry in the UV theory. Therefore, conformal UV completions of Little Higgs models or composite Higgs models naturally decouple the flavor scale from the electroweak breaking scale for scaling dimension \(1\!<\!d\!<\!2\).

Figure 2: Collective symmetry breaking structure of the bestest conformal Little Higgs.
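The same exercise for the flavor scale of Eq. (30) (again a sketch with illustrative inputs \(m_{t}\simeq 173\) GeV and \(v\simeq 246\) GeV) roughly reproduces the contrast quoted above between \(\epsilon=1/3\) and the technicolor-like case \(d\sim 3\):

```python
from math import pi

v, mt = 0.246, 0.173   # TeV; illustrative inputs
ratio = 4 * pi * v / mt          # Lambda_EW / m_t with Lambda_EW = 4*pi*v
for d in (4/3, 3/2, 3):          # d = 1 + eps
    print(f"d = {d:.2f}:  Lambda_t / Lambda ~ {ratio ** (1 / (d - 1)):.1e}")   # Eq. (30)
```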
Note that the top-quark mass operator of the form (26) and other SM fermionic operators explicitly break the \(SU(4)\) symmetry protecting the Higgs boson and other pNGB masses. For instance, each SM \(SU(2)_{L}\) quark doublet, denoted as \(Q_{L}\), originates from a fundamental representation of \(SU(4)\). A second quark doublet, \(Q^{\prime}_{L}\), which transforms under \(SU(2)^{\prime}_{L}\), is added to complete the \(SU(4)_{L}\). The quark doublet \(Q^{\prime}_{L}\) carries \(U(1)_{Y}\) charges that are equivalent to the SM hypercharges of \(Q_{L}\). Corresponding to each \(Q^{\prime}_{L}\) there exists a field \(Q^{\prime}_{R}\) that forms a vectorlike pair. The SM \(SU(2)_{L}\) singlet quark fields, \(U_{R},D_{R}\), and their partners \(U^{\prime}_{R},D^{\prime}_{R}\) only carry \(U(1)_{Y}\) gauge quantum numbers. The quark Yukawa couplings can be derived from the following Lagrangian,
\[\mathcal{L}\supset\frac{1}{\Lambda^{d-1}}\left(\begin{array}{c}\bar{Q}_{L} \\ \tilde{Q}^{\prime}_{L}\end{array}\right)\Sigma\left(\begin{array}{c}0\\ 0\\ \lambda_{U}U_{R}\\ \lambda_{D}D_{R}\end{array}\right)+\lambda^{\prime}\Lambda\tilde{Q}^{\prime} _{L}Q^{\prime}_{R}, \tag{31}\]
where \(\Sigma\!\sim\!\psi\hat{\psi}\) transforms as \((\mathbf{4},\bar{\mathbf{4}})\) under the flavor symmetry \(SU(4)_{L}\times SU(4)_{R}\). Within this framework, one of the linear combinations of \(U_{R}\) and \(U^{\prime}_{R}\subset Q^{\prime}_{R}\) acquires a mass on the order of \(\Lambda\), while the other linear combination corresponds to the conventional SM singlet, \(u_{R}\). A similar situation applies to the down-type quarks. Importantly, the invariance of the first term under \(SU(4)\) ensures the cancellation of one-loop quadratic divergences originating from the quark sector. Extending this approach to all quark generations and charged leptons follows a similar path.
### Phenomenology
The low-energy phenomenology of this model closely resembles that of the 'bestest' Little Higgs model [11], also discussed in [23]. In this context, \(H_{1}\) represents the SM Higgs doublet, while the second Higgs doublet \(H_{2}\), as well as the scalar triplet \(\Delta_{1}\) and singlet \(\eta\), have substantial masses on the order of around \(\mathcal{O}(1)\) TeV. Similarly, the heavy gauge boson partners \(W^{\prime}\) and \(Z^{\prime}\) also exhibit masses of approximately \(\mathcal{O}(1)\) TeV. Additionally, the fermionic top-partners possess masses at the scale of \(f\). These scalar, vector, and fermionic states, with masses around \(\mathcal{O}(1)\) TeV, serve as promising targets for future LHC runs [23].
In this work, our objective does not include conducting a detailed phenomenological analysis, as it is highly model-dependent. Nevertheless, one distinctive characteristic of conformal Little Higgs models, as opposed to other UV completions of Little Higgs models, is the presence of broad resonances and continuum states associated with strong conformal dynamics at the scale \(\Lambda\). For values of \(f\) around 1 TeV, which correspond to a tuning order of roughly 10%, the strongly coupled states with broad widths typically emerge at a scale of approximately 10 TeV. Consequently, such states can be well-motivated targets for future colliders like FCC [24; 25].
## IV Comments and discussion
In the following comments, we argue that any known Little Higgs model based on the coset \(\mathcal{G}/\mathcal{H}\), where a subgroup \(\mathcal{F}\subset\mathcal{G}\) is weakly gauged, can be made a conformal Little Higgs model. The general idea is based on arguments presented in [26; 27; 28], see also [20], where it is pointed out that any known Little Higgs model based on the coset \(\mathcal{G}/\mathcal{H}\) with weakly gauged subgroup \(\mathcal{F}\) of \(\mathcal{G}\) has the same low-energy physics as a two-site nonlinear sigma model with global symmetry \(\mathcal{G}^{2}\) broken spontaneously to \(\mathcal{G}\), with the subgroups \(\mathcal{F}\times\mathcal{H}\) gauged, in the limit that the gauge coupling of \(\mathcal{H}\) is large. Such a two-site model can be UV-completed with bifundamental fermions \(\psi\) with QCD-like confining gauge dynamics of an \(SU(N_{c})\) or \(Sp(2N_{c})\) gauge symmetry, as shown in the following Moose diagram
(32)
A quick way to see this low-energy "duality" of a Little Higgs model, based on global symmetry breaking coset \(\mathcal{G}/\mathcal{H}\) with weakly gauged subgroup \(\mathcal{F}\) (say theory-A) to that of a chiral breaking of \(G^{2}\to G\) due to confining dynamics with gauged subgroup \(\mathcal{F}\times H\) in the limit of large \(\mathcal{H}\) gauge coupling (say theory-B), is counting the number of uneaten pNGBs in both theories. In theory A we have
\[N^{\text{A}}_{\text{pNGB}}=\Big{[}\underbrace{N(\mathcal{G})-N(\mathcal{H})}_{ \text{broken gen.}}\Big{]}-\underbrace{N(\mathcal{F})}_{\text{eaten}}, \tag{33}\]
whereas, in theory B we have
\[N^{\text{B}}_{\text{pNGB}}=\underbrace{N(\mathcal{G})}_{\text{broken gen.}}- \Big{[}\underbrace{N(\mathcal{H})+N(\mathcal{F})}_{\text{eaten}}\Big{]}. \tag{34}\]
Therefore the two theories have the same number of uneaten pNGBs. Moreover, in the limit of large \(\mathcal{H}\) gauge coupling in theory-B, the corresponding gauge bosons are heavy and can therefore be integrated out, similar to Hidden Local Symmetry. Hence the two theories give exactly the same low-energy dynamics, with uneaten pNGBs and light gauge bosons corresponding to \(\mathcal{F}\subset\mathcal{G}\).
In order to UV complete the two-site Little Higgs models (theory-B) with strongly coupled conformal dynamics one would need to extend the fermion sector of the
confining gauge theory \(SU(N_{c})\) (or \(Sp(2N_{c})\)) with additional bifundamental fermions \(\chi\)'s such that the theory lies in the conformal window with \(N_{f}/N_{c}\) satisfying (1). In order to obtain a theory that undergoes spontaneous chiral symmetry breaking due to fermion condensate, it needs to exit the conformal fixed point and flow to the confining phase at scale \(\Lambda\). Therefore, as argued above we would require a relevant deformation to the strongly coupled CFT in the form of (3). Such that at scale \(\Lambda\) the additional massive fermionic d.o.f. \(\chi\), responsible for conformal dynamics, can be integrated out. Hence the low-energy theory can be described as the above Moose diagram (32), where \({\cal G}^{2}\) symmetry is spontaneously broken to the diagonal subgroup \({\cal G}\) due to \(\hat{\psi}\psi\) condensate of confining gauge theory \(SU(N_{c})\) (or \(Sp(2N_{c})\)). The subgroups \({\cal F}\times{\cal H}\) are gauged and in the limit \({\cal H}\) gauge coupling is large, we get the low energy theory for a Little Higgs model based on \({\cal G}/{\cal H}\) coset (theory A).
It is relevant to make a comment here concerning the (large) hierarchy problem. Since all Higgs and Yukawa couplings dissolve, it implies that no scalars (pNGBs) propagate beyond the Little Higgs scale \(\Lambda\). Consequently, we are left with a chiral QCD-like theory, comprised of chiral fermions and non-abelian gauge fields. Symmetry breaking and condensation occur in the infrared (IR) following the logarithmic running of the gauge coupling. In other words, the symmetry breaking gives rise to the conformal anomaly (beta functions) below the little Higgs scale \(\Lambda\), leading to the emergence of pNGBs and other low-energy degrees of freedom. As a result, the underlying theory resembles more of a chiral QCD scenario with no hierarchy problem and dimensional transmutation. Meanwhile, the Little Higgs dynamics effectively addresses the little hierarchy between the electroweak scale \(v\) and the conformal breaking scale \(\Lambda\).
## V Conclusions
In this study, we have presented a UV completion for Little Higgs models based on strongly coupled conformal dynamics, extending their validity to arbitrarily high energy scales. Our proposal suggests that any Little Higgs model founded on the \({\cal G}/{\cal H}\) coset can achieve UV completion through robust conformal dynamics. The Little Higgs mechanism ensures the absence of quadratic divergences for the SM Higgs doublet and other pNGB states up to the conformal breaking scale \(\Lambda\). Beyond the conformal breaking scale, the theory displays conformal dynamics that persists up to an arbitrary high UV scale \(M_{\rm UV}\), effectively addressing the gauge hierarchy problem up to this UV scale.
Furthermore, this framework naturally decouples flavor dynamics from the strong dynamics responsible for breaking the electroweak symmetry. With regards to the scaling dimension of the fermion condensate, which corresponds to the SM Higgs operator, in the range \(1<d<2\), it has been demonstrated that the flavor scale can be exponentially separated from the confining scale \(\Lambda\).
To illustrate the practicality of our concept, we have presented a concrete UV completion of the 'bestest' Little Higgs model based on the coset \(SU(4)^{2}/SU(4)\). The low-energy phenomenology closely mirrors that of the canonical 'bestest' Little Higgs model. This model predicts the existence of a light SM Higgs doublet, a relatively heavy second Higgs doublet, a heavy \(SU(2)_{L}\) triplet, and a singlet. The gauge partners of the SM gauge bosons possess relatively heavy masses, typically on the order of \(\Lambda/(4\pi)\).
We have also discussed a potential UV completion for any known Little Higgs model using strongly coupled conformal dynamics. The key features of this framework include the natural separation of the flavor problem and the possibility of detecting signatures of strong conformal dynamics at future colliders through broad resonances and continuum states at the conformal breaking scale \(\Lambda\).
|
2309.06583 | Uncovering the spin ordering in magic-angle graphene via edge state
equilibration | Determining the symmetry breaking order of correlated quantum phases is
essential for understanding the microscopic interactions in their host systems.
The flat bands in magic angle twisted bilayer graphene (MATBG) provide an
especially rich arena to investigate such interaction-driven ground states, and
while progress has been made in identifying the correlated insulators and their
excitations at commensurate moire filling factors, the spin-valley
polarizations of the topological states that emerge at high magnetic field
remain unknown. Here we introduce a new technique based on twist-decoupled van
der Waals layers that enables measurements of their electronic band structure
and, by studying the backscattering between counter-propagating edge states,
determination of the relative spin polarization of their edge modes. Applying
this method to twist-decoupled MATBG and monolayer graphene, we find that the
broken-symmetry quantum Hall states that extend from the charge neutrality
point in MATBG are spin-unpolarized at even integer filling factors. The
measurements also indicate that the correlated Chern insulator emerging from
half filling of the flat valence band is spin-unpolarized, but suggest that its
conduction band counterpart may be spin-polarized. Our results constrain models
of spin-valley ordering in MATBG and establish a versatile approach to study
the electronic properties of van der Waals systems. | Jesse C. Hoke, Yifan Li, Julian May-Mann, Kenji Watanabe, Takashi Taniguchi, Barry Bradlyn, Taylor L. Hughes, Benjamin E. Feldman | 2023-09-12T20:10:51Z | http://arxiv.org/abs/2309.06583v2 | # Uncovering the spin ordering in magic-angle graphene via edge state equilibration
###### Abstract
Determining the symmetry breaking order of correlated quantum phases is essential for understanding the microscopic interactions in their host systems. The flat bands in magic-angle twisted bilayer graphene (MATBG) [1; 2; 3] provide an especially rich arena to investigate such interaction-driven ground states, and while progress has been made in identifying the correlated insulators and their excitations at commensurate moire filling factors [1; 4; 5; 6; 7; 8; 9; 10; 11], the spin-valley polarizations of the topological states that emerge at high magnetic field [2; 4; 5; 7; 11; 12; 13; 14; 15; 16; 17; 18; 19] remain unknown. Here we introduce a new technique based on twist-decoupled van der Waals layers that enables measurements of their electronic band structure and, by studying the backscattering between counter-propagating edge states, determination of the relative spin polarization of their edge modes. Applying this method to twist-decoupled MATBG and monolayer graphene, we find that the broken-symmetry quantum Hall states that extend from the charge neutrality point in MATBG are spin-unpolarized at even integer filling factors. The measurements also indicate that the correlated Chern insulator emerging from half filling of the flat valence band is spin-unpolarized, but suggest that its conduction band counterpart may be spin-polarized. Our results constrain models of spin-valley ordering in MATBG and establish a versatile approach to study the electronic properties of van der Waals systems.
## I Introduction
The relative twist angle between adjacent van der Waals layers provides a powerful tuning knob to control electronic properties. In the limit of large interlayer twist, the misalignment leads to a mismatch in the momentum and/or internal quantum degrees of freedom of low-energy states in each layer, resulting in effectively decoupled electronic systems [20; 21; 22; 23; 24; 25; 26]. This decoupling can be sufficiently pronounced to realize independently tunable quantum Hall bilayers that support artificial quantum spin Hall states [21] or excitonic condensation [25; 26]. In the opposite regime of low twist angle, a moire superlattice develops, and can lead to extremely flat electronic bands with prominent electron-electron interaction effects. The archetypal low-twist example is magic-angle twisted bilayer graphene (MATBG) [1; 2; 3], which has been shown to support broken-symmetry quantum Hall states [2; 7; 11; 12] as well as correlated Chern insulators (ChIs) at high magnetic fields [4; 5; 11; 13; 14; 15; 16; 17; 18; 19]. However, a full understanding of the nature of these states, including their spin and valley polarization, has so far remained elusive.
Combining large and small interlayer twists in a single device provides a new approach to probe microscopic details of correlated ground states in moire systems [27; 28; 29]. Such a device would yield electronically decoupled flat and dispersive bands which can be used to interrogate each other. In some ways, this is reminiscent of other two-dimensional heterostructures which host bands of differing character. One notable example is mirror-symmetric magic-angle twisted trilayer graphene (MATTG) and its multilayer generalizations [30; 31; 32; 33; 34; 35; 36], which can be decomposed into flat MATBG-like bands that coexist with more dispersive bands. However, these bands hybridize at non-zero displacement field, whereas a twist-decoupled architecture provides fully independent bands. This enables control over the relative filling of light and heavy carriers, including in a bipolar (electron-hole) regime. Crucially, in a perpendicular magnetic field, such a device can realize a quantum Hall bilayer with co- or counter-propagating edge modes. Because the inter-edge mode coupling depends on their respective internal degrees of freedom, the effects of edge backscattering on transport can be used to identify spin/valley flavor polarization of the flat moire bands, extending previous work that used edge equilibration to do so in monolayer graphene (MLG) [37].
Here we report transport measurements of a dual-gated, twisted trilayer graphene device that realizes electrically decoupled MATBG and MLG subsystems. By tracking features in the resistance as a function of carrier
density and displacement field, we demonstrate independently tunable flat and dispersive bands and show that transport measurements can be used to simultaneously determine the thermodynamic density of states in each subsystem. Furthermore, in the regime of counter-propagating MLG and MATBG edge modes in a magnetic field, we use longitudinal and non-local resistance measurements to infer the spin order within the MATBG subsystem - both for broken-symmetry quantum Hall states emanating from the charge neutrality point (CNP), and for the primary sequence of ChIs. Our work clarifies the microscopic ordering of correlated states in MATBG and demonstrates a powerful generic method to probe internal quantum degrees of freedom in two-dimensional electron systems.
## II Twist-decoupled flat and dispersive bands
An optical image of the device is shown in Fig. 1**a**, with a side view of individual layers schematically illustrated in Fig. 1**b**. As we demonstrate below, the bottom two graphene layers have a twist of \(1.11^{\circ}\) and display behavior consistent with typical MATBG samples, while the topmost graphene layer is electrically decoupled because of the larger interlayer twist of approximately \(5-6^{\circ}\) (see Methods). The whole device is encapsulated in hexagonal boron nitride (hBN) and has graphite top and bottom gates. This dual gated structure allows us to independently tune the total carrier density \(n_{\text{tot}}=(C_{b}V_{b}+C_{t}V_{t})/e\) and applied displacement field \(D=(C_{t}V_{t}-C_{b}V_{b})/(2\epsilon_{0})\), where \(C_{b(t)}\) and \(V_{b(t)}\) are the capacitance and voltage of the bottom (top) gate, \(e\) is the electron charge, and \(\epsilon_{0}\) is the vacuum permittivity. The applied displacement field shifts the relative energies of states in each subsystem and therefore controls how the total carrier density is distributed between them (Fig. 1**c**).
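As an aside, the gate-to-density conversion above is a simple linear map; a minimal Python sketch is given below, assuming illustrative parallel-plate capacitances (the dielectric thickness and example voltages are hypothetical, not the device parameters).

```python
E_CHARGE = 1.602176634e-19   # C
EPS_0 = 8.8541878128e-12     # F/m

def gates_to_n_and_D(V_b, V_t, C_b, C_t):
    """Total carrier density n_tot = (C_b V_b + C_t V_t)/e  [m^-2]
    and displacement field D = (C_t V_t - C_b V_b)/(2 eps_0)  [V/m]."""
    n_tot = (C_b * V_b + C_t * V_t) / E_CHARGE
    D = (C_t * V_t - C_b * V_b) / (2 * EPS_0)
    return n_tot, D

# Hypothetical ~30 nm hBN gate dielectrics (eps_r ~ 3), for illustration only.
C_gate = 3.0 * EPS_0 / 30e-9
n_tot, D = gates_to_n_and_D(V_b=1.0, V_t=-1.0, C_b=C_gate, C_t=C_gate)
print(n_tot * 1e-4, "cm^-2;", D * 1e-9, "V/nm")
```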
We first describe electronic transport through the device at zero magnetic field. The longitudinal resistance \(R_{xx}\) is largest along a curve at low/moderate \(D\), with multiple fainter, S-shaped resistive features extending outward, i.e. approximately transverse to it (Fig. 1**d**). This phenomenology arises from electronic transport in parallel through the MLG and MATBG subsystems. Specifically, the strongly resistive behavior occurs when the MLG is at its CNP (solid black line in Fig. 1**d**).
Figure 1: **Twist-decoupled monolayer graphene (MLG) and magic-angle twisted bilayer graphene (MATBG).** **a**, Optical image of the device. The scale bar is 2 \(\mu\)m. **b**, Schematic of the device structure and interlayer angles. The twisted trilayer graphene is encapsulated in hexagonal boron nitride (hBN) and has graphite top and bottom gates. **c**, Band diagram of the combined MLG-MATBG system. The displacement field \(D\) modifies the energies of states in each subsystem and therefore tunes the relative chemical potential \(\mu_{i}\) of each subsystem \(i\) at fixed total carrier density \(n_{\text{tot}}\). **d**, **e**, Longitudinal resistance \(R_{xx}\) as a function of \(n_{\text{tot}}\) and \(D\), at zero magnetic field \(B\) and at \(B=2\) T, respectively. Black solid (white dashed) lines denote where the MLG (MATBG) is at its charge neutrality point (CNP). Parentheses indicate which carrier types are present in the MLG and MATBG, respectively: \(e\) indicates electrons and \(h\) indicates holes. **f**, \(R_{xx}\) as a function of moiré filling factor \(s\) at \(B=0\) and at various temperatures \(T\) where the MLG is at its CNP (solid black curve in **d**). **g**, \(\mu_{\text{MATBG}}\) as a function of \(s\) at \(B=0\), as extracted from **d** and analogous data at other temperatures.
Relatively higher peaks in \(R_{xx}\) along this curve reflect insulating states in MATBG. Analogously, when the carrier density in MATBG is fixed to an insulating state, \(R_{xx}\) remains elevated even as the carrier density in the MLG is adjusted. This leads to the resistive S-shaped curves (such as the dashed white line in Fig. 1**d**; see discussion below).
The peaks in \(R_{xx}\) centered near \(n_{\rm tot}=\pm 2.8\times 10^{12}\) cm\({}^{-2}\) correspond to the single-particle superlattice gaps at moire filling factor (number of electrons per unit cell) \(s=\pm 4\). From these densities, we extract a twist angle of \(\theta=1.11^{\circ}\) between the bottom two layers, and similar measurements using different contact pairs show that there is little twist angle disorder in these two layers (Fig. ED1). Intermediate resistance peaks are also present at \(s=0\), \(1\), \(\pm 2\), and \(3\) (Fig. 1**d**,**f**), consistent with the correlated insulators that have been previously observed in MATBG [12; 38; 39; 40; 7; 1]. At higher temperatures, another peak develops near \(s=-1\) (Fig. ED2), matching prior reports of a Pomeranchuk-like effect in MATBG [6; 41].
Our characterization demonstrates the ability to independently tune the carrier density in each subsystem, and hence shows that the subsystems are effectively decoupled. This further allows the MLG to act as a thermodynamic sensor for the MATBG, similar to schemes in which a sensing graphene flake is isolated by a thin hBN spacer from the target sample [6; 32; 42; 5]. By tracking the resistive maxima when the MLG is at its CNP, and using a model that accounts for screening of electric fields by each layer (Supplementary Sec. 1), we extract the MATBG chemical potential \(\mu_{\rm MATBG}\) (Fig. 1**g**). We find a total change of chemical potential across the flat bands of \(\delta\mu\approx 40\) meV, with non-monotonic dependence on filling that matches previous reports of a sawtooth in inverse compressibility [6; 43; 41; 13; 18]. Similarly, we can determine the MLG chemical potential as a function of its carrier density \(\mu_{\rm MLG}(n_{\rm MLG})\) by fitting to the S-shaped resistive features in Fig. 1**d**, which occur at fixed \(s\) in MATBG (Supplementary Sec. 1). These match the scaling \(\mu_{\rm MLG}\propto{\rm sgn}(n_{\rm MLG})|n_{\rm MLG}|^{1/2}\) that is expected for the Dirac dispersion of graphene. We observe similar behavior in a second trilayer device, where MLG-like states are decoupled from a bilayer graphene moire system with
Figure 2: **Spin polarization of MATBG quantum Hall states near the CNP.****a**, Schematic illustration of two possible scenarios for a single pair of counter-propagating edge modes. If the spins of each edge mode are aligned (top), backscattering is allowed (orange circle). Backscattering is suppressed when the spins are anti-aligned (bottom), leading to quantum spin Hall-like behavior with \(R_{xx}=h/2e^{2}\). **b**, \(R_{xx}\) as a function of the total filling factor \(\nu_{\rm tot}=\nu_{\rm MLG}+\nu_{\rm MATBG}\) and \(D\) at \(B=8\) T. **c**, **d**\(R_{xx}\) and \(R_{\rm NL}\), respectively measured in the configurations shown in the top left insets, as a function of \(D\) when \(\nu_{\rm tot}=0\). Filling factors of each subsystem for each regime of \(D\) are indicated in the bottom inset of **c**. Insets in **d** schematically represent the inferred relative spin orientations (black arrows) of edge modes in MLG (blue arrows) and MATBG (purple arrows), with orange circles indicating backscattering between a given pair. **e**, **f**, \(R_{xx}\) and \(R_{\rm NL}\) for \(\nu_{\rm MATBG}=\pm 2/\mp 2\) (red and blue, respectively) averaged over \(0.1<|D|<0.25\) V nm\({}^{-1}\). Error bars correspond to one standard deviation. **g**, Schematic diagram of CNP MATBG Landau levels (LLs) and their spin characters. Gaps between LLs are depicted schematically and do not represent experimentally measured field dependence.
a \(1.3^{\circ}\) twist angle (Fig. ED3), suggesting this is a generic phenomenon that is widely applicable in multilayer heterostructures.
Electronic decoupling is also evident when we apply a perpendicular magnetic field \(B\), where the energy spectrum of MLG consists of Landau levels (LLs), and a Hofstadter butterfly spectrum develops in MATBG. Figure 1**e** shows \(R_{xx}\) as a function of \(n_{\rm tot}\) and \(D\) at \(B=2\) T, revealing staircase-like patterns which reflect crossings of the MLG LLs and MATBG states (Hall resistance \(R_{xy}\) is plotted in Fig. ED4). Vertical features at constant \(n_{\rm tot}\) occur when the MLG is in a quantum Hall state; their extent (in \(D\)) is proportional to the size of the gap between LLs. As the displacement field tunes the relative energies of states in each subsystem, transitions occur when graphene LLs are populated or emptied. These cause each feature associated with a MATBG state to shift horizontally in density by the amount needed to fill a fourfold degenerate LL, \(n_{\rm LL}=4eB/h\), where \(h\) is Planck's constant and the factor of four accounts for the spin and valley degrees of freedom (e.g., see dashed white line in Fig. 1**e**).
## Quantum Hall edge state equilibration
In a magnetic field, the decoupled MLG and MATBG realize a quantum Hall bilayer in which either carrier type (electron or hole) can be stabilized in either subsystem. This results in co- (counter-)propagating edge modes when the respective carrier types are the same (different). Additionally, because the device is etched into a Hall bar after stacking, the edges of MLG and MATBG are perfectly aligned. Crucially, in the counter-propagating regime, the measured resistance encodes information about the efficiency of scattering between the edge modes in each subsystem (Supplementary Sec. 2), which depends on their internal quantum degrees of freedom. We expect that atomic scale roughness at the etched edge of the device enables large momentum transfer, and therefore anticipate efficient coupling irrespective of valley (in MLG and MATBG) and moire valley (in MATBG). However, assuming the absence of magnetic disorder, edge states having different spins should remain decoupled, whereas those with the same spin can backscatter and exhibit increased longitudinal resistance (Fig. 2**a**). Probing \(R_{xx}\) therefore allows us to deduce the relative spin polarization of edge states in MLG and MATBG.
We first focus on low carrier density and high magnetic field, where the behavior of each subsystem \(i\) is well described by quantum Hall states having filling factors \(\nu_{i}=n_{i}h/eB\) emanating from their respective CNPs. A sharp peak in \(R_{xx}\) emerges at combined filling factor \(\nu_{\rm tot}=0\), flanked by several quantum Hall states at other integer \(\nu_{\rm tot}\) (Fig. 2**b**). These features exhibit a series of \(D\)-field tuned transitions as the relative filling of MLG and MATBG changes. The data encompass MLG states with \(|\nu_{\rm MLG}|\leq 2\). Importantly, prior work has shown that MLG edge modes at \(\nu_{\rm MLG}=\pm 1\) have opposite spin and valley quantum numbers, whereas those at \(\nu_{\rm MLG}=\pm 2\) are spin unpolarized [37]. Combining this information with the measured resistance enables us to determine the spin polarization of the MATBG quantum Hall states with \(|\nu_{\rm MATBG}|\leq 4\).
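For reference, the filling-factor bookkeeping used throughout this section follows directly from \(\nu_{i}=n_{i}h/eB\) and the fourfold Landau-level degeneracy \(n_{\rm LL}=4eB/h\) quoted earlier; a small sketch with illustrative numbers only:

```python
H_PLANCK = 6.62607015e-34    # J s
E_CHARGE = 1.602176634e-19   # C

def filling_factor(n, B):
    """nu = n h / (e B) for a carrier density n [m^-2] in a subsystem at field B [T]."""
    return n * H_PLANCK / (E_CHARGE * B)

def fourfold_LL_density(B):
    """Density filling one fourfold (spin x valley) degenerate Landau level, n_LL = 4 e B / h [m^-2]."""
    return 4.0 * E_CHARGE * B / H_PLANCK

B = 8.0                                   # T (illustrative)
print(fourfold_LL_density(B) * 1e-4)      # ~7.7e11 cm^-2 per fourfold LL at 8 T
print(filling_factor(1.55e16, B))         # ~8 for an illustrative density of 1.55e12 cm^-2
```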
When \(\nu_{\rm tot}=0\), MLG and MATBG have equal and opposite filling, and \(R_{xx}\) approaches different values depending on the number of counter-propagating edge states (Fig. 2**c**). At \(D=0\), each subsystem is in an insulating, \(\nu=0\) broken-symmetry state. Here, no bulk conduction or edge modes are anticipated, and we observe a large resistance. Near \(|D|\approx 0.05\) V/nm, \(\nu_{\rm MLG}/\nu_{\rm MATBG}=\pm 1/\mp 1\), and \(R_{xx}\) reaches a minimum near \(h/2e^{2}\) (Fig. 2**c**). This phenomenology can be explained by a pair of counter-propagating edge modes with opposite spins, analogous to helical edge modes observed in large-angle twisted bilayer graphene [21]. This interpretation is further corroborated by similar behavior in another contact pair (Supplementary Sec. 3), and measurements of non-local resistance \(R_{\rm NL}\) (Fig. 2**d**). Indeed, the pronounced non-local resistance signal at \(\nu_{\rm MLG}/\nu_{\rm MATBG}=\pm 1/\mp 1\) indicates that transport is dominated by edge modes (see Supplementary Sec. 4 for a discussion of bulk effects). This is corroborated by the value of \(R_{\rm NL}\), which is suppressed toward \(h/5e^{2}\), the quantized value predicted from the Landauer-Buttiker formula for counter-propagating edge states in this contact configuration (Supplementary Sec. 2). We therefore conclude that similar to MLG, MATBG has a filled spin down (up) electron- (hole-)like LL at \(\nu_{\rm MATBG}=1(-1)\).
Beyond \(|D|\approx 0.08\) V/nm, where \(\nu_{\rm MLG}/\nu_{\rm MATBG}=\pm 2/\mp 2\), we observe larger resistances \(R_{xx}>h/2e^{2}\) and \(R_{\rm NL}>h/5e^{2}\) (Fig. 2**c**,**d**). This suggests that backscattering occurs for both pairs of edge modes: if both MATBG edge states had identical spin, one counter-propagating pair would remain decoupled and would lead to quantized resistance \(R_{xx}=h/2e^{2}\) and \(R_{\rm NL}=h/5e^{2}\). (Supplementary Sec. 2). A resistance above this value, as well as the large increase in resistance relative to \(\nu_{\rm MLG}/\nu_{\rm MATBG}=\pm 1/\mp 1\), therefore both indicate that the edge states at \(\nu_{\rm MATBG}=\pm 2\) are spin unpolarized (see Supplementary Sec. 3-4 for additional measurements and discussion of alternative interpretations which we rule out as unlikely). Based on the above observations, we can deduce the spin polarization of the edge modes of the MATBG LLs emanating from its CNP as illustrated in Fig. 2**g**.
The observed spin sequence provides insight into the relative magnitudes of different symmetry breaking effects in MATBG. When \(\nu_{\rm MLG}/\nu_{\rm MATBG}=\pm 2/\mp 2\), \(R_{xx}\) (\(R_{\rm NL}\)) remains above \(h/2e^{2}\) (\(h/5e^{2}\)) for all measured
magnetic fields \(B>2\) T (Fig. 3**e-f**). This indicates that the MATBG states remain spin-unpolarized, and thus that Zeeman splitting is not dominant. We also note that moire valley splitting [13], which arises in the presence of \(M_{y}\) symmetry breaking, is necessary to produce a spin unpolarized state at \(\nu_{\rm MATBG}=\pm 2\) (see Supplementary Sec. 5).
## Addressing spin polarization of the Chern insulators
In addition to broken-symmetry quantum Hall states emerging from the CNP, ChIs extrapolating to nonzero \(s\) are evident in Landau fan measurements of \(R_{xx}\) and \(R_{xy}\) at fixed top gate voltages of \(\pm 3\) V (Fig. 3). At these values, the MLG filling factor is \(\nu_{\rm MLG}=\pm 2\), respectively, at high fields. Consequently, both the Chern number of the primary sequence of quantum Hall states in MATBG (black lines in Fig. 3**c,f**) emerging from \(s=0\), and the ChIs (colored lines) are offset by \(\pm 2\). After accounting for this shift, the ChIs that we observe are consistent with the primary sequence \(|t+s|=4\) commonly reported in MATBG, where \(t\) is the Chern number of the MATBG subsystem [4; 5; 11; 13; 14; 15; 16; 17]. Below, we focus primarily on the \((t,s)=(\pm 2,\pm 2)\) ChIs, which exhibit near-zero \(R_{xx}\) and quantized \(R_{xy}\) in the co-propagating regime (Fig. ED5). Here, ChI edge mode chirality is determined by the sign of \(t\): states with \(t>0\) (\(t<0\)) have electron- (hole-)like edge modes.
Tuning into the bipolar (electron-hole) regime allows us to realize counter-propagating edge modes from the MATBG ChIs and the MLG quantum Hall states. We apply the edge state equilibration analysis to determine the spin polarization of the ChIs in MATBG. For the \((t,s)=(-1,-3)\) ChI, we find a sharp resistive feature that occurs only when \(\nu_{\rm MLG}=1\) (Fig. 4**a-b**), i.e. when there is one pair of counter-propagating edge states. The resistance grows with increasing \(B\) and reaches values significantly larger than \(h/2e^{2}\) (Fig. 4**d**). This indicates
Figure 3: **Landau fans demonstrating correlated Chern Insulators (ChIs).****a**, **b**, \(R_{xx}\) and \(R_{xy}\) as a function of \(s\) and \(B\) at fixed top gate voltage \(V_{t}=3\) V. **c**, Wannier diagram indicating the strongest quantum Hall and ChI states determined from **a** and **b**. The Chern numbers \(t\) of the MATBG states are labeled. At high fields, the total Chern numbers of each state are offset by 2 because \(\nu_{\rm MLG}=2\). Black, red, orange, and blue lines correspond to states with zero-field intercepts \(s=0\), \(s=|1|\), \(s=|2|\), and \(s=|3|\), respectively. For states with \(s=0\), \(t\equiv\nu_{\rm MATBG}\). Black dashed lines label the MATBG symmetry broken quantum Hall states \(-4<\nu_{\rm MATBG}<4\). **d**, **e**, **f**, Same as **a**, **b**, **c**, but for \(V_{t}=-3\), where \(\nu_{\rm MLG}=-2\) at high fields. Data collected at \(T\approx 300\) mK.
strong backscattering between edge modes, and hence that both have the same spin (inset, Fig. 4**d**). We conclude that the first flavor to occupy the MATBG Hofstadter subbands (see Supplementary Sec. 5) is spin down, consistent with expectations based on the Zeeman effect.
A resistive state also occurs when \((t,s)=(-2,-2)\) and \(\nu_{\text{MLG}}=2\) (Fig. 4**a-b**). We observe \(R_{xx}>h/2e^{2}\) that grows with increasing \(B\) (Fig. 4**b,e**; Fig. ED6), indicating efficient backscattering between both pairs of counter-propagating edge modes. We obtain consistent results from both the non-local resistance (Fig. 4**f**) and \(R_{xx}\) measurements of a second contact pair (Supplementary Sec. 3). We therefore conclude that the \((-2,-2)\) ChI in MATBG is spin unpolarized (red inset, Fig. 4**f**).
In contrast, we observe more moderate resistance for the \((t,s)=(2,2)\) ChI in MATBG when \(\nu_{\text{MLG}}=-2\) (Fig. 4**a,c**). In measurements of \(R_{xx}\) (\(R_{\text{NL}}\)) at fixed \(B\), the resistance of this state saturates near \(h/2e^{2}\) (\(h/5e^{2}\)) at high \(B\) (Fig. 4**e**, **f**), with similar near-quantized \(R_{xx}\) in a Landau fan measurement (Fig. ED6). Together, these results demonstrate that there is only partial coupling between edge modes. The data are consistent with one pair of decoupled, counter-propagating edge modes, and another pair having allowed backscattering. This would naturally arise if the \((t,s)=(2,2)\) ChI in MATBG is spin polarized (blue inset, Fig. 4**f**). The data therefore suggest a spin-polarized ground state may be favored (see Supplementary Sec. 4,5 for further discussion).
## Conclusion
In conclusion, we have realized a twisted graphene multilayer consisting of electrically decoupled MATBG and MLG subsystems. Even though the layers are in contact, the twist-decoupled architecture provides a new method to extract thermodynamic properties and probe internal quantum degrees of freedom in moire systems, and we
Figure 4: **Spin polarization of the ChIs in MATBG.****a**, \(R_{xx}\) as a function of \(n_{\text{tot}}\) and \(D\) at \(B=8\) T (see Fig. ED7 for equivalent map in a non-local contact configuration). **b**, **c**, Zoom-ins of \(R_{xx}\) near the \((t,s)=(\pm 2,\pm 2)\) ChIs in the counter-propagating regime, corresponding to the left and right dashed boxes in **a**, respectively. The \(x\)-axis is relabeled in terms of \(s\) to show that the states of interest occur at fixed \(s\). Specific combinations of MATBG and MLG states in each panel are labeled. **d**, \(R_{xx}\) for the \(\nu_{\text{MLG}}=1,(t,s)=(-1,-3)\) state as a function of \(B\). **e**, **f**, \(R_{xx}\) and \(R_{\text{NL}}\), respectively measured in the configurations shown in the top left insets, for \(\nu_{\text{MLG}}=\pm 2,(t,s)=(\mp 2,\mp 2)\) states (red and blue, respectively) as a function of \(B\). Data are averaged over \(0.325<|D|<0.525\) V nm\({}^{-1}\). Error bars correspond to one standard deviation. Insets in **f** schematically represent the inferred relative spin orientations (black arrows) of edge modes in MLG (blue arrows) and MATBG (purple arrows), with orange circles indicating backscattering between a given pair.
anticipate its extension to other van der Waals heterostructures. Looking forward, this device geometry also represents the most extreme limit of dielectric screening of interactions [38; 39; 40] in which a tunable screening layer is immediately adjacent to the system of interest. More generally, it provides a natural arena to explore Kondo lattices [44; 45] with independently tunable densities of itinerant electrons and local moments, as well as an opportunity to study Coulomb drag between adjacent layers [46].
|
2309.16948 | Denoising Diffusion Bridge Models | Diffusion models are powerful generative models that map noise to data using
stochastic processes. However, for many applications such as image editing, the
model input comes from a distribution that is not random noise. As such,
diffusion models must rely on cumbersome methods like guidance or projected
sampling to incorporate this information in the generative process. In our
work, we propose Denoising Diffusion Bridge Models (DDBMs), a natural
alternative to this paradigm based on diffusion bridges, a family of processes
that interpolate between two paired distributions given as endpoints. Our
method learns the score of the diffusion bridge from data and maps from one
endpoint distribution to the other by solving a (stochastic) differential
equation based on the learned score. Our method naturally unifies several
classes of generative models, such as score-based diffusion models and
OT-Flow-Matching, allowing us to adapt existing design and architectural
choices to our more general problem. Empirically, we apply DDBMs to challenging
image datasets in both pixel and latent space. On standard image translation
problems, DDBMs achieve significant improvement over baseline methods, and,
when we reduce the problem to image generation by setting the source
distribution to random noise, DDBMs achieve comparable FID scores to
state-of-the-art methods despite being built for a more general task. | Linqi Zhou, Aaron Lou, Samar Khanna, Stefano Ermon | 2023-09-29T03:24:24Z | http://arxiv.org/abs/2309.16948v3 | # Denoising Diffusion Bridge Models
###### Abstract
Diffusion models are powerful generative models that map noise to data using stochastic processes. However, for many applications such as image editing, the model input comes from a distribution that is not random noise. As such, diffusion models must rely on cumbersome methods like guidance or projected sampling to incorporate this information in the generative process. In our work, we propose Denoising Diffusion Bridge Models (DDBMs), a natural alternative to this paradigm based on _diffusion bridges_, a family of processes that interpolate between two paired distributions given as endpoints. Our method learns the score of the diffusion bridge from data and maps from one endpoint distribution to the other by solving a (stochastic) differential equation based on the learned score. Our method naturally unifies several classes of generative models, such as score-based diffusion models and OT-Flow-Matching, allowing us to adapt existing design and architectural choices to our more general problem. Empirically, we apply DDBMs to challenging image datasets in both pixel and latent space. On standard image translation problems, DDBMs achieve significant improvement over baseline methods, and, when we reduce the problem to image generation by setting the source distribution to random noise, DDBMs achieve comparable FID scores to state-of-the-art methods despite being built for a more general task.
## 1 Introduction
Diffusion models are a powerful class of generative models which learn to reverse a diffusion process mapping data to noise (Sohl-Dickstein et al., 2015; Song and Ermon, 2019; Ho et al., 2020; Song et al., 2020). For image generation tasks, they have surpassed GAN-based methods (Goodfellow et al., 2014) and achieved a new state-of-the-art for perceptual quality (Dhariwal and Nichol, 2021). Furthermore, these capabilities have spurred the development of modern text-to-image generative AI systems (Ramesh et al., 2022).
Despite these impressive results, standard diffusion models are ill-suited for other tasks. In particular, the diffusion framework assumes that the prior distribution is random noise, which makes it difficult to adapt to tasks such as image translation, where the goal is to map between pairs of images. As such, one resorts to cumbersome techniques, such as conditioning the model (Ho and Salimans, 2022; Saharia et al., 2021) or manually altering the sampling procedure (Meng et al., 2022; Song et al., 2020). These methods are not theoretically principled and map in one direction (typically from corrupted to clean images), losing the cycle consistency condition (Zhu et al., 2017).
Instead, we consider methods which directly model a transport between two arbitrary probability distributions. This framework naturally captures the desiderata of image translation, but existing methods fall short empirically. For instance, ODE-based flow-matching methods (Lipman et al., 2023; Albergo and Vanden-Eijnden, 2023; Liu et al., 2022), which learn a deterministic path between two arbitrary probability distributions, have mainly been applied to image generation problems and have not been investigated for image translation. Furthermore, on image generation, ODE methods have not achieved the same empirical success as diffusion models. Schrödinger Bridge models (De Bortoli et al., 2021) are another type of model, which instead learn an entropic optimal transport between two probability distributions. However, these rely on expensive iterative approximation methods and have also found limited empirical use. More recent extensions including Diffusion Bridge Matching (Shi et al., 2023; Peluchetti, 2023) similarly require expensive iterative calculations.
In our work, we seek a scalable alternative that unifies diffusion-based unconditional generation methods and transport-based distribution translation methods, and we name our general framework Denoising Diffusion Bridge Models (DDBMs). We consider a reverse-time perspective of _diffusion bridges_, a diffusion process conditioned on given endpoints, and use this perspective to establish a general framework for distribution translation. We then note that this framework subsumes existing generative modeling paradigms such as score matching diffusion models (Song et al., 2020) and flow matching optimal transport paths (Albergo and Vanden-Eijnden, 2023; Lipman et al., 2023; Liu et al., 2022a). This allows us to reapply many design choices to our more general task. In particular, we use this to generalize and improve the architecture pre-conditioning, noise schedule, and model sampler, minimizing input sensitivity and stabilizing performance. We then apply DDBMs to high-dimensional images using both pixel and latent space based models. For standard image translation tasks, we achieve better image quality (as measured by FID (Heusel et al., 2017)) and significantly better translation faithfulness (as measured by LPIPS (Zhang et al., 2018) and MSE). Furthermore, when we reduce our problem to image generation, we match standard diffusion model performance.
## 2 Preliminaries
Recent advances in generative models have relied on the classical notion of transporting a data distribution \(q_{\mathrm{data}}(\mathbf{x})\) gradually to a prior distribution \(p_{\mathrm{prior}}(\mathbf{x})\)(Villani, 2008). By learning to reverse this process, one can sample from the prior and generate realistic samples.
### Generative Modeling with Diffusion Models
**Diffusion process.** We are interested in modeling the distribution \(q_{\mathrm{data}}(\mathbf{x})\), for \(\mathbf{x}\in\mathbb{R}^{d}\). We do this by constructing a diffusion process, which is represented by a set of time-indexed variables \(\{\mathbf{x}_{t}\}_{t=0}^{T}\) such that \(\mathbf{x}_{0}\sim p_{0}(\mathbf{x}):=q_{\mathrm{data}}(\mathbf{x})\) and \(\mathbf{x}_{T}\sim p_{T}(\mathbf{x}):=p_{\mathrm{prior}}(\mathbf{x})\). Here \(q_{\mathrm{data}}(\mathbf{x})\) is the initial "data" distribution and \(p_{\mathrm{prior}}(\mathbf{x})\) is the final "prior" distribution. The process can be modeled as the solution to the following SDE
\[d\mathbf{x}_{t}=\mathbf{f}(\mathbf{x}_{t},t)dt+g(t)d\mathbf{w}_{t} \tag{1}\]
where \(\mathbf{f}:\mathbb{R}^{d}\times[0,T]\rightarrow\mathbb{R}^{d}\) is a vector-valued _drift_ function, \(g:[0,T]\rightarrow\mathbb{R}\) is a scalar-valued _diffusion_ coefficient, and \(\mathbf{w}_{t}\) is a Wiener process. Following this diffusion process forward in time constrains the final variable \(\mathbf{x}_{T}\) to follow the distribution \(p_{\mathrm{prior}}(\mathbf{x})\). The reverse of this process is given by
\[d\mathbf{x}_{t}=\Big{[}\mathbf{f}(\mathbf{x}_{t},t)-g(t)^{2}\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t})\Big{]}dt+g(t)d\mathbf{w}_{t} \tag{2}\]
where \(p(\mathbf{x}_{t}):=p(\mathbf{x}_{t},t)\) is the marginal distribution of \(\mathbf{x}_{t}\) at time \(t\). Furthermore, one can derive an equivalent deterministic process called the probability flow ODE (Song et al., 2020), which has the same marginal distributions:
\[d\mathbf{x}_{t}=\Big{[}\mathbf{f}(\mathbf{x}_{t},t)-\frac{1}{2}g(t)^{2} \nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t})\Big{]}dt \tag{3}\]
In particular, one can draw \(\mathbf{x}_{T}\sim p_{\mathrm{prior}}(\mathbf{x})\) and sample from \(q_{\mathrm{data}}\) by solving either the above reverse SDE or ODE backward in time.
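As a concrete illustration of solving the reverse SDE backward in time, the sketch below applies Euler-Maruyama steps to a toy VE diffusion (\(\mathbf{f}=\mathbf{0}\), \(\sigma_{t}=t\), so \(g^{2}(t)=2t\)) with an analytically known score; the schedule and the Gaussian toy data are assumptions for illustration, not choices made in this paper.

```python
import numpy as np

def reverse_ve_sde(score, x_T, T=1.0, n_steps=500, rng=None):
    """Euler-Maruyama integration of the reverse SDE (Eq. 2), backward from t = T to 0,
    for a VE diffusion with sigma_t = t (f = 0, g(t)^2 = d sigma_t^2/dt = 2t).
    `score(x, t)` approximates grad_x log p_t(x)."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x_T, dtype=float)
    dt = T / n_steps
    for i in range(n_steps, 0, -1):
        t = i * dt
        g2 = 2.0 * t
        # backward step: x_{t-dt} = x_t + g^2 * score * dt + g * sqrt(dt) * eps
        x = x + g2 * score(x, t) * dt + np.sqrt(g2 * dt) * rng.standard_normal(x.shape)
    return x

# Toy check: for q_data = N(0, s0^2 I), p_t = N(0, (s0^2 + t^2) I) and score(x, t) = -x / (s0^2 + t^2).
rng, s0 = np.random.default_rng(1), 0.1
samples = [reverse_ve_sde(lambda x, t: -x / (s0**2 + t**2),
                          rng.standard_normal(2) * np.sqrt(s0**2 + 1.0), rng=rng) for _ in range(200)]
print(np.std(samples))   # should be roughly s0
```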
**Denoising score-matching.** The score, \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t})\), can be learned by the score-matching loss
\[\mathcal{L}(\theta)=\mathbb{E}_{\mathbf{x}_{t}\sim p(\mathbf{x}_{t}\mid\mathbf{x}_{0}),\,\mathbf{x}_{0}\sim q_{\mathrm{data}}(\mathbf{x}),\,t\sim\mathcal{U}(0,T)}\Big{[}\big{\|}\mathbf{s}_{\theta}(\mathbf{x}_{t},t)-\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t}\mid\mathbf{x}_{0})\big{\|}^{2}\Big{]} \tag{4}\]
such that the minimizer \(\mathbf{s}_{\theta}^{*}(\mathbf{x}_{t},t)\) of the above loss approximates the true score. Crucially, the above loss is tractable because the transition kernel \(p(\mathbf{x}_{t}\mid\mathbf{x}_{0})\), which depends on specific choices of drift and diffusion functions, is designed to be Gaussian \(\mathbf{x}_{t}=\alpha_{t}\mathbf{x}_{0}+\sigma_{t}\boldsymbol{\epsilon}\), where \(\alpha_{t}\) and \(\sigma_{t}\) are functions of time and \(\boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\boldsymbol{I})\). It is also common to view the diffusion process in terms of the \(\mathbf{x}_{t}\)'s signal-to-noise ratio (SNR), defined as \(\alpha_{t}^{2}/\sigma_{t}^{2}\).
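A minimal sketch of the denoising score-matching objective in Eq. (4) is shown below; since \(\mathbf{x}_{t}=\alpha_{t}\mathbf{x}_{0}+\sigma_{t}\boldsymbol{\epsilon}\), the conditional score is \(-(\mathbf{x}_{t}-\alpha_{t}\mathbf{x}_{0})/\sigma_{t}^{2}=-\boldsymbol{\epsilon}/\sigma_{t}\). The tiny MLP and the VE-style schedule are placeholders, not the architectures used later in the paper.

```python
import torch

def dsm_loss(score_net, x0, alpha, sigma):
    """Denoising score matching (Eq. 4) for a Gaussian kernel x_t = alpha_t x_0 + sigma_t eps,
    whose conditional score is -(x_t - alpha_t x_0) / sigma_t^2 = -eps / sigma_t."""
    t = torch.rand(x0.shape[0])                      # t ~ U(0, 1)
    a_t, s_t = alpha(t).view(-1, 1), sigma(t).view(-1, 1)
    eps = torch.randn_like(x0)
    x_t = a_t * x0 + s_t * eps
    target = -eps / s_t                              # = grad_{x_t} log p(x_t | x_0)
    return ((score_net(x_t, t) - target) ** 2).sum(dim=1).mean()

# Toy usage with a VE-style schedule (alpha_t = 1, sigma_t ~ t) and a placeholder MLP score model.
net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.SiLU(), torch.nn.Linear(64, 2))
score_net = lambda x, t: net(torch.cat([x, t.view(-1, 1)], dim=1))
loss = dsm_loss(score_net, torch.randn(16, 2),
                alpha=lambda t: torch.ones_like(t), sigma=lambda t: t + 1e-3)
print(loss.item())
```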
### Diffusion Process with Fixed Endpoints
Diffusion models are limited because they can only transport complex data distributions to a standard Gaussian distribution and cannot be naturally adapted to translating between two arbitrary distributions, _e.g._ in the case of image-to-image translation. Luckily, classical results have shown that one can condition a diffusion process on a fixed known endpoint via the famous Doob's \(h\)-transform:
**Stochastic bridges via \(h\)-transform.** Specifically, a diffusion process defined in Eq. (1) can be driven to arrive at a particular point of interest \(y\in\mathbb{R}^{d}\) almost surely via Doob's \(h\)-transform (Doob and Doob, 1984; Rogers and Williams, 2000),
\[d\mathbf{x}_{t}=\mathbf{f}(\mathbf{x}_{t},t)dt+g(t)^{2}\mathbf{h}(\mathbf{x}_{ t},t,y,T)+g(t)d\mathbf{w}_{t},\quad\mathbf{x}_{0}\sim q_{\mathrm{data}}( \mathbf{x}),\quad\mathbf{x}_{T}=y \tag{5}\]
where \(\mathbf{h}(x,t,y,T)=\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{T}\mid\mathbf{x}_{t})\big{|}_{\mathbf{x}_{t}=x,\mathbf{x}_{T}=y}\) is the gradient of the log transition kernel from \(t\) to \(T\) generated by the original SDE, evaluated at points \(\mathbf{x}_{t}=x\) and \(\mathbf{x}_{T}=y\), and each \(\mathbf{x}_{t}\) now explicitly depends on \(y\) at time \(T\). Furthermore, \(p(\mathbf{x}_{T}=y\mid\mathbf{x}_{t})\) satisfies the Kolmogorov backward equation (specified in Appendix A). With specific drift and diffusion choices, _e.g._ \(\mathbf{f}(\mathbf{x}_{t},t)=\mathbf{0}\), \(\mathbf{h}\) is tractable due to the tractable (Gaussian) transition kernel of the underlying diffusion process.
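As a toy illustration of Eq. (5): for a standard Brownian motion (\(\mathbf{f}=\mathbf{0}\), \(g=1\)) the transition kernel is \(p(\mathbf{x}_{T}\mid\mathbf{x}_{t})=\mathcal{N}(\mathbf{x}_{t},(T-t)\mathbf{I})\), so \(\mathbf{h}(x,t,y,T)=(y-x)/(T-t)\) and the \(h\)-transformed process is the classical Brownian bridge. A minimal simulation sketch under this simple schedule (an assumption for illustration only) follows.

```python
import numpy as np

def simulate_pinned_brownian_motion(x0, y, T=1.0, n_steps=1000, seed=0):
    """Forward simulation of Eq. (5) for a standard Brownian motion (f = 0, g = 1),
    where p(x_T | x_t) = N(x_t, (T - t) I) gives h(x, t, y, T) = (y - x) / (T - t).
    The drift adjustment pins the path to x_T = y (classical Brownian bridge)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = float(x0)
    path = [x]
    for i in range(n_steps - 1):           # stop one step short of t = T, where h blows up
        t = i * dt
        x = x + (y - x) / (T - t) * dt + np.sqrt(dt) * rng.standard_normal()
        path.append(x)
    path.append(y)                         # the endpoint is reached almost surely
    return np.array(path)

path = simulate_pinned_brownian_motion(x0=0.0, y=2.0)
print(path[0], path[-1])                   # starts at 0.0, ends at 2.0
```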
When the initial point \(\mathbf{x}_{0}\) is fixed, the process is often called a _diffusion bridge_(Sarkka and Solin, 2019; Heng et al., 2021; Delyon and Hu, 2006; Schauer et al., 2017; Peluchetti et al., 2022b), and its ability to connect any given \(\mathbf{x}_{0}\) to a given value of \(\mathbf{x}_{T}\) is promising for image-to-image translation. Furthermore, the transition kernel may be tractable, which serves as further motivation.
## 3 Denoising Diffusion Bridge Models
Assuming that the endpoints of a diffusion bridge both exist in \(\mathbb{R}^{d}\) and come from an arbitrary and unknown joint distribution, _i.e._\((\mathbf{x}_{0},\mathbf{x}_{T})=(\mathbf{x},\mathbf{y})\sim q_{\mathrm{data}}( \mathbf{x},\mathbf{y})\), we wish to devise a process that learns to approximately sample from \(q_{\mathrm{data}}(\mathbf{x}\mid\mathbf{y})\) by reversing the diffusion bridge with boundary distribution \(q_{\mathrm{data}}(\mathbf{x},\mathbf{y})\), given a training set of _paired_ samples drawn from \(q_{\mathrm{data}}(\mathbf{x},\mathbf{y})\).
### Time-Reversed SDE and Probability Flow ODE
Inspired by diffusion bridges, we construct the stochastic process \(\{\mathbf{x}_{t}\}_{t=0}^{T}\) with marginal distribution \(q(\mathbf{x}_{t})\) such that \(q(\mathbf{x}_{0},\mathbf{x}_{T})\) approximates \(q_{\mathrm{data}}(\mathbf{x}_{0},\mathbf{x}_{T})\). Reversing the process amounts to sampling from \(q(\mathbf{x}_{t}\mid\mathbf{x}_{T})\). Note that the distribution \(q(\cdot)\) is different from \(p(\cdot)\), _i.e._ the diffusion marginal distribution, in that the endpoint distribution is now \(q_{\mathrm{data}}(\mathbf{x}_{0},\mathbf{x}_{T})=q_{\mathrm{data}}(\mathbf{x},\mathbf{y})\) instead of the distribution of a diffusion \(p(\mathbf{x}_{0},\mathbf{x}_{T})=p(\mathbf{x}_{T}\mid\mathbf{x}_{0})q_{\mathrm{data}}(\mathbf{x}_{0})\), which defines a Gaussian \(\mathbf{x}_{T}\) given \(\mathbf{x}_{0}\). We can construct the time-reversed SDE/probability flow ODE of \(q(\mathbf{x}_{t}\mid\mathbf{x}_{T})\) via the following theorem.
**Theorem 1**.: _The evolution of conditional probability \(q(\mathbf{x}_{t}\mid\mathbf{x}_{T})\) has a time-reversed SDE of the form_
\[d\mathbf{x}_{t}=\Big{[}\mathbf{f}(\mathbf{x}_{t},t)-g^{2}(t)\Big{(}\mathbf{ s}(\mathbf{x}_{t},t,y,T)-\mathbf{h}(\mathbf{x}_{t},t,y,T)\Big{)}\Big{]}dt+g(t)d \hat{\mathbf{w}}_{t},\quad\mathbf{x}_{T}=y \tag{6}\]
_with an associated probability flow ODE_
\[d\mathbf{x}_{t}=\Big{[}\mathbf{f}(\mathbf{x}_{t},t)-g^{2}(t)\Big{(}\frac{1}{2} \mathbf{s}(\mathbf{x}_{t},t,y,T)-\mathbf{h}(\mathbf{x}_{t},t,y,T)\Big{)}\Big{]} dt,\quad\mathbf{x}_{T}=y \tag{7}\]
Figure 1: **A schematic for Denoising Diffusion Bridge Models.** DDBM uses a diffusion process guided by a drift adjustment (in blue) towards an endpoint \(\mathbf{x}_{T}=y\). It learns to reverse such a bridge process by matching the denoising bridge score (in orange), which allows one to reverse from \(\mathbf{x}_{T}\) to \(\mathbf{x}_{0}\) for any \(\mathbf{x}_{T}=\mathbf{y}\sim q_{\mathrm{data}}(\mathbf{y})\). The forward SDE process shown on the top is unidirectional while the probability flow ODE shown at the bottom is deterministic and bidirectional. White nodes are stochastic while grey nodes are deterministic.
on \(t\leq T-\epsilon\) for any \(\epsilon>0\), where \(\hat{\mathbf{w}}_{t}\) denotes a Wiener process, \(\mathbf{s}(x,t,y,T)=\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t}\mid\mathbf{x}_ {T})\big{|}_{\mathbf{x}_{t}=x,\mathbf{x}_{T}=y}\) and \(\mathbf{h}\) is as defined in Eq. (5)._
A schematic of the bridge process is shown in Figure 1. Note that this process is defined up to \(T-\epsilon\). To recover the initial distribution in the SDE case, we make the approximation that \(\mathbf{x}_{T-\epsilon}\approx y\) for some small \(\epsilon\) and simulate the SDE backward in time. For the ODE case, since we need to sample from \(p(\mathbf{x}_{T-\epsilon})\), which cannot be a Dirac delta, we cannot approximate \(\mathbf{x}_{T-\epsilon}\) with a single \(y\). Instead, we can first approximate \(\mathbf{x}_{T-\epsilon^{\prime}}\approx y\) where \(\epsilon>\epsilon^{\prime}>0\), then take an Euler-Maruyama step to \(\mathbf{x}_{T-\epsilon}\), and Eq. (7) can be used afterward. A toy visualization of the VE and VP bridges is shown in Figure 2; the top and bottom rows show the respective SDE and ODE paths for the VE and VP bridges.
### Marginal Distributions and Denoising Bridge Score Matching
The sampling process in Theorem 1 requires approximation of the score \(\mathbf{s}(x,t,y,T)=\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t}\mid\mathbf{x }_{T})\big{|}_{\mathbf{x}_{t}=x,\mathbf{x}_{T}=y}\) where \(q(\mathbf{x}_{t}\mid\mathbf{x}_{T})=\int_{\mathbf{x}_{0}}q(\mathbf{x}_{t}\mid \mathbf{x}_{0},\mathbf{x}_{T})q_{\mathrm{data}}(\mathbf{x}_{0}\mid\mathbf{x} _{T})d\mathbf{x}_{0}\). However, as the true score is not known in closed-form, we take inspiration from denoising score-matching (Song et al., 2020) and use a neural network to approximate the true score by matching against a tractable quantity. This usually results in closed-form marginal sampling of \(\mathbf{x}_{t}\) given data (_e.g._\(\mathbf{x}_{0}\) in the case of diffusion models and \((\mathbf{x}_{0},\mathbf{x}_{T})\) in our case), and given \(\mathbf{x}_{t}\), the model is trained to match against the closed-form denoising score objective. We are motivated to follow a similar approach because (1) tractable marginal sampling of \(\mathbf{x}_{t}\) and (2) closed-form objectives enable a simple and scalable algorithm. We specify how to design the marginal sampling distribution and the tractable score objective below to approximate the ground-truth conditional score \(\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t}\mid\mathbf{x}_{T})\).
**Sampling distribution.** Fortunately, for the former condition, we can design our sampling distribution \(q(\cdot)\) such that \(q(\mathbf{x}_{t}\mid\mathbf{x}_{0},\mathbf{x}_{T}):=p(\mathbf{x}_{t}\mid \mathbf{x}_{0},\mathbf{x}_{T})\), where \(p(\cdot)\) is the diffusion distribution pinned at both endpoints as in Eq. (5). For diffusion processes with Gaussian transition kernels, _e.g._ VE, VP (Song et al., 2020), our sampling distribution is a Gaussian distribution of the form
\[\begin{split} q(\mathbf{x}_{t}\mid\mathbf{x}_{0},\mathbf{x}_{T}) &=\mathcal{N}(\hat{\mu}_{t},\hat{\sigma}_{t}^{2}\mathbf{I}),\quad \text{where}\\ \hat{\mu}_{t}&=\frac{\text{SNR}_{T}}{\text{SNR}_{t}} \frac{\alpha_{t}}{\alpha_{T}}\mathbf{x}_{T}+\alpha_{t}\mathbf{x}_{0}(1-\frac{ \text{SNR}_{T}}{\text{SNR}_{t}})\\ \hat{\sigma}_{t}^{2}&=\sigma_{t}^{2}(1-\frac{\text{SNR }_{T}}{\text{SNR}_{t}})\end{split} \tag{8}\]
where \(\alpha_{t}\) and \(\sigma_{t}\) are pre-defined signal and noise schedules and \(\text{SNR}_{t}=\alpha_{t}^{2}/\sigma_{t}^{2}\) is the signal-to-noise ratio at time \(t\). For the VE schedule, we assume \(\alpha_{t}=1\), and derivation details are provided in Appendix A.1. Notably, the mean of this distribution is a linear interpolation between the (scaled) endpoints, and the distribution approaches a Dirac distribution when nearing either end. For concreteness, we present the bridge processes generated by both VP and VE diffusion in Table 1 and recommend choosing \(\mathbf{f}\) and \(g\) specified therein.
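Sampling \(\mathbf{x}_{t}\) from Eq. (8) is a single Gaussian draw; a sketch for the VE case (\(\alpha_{t}=1\), \(\text{SNR}_{t}=1/\sigma_{t}^{2}\), so \(\text{SNR}_{T}/\text{SNR}_{t}=\sigma_{t}^{2}/\sigma_{T}^{2}\)) follows, with arbitrary illustrative endpoints.

```python
import torch

def sample_ve_bridge_xt(x0, xT, sigma_t, sigma_T):
    """Sample x_t ~ q(x_t | x_0, x_T) of Eq. (8) for a VE bridge (alpha_t = 1),
    where SNR_T / SNR_t = sigma_t^2 / sigma_T^2."""
    r = sigma_t ** 2 / sigma_T ** 2
    mu = r * xT + (1.0 - r) * x0                      # linear interpolation between the endpoints
    std = torch.sqrt(sigma_t ** 2 * (1.0 - r))        # vanishes at both t = 0 and t = T
    return mu + std * torch.randn_like(x0)

x0, xT = torch.zeros(4, 2), torch.ones(4, 2)
print(sample_ve_bridge_xt(x0, xT, sigma_t=torch.tensor(0.5), sigma_T=torch.tensor(1.0)))
```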
**Training objective.** For the latter condition, diffusion bridges benefit from a similar setup as in diffusion models, since a pre-defined signal/noise schedule gives rise to a closed-form conditional score \(\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t}\mid\mathbf{x}_{0},\mathbf{x}_{T})\). We show in the following theorem that with \(\mathbf{x}_{t}\sim q(\mathbf{x}_{t}\mid\mathbf{x}_{0},\mathbf{x}_{T})\), a neural network \(\mathbf{s}_{\theta}(\mathbf{x}_{t},\mathbf{x}_{T},t)\) that matches against this closed-form score approximates the true score.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & \(\mathbf{f}(\mathbf{x}_{t},t)\) & \(g^{2}(t)\) & \(p(\mathbf{x}_{t}\mid\mathbf{x}_{0})\) & \(\text{SNR}_{t}\) & \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{T}\mid\mathbf{x}_{t})\) \\ \hline VP & \(\frac{d\log\alpha_{t}}{dt}\mathbf{x}_{t}\) & \(\frac{d}{dt}\sigma_{t}^{2}-2\frac{d\log\alpha_{t}}{dt}\sigma_{t}^{2}\) & \(\mathcal{N}(\alpha_{t}\mathbf{x}_{0},\sigma_{t}^{2}\mathbf{I})\) & \(\alpha_{t}^{2}/\sigma_{t}^{2}\) & \(\frac{(\alpha_{t}/\alpha_{T})\mathbf{x}_{T}-\mathbf{x}_{t}}{\sigma_{t}^{2}(\text{SNR}_{t}/\text{SNR}_{T}-1)}\) \\ \hline VE & \(\mathbf{0}\) & \(\frac{d}{dt}\sigma_{t}^{2}\) & \(\mathcal{N}(\mathbf{x}_{0},\sigma_{t}^{2}\mathbf{I})\) & \(1/\sigma_{t}^{2}\) & \(\frac{\mathbf{x}_{T}-\mathbf{x}_{t}}{\sigma_{T}^{2}-\sigma_{t}^{2}}\) \\ \hline \end{tabular}
\end{table}
Table 1: VP and VE instantiations of diffusion bridges.
Figure 2: VE bridge (left) and VP bridge (right) with their SDE (top) and ODE (bottom) visualization.
**Theorem 2** (Denoising Bridge Score Matching).: _Let \((\mathbf{x}_{0},\mathbf{x}_{T})\sim q_{\mathrm{data}}(\mathbf{x},\mathbf{y})\), \(\mathbf{x}_{t}\sim q(\mathbf{x}_{t}\mid\mathbf{x}_{0},\mathbf{x}_{T})\), \(t\sim p(t)\) for any non-zero time sampling distribution \(p(t)\) in \([0,T]\), and \(w(t)\) be a non-zero loss weighting term of any choice. The minimum of the following objective:_
\[\mathcal{L}(\theta)=\mathbb{E}_{\mathbf{x}_{t},\mathbf{x}_{0},\mathbf{x}_{T},t}\Big{[}w(t)\|\mathbf{s}_{\theta}(\mathbf{x}_{t},\mathbf{x}_{T},t)-\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t}\mid\mathbf{x}_{0},\mathbf{x}_{T})\|^{2}\Big{]} \tag{9}\]
_satisfies \(\mathbf{s}_{\theta}(\mathbf{x}_{t},\mathbf{x}_{T},t)=\nabla_{\mathbf{x}_{t}} \log q(\mathbf{x}_{t}\mid\mathbf{x}_{T})\)._
In short, we establish a tractable diffusion bridge over two endpoints and, by matching the conditional score of the Gaussian bridge, we can learn the score of the new distribution \(q(\mathbf{x}_{t}\mid\mathbf{x}_{T})\) that satisfies the boundary distribution \(q_{\mathrm{data}}(\mathbf{x},\mathbf{y})\).
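Putting Eq. (8) and Eq. (9) together, one training evaluation might look like the following sketch for a VE bridge with unit loss weighting and a placeholder network (a hypothetical simplification, without the pre-conditioning introduced in the next section).

```python
import torch

def bridge_dsm_loss(score_net, x0, xT, sigma, T=1.0):
    """Denoising bridge score matching (Eq. 9) for a VE bridge, with w(t) = 1.
    x_t is drawn from the Gaussian q(x_t | x_0, x_T) of Eq. (8), whose score is (mu_t - x_t)/var_t."""
    t = (0.01 + 0.98 * torch.rand(x0.shape[0])) * T        # stay away from the endpoints
    s_t = sigma(t).view(-1, 1)
    s_T = sigma(torch.full_like(t, T)).view(-1, 1)
    r = s_t ** 2 / s_T ** 2                                # = SNR_T / SNR_t for the VE schedule
    mu = r * xT + (1.0 - r) * x0
    var = s_t ** 2 * (1.0 - r)
    x_t = mu + var.sqrt() * torch.randn_like(x0)
    target = (mu - x_t) / var                              # = grad_{x_t} log q(x_t | x_0, x_T)
    return ((score_net(x_t, xT, t) - target) ** 2).sum(dim=1).mean()

# Toy usage: paired endpoints and a placeholder conditional network (hypothetical architecture).
net = torch.nn.Sequential(torch.nn.Linear(5, 64), torch.nn.SiLU(), torch.nn.Linear(64, 2))
score_net = lambda x, y, t: net(torch.cat([x, y, t.view(-1, 1)], dim=1))
x0, xT = torch.randn(16, 2), torch.randn(16, 2) + 3.0
print(bridge_dsm_loss(score_net, x0, xT, sigma=lambda t: t).item())
```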
## 4 Generalized Parameterization for Distribution Translation
Building the bridge process upon a diffusion process allows us to further adapt many recent advancements in the score network parameterization \(\mathbf{s}_{\theta}(\mathbf{x}_{t},\mathbf{x}_{T},t)\) (Ho et al., 2020; Song et al., 2020; Salimans and Ho, 2022; Ho et al., 2022; Karras et al., 2022), different noise schedules, and efficient ODE sampling (Song et al., 2020; Karras et al., 2022; Lu et al., 2022a,b; Zhang and Chen, 2022) to our more general framework. Among these works, EDM (Karras et al., 2022) proposes to parameterize the model output to be \(D_{\theta}(\mathbf{x}_{t},t)=c_{\text{skip}}(t)\mathbf{x}_{t}+c_{\text{out}}(t)F_{\theta}(c_{\text{in}}(t)\mathbf{x}_{t},c_{\text{noise}}(t))\) where \(F_{\theta}\) is a neural network with parameter \(\theta\) that predicts \(\mathbf{x}_{0}\). In a similar spirit, we adopt this pred-\(\mathbf{x}\) parameterization and additionally derive a set of scaling functions for distribution translation, which we show is a strict superset.
**Score reparameterization.** Following the sampling distribution proposed in (8), a pred-\(\mathbf{x}\) model can predict the bridge score by
\[\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t}\mid\mathbf{x}_{T})\approx-\frac {\mathbf{x}_{t}-\Big{(}\frac{\text{SNR}_{T}}{\text{SNR}_{t}}\frac{\alpha_{t} }{\alpha_{T}}\mathbf{x}_{T}+\alpha_{t}D_{\theta}(\mathbf{x}_{t},\mathbf{x}_{T },t)(1-\frac{\text{SNR}_{T}}{\text{SNR}_{t}})\Big{)}}{\sigma_{t}^{2}(1-\frac{ \text{SNR}_{T}}{\text{SNR}_{t}})} \tag{10}\]
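Eq. (10) amounts to a closed-form conversion of a pred-\(\mathbf{x}\) output into a score estimate; a direct transcription as a small helper is sketched below (passing the schedule quantities explicitly is an implementation convenience here, not the paper's interface).

```python
def bridge_score_from_pred_x(x_t, x_T, pred_x0, alpha_t, alpha_T, snr_t, snr_T, sigma_t):
    """Convert a pred-x estimate D_theta(x_t, x_T, t) = pred_x0 into the bridge score of Eq. (10).
    Works for array-like inputs (e.g. NumPy or PyTorch tensors) via broadcasting."""
    r = snr_T / snr_t
    mean = r * (alpha_t / alpha_T) * x_T + alpha_t * pred_x0 * (1.0 - r)
    var = sigma_t ** 2 * (1.0 - r)
    return -(x_t - mean) / var
```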
**Scaling functions and loss weighting.** Following Karras et al. (2022), let \(a_{t}=\alpha_{t}/\alpha_{T}*\text{SNR}_{T}/\text{SNR}_{t}\), \(b_{t}=\alpha_{t}(1-\text{SNR}_{T}/\text{SNR}_{t})\), and \(c_{t}=\sigma_{t}^{2}(1-\text{SNR}_{T}/\text{SNR}_{t})\); the scaling functions and weighting function \(w(t)\) can be derived to be
\[c_{\text{in}}(t)=\frac{1}{\sqrt{a_{t}^{2}\sigma_{T}^{2}+b_{t}^{2 }\sigma_{0}^{2}+2a_{t}b_{t}\sigma_{0T}+c_{t}}},\quad c_{\text{out}}(t)=\sqrt{a _{t}^{2}(\sigma_{T}^{2}\sigma_{0}^{2}-\sigma_{0T}^{2})+\sigma_{0}^{2}c_{t}}*c _{\text{in}}(t) \tag{11}\] \[c_{\text{skip}}(t)=\Big{(}b_{t}\sigma_{0}^{2}+a_{t}\sigma_{0T} \Big{)}*c_{\text{in}}^{2}(t),\quad w(t)=\frac{1}{c_{\text{out}}(t)^{2}},\quad c _{\text{noise}}(t)=\frac{1}{4}\log{(t)} \tag{12}\]
where \(\sigma_{0}^{2}\), \(\sigma_{T}^{2}\), and \(\sigma_{0T}\) denote the variance of \(\mathbf{x}_{0}\), the variance of \(\mathbf{x}_{T}\), and the covariance of the two, respectively. The only additional hyperparameters compared to EDM are \(\sigma_{T}\) and \(\sigma_{0T}\), which characterize the distribution of \(\mathbf{x}_{T}\) and its correlation with \(\mathbf{x}_{0}\). One can notice that in the case of EDM, \(\sigma_{t}=t\), \(\sigma_{T}^{2}=\sigma_{0}^{2}+T^{2}\) because \(\mathbf{x}_{T}=\mathbf{x}_{0}+T\epsilon\) for some Gaussian noise \(\epsilon\), \(\sigma_{0T}=\sigma_{0}^{2}\), and \(\text{SNR}_{T}/\text{SNR}_{t}=t^{2}/T^{2}\). One can show that the scaling functions then reduce to those in EDM. We leave details in Appendix A.5.
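The pre-conditioning functions of Eqs. (11)-(12) are simple algebraic expressions of \(a_{t},b_{t},c_{t}\) and the data statistics \(\sigma_{0}^{2},\sigma_{T}^{2},\sigma_{0T}\); a direct transcription is sketched below (the statistics are assumed to be estimated from the training pairs).

```python
import numpy as np

def ddbm_preconditioning(a_t, b_t, c_t, var0, varT, cov0T):
    """Scalings of Eqs. (11)-(12), with a_t = (alpha_t/alpha_T) SNR_T/SNR_t,
    b_t = alpha_t (1 - SNR_T/SNR_t), c_t = sigma_t^2 (1 - SNR_T/SNR_t), and
    var0 = Var[x_0], varT = Var[x_T], cov0T = Cov[x_0, x_T]."""
    c_in = 1.0 / np.sqrt(a_t**2 * varT + b_t**2 * var0 + 2.0 * a_t * b_t * cov0T + c_t)
    c_out = np.sqrt(a_t**2 * (varT * var0 - cov0T**2) + var0 * c_t) * c_in
    c_skip = (b_t * var0 + a_t * cov0T) * c_in**2
    weight = 1.0 / c_out**2          # w(t) of Eq. (12)
    return c_in, c_out, c_skip, weight
```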
**Generalized time-reversal.** Due to the probability flow ODE's resemblance to classifier guidance (Dhariwal and Nichol, 2021; Ho and Salimans, 2022), we can introduce an additional parameter \(w\) to set the "strength" of the drift adjustment as below.
\[d\mathbf{x}_{t}=\Big{[}\mathbf{f}(\mathbf{x}_{t},t)-g^{2}(t)\Big{(}\frac{1}{2} \mathbf{s}(\mathbf{x}_{t},t,y,T)-w\mathbf{h}(\mathbf{x}_{t},t,y,T)\Big{)} \Big{]}dt,\quad\mathbf{x}_{T}=y \tag{13}\]
which allows for a strictly wider class of marginal densities of \(\mathbf{x}_{t}\) generated by the resulting probability flow ODE. We examine the effect of this parameter in our ablation studies.
## 5 Stochastic Sampling for Denoising Diffusion Bridges
Although the probability flow ODE allows one to use fast integration techniques to accelerate the sampling process (Zhang and Chen, 2022; Song et al., 2020; Karras et al., 2022), purely
following an ODE path is problematic because diffusion bridges have fixed starting points given as data \(\mathbf{x}_{T}=\mathbf{y}\sim q_{\mathrm{data}}(\mathbf{y})\), and following the probability flow ODE backward in time generates a deterministic "expected" path. This can result in "averaged" or blurry outputs given initial conditions. Thus, we are motivated to introduce noise into our sampling process to improve the sampling quality and diversity.
**Higher-order hybrid sampler.** Our sampler is built upon the prior higher-order ODE sampler of Karras et al. (2022), which discretizes the sampling steps into \(t_{N}>t_{N-1}>\cdots>t_{0}\) with decreasing intervals (see Appendix A.6 for details). Inspired by the predictor-corrector sampler introduced by Song et al. (2020), we additionally introduce a scheduled Euler-Maruyama step which follows the backward SDE in between higher-order ODE steps. This ensures that the marginal distribution at each step approximately stays the same. We introduce an additional scaling hyperparameter \(s\), which defines a step ratio between \(t_{i-1}\) and \(t_{i}\) such that the interval \([t_{i}-s(t_{i}-t_{i-1}),t_{i}]\) is used for Euler-Maruyama steps and \([t_{i-1},t_{i}-s(t_{i}-t_{i-1})]\) is used for Heun steps, as described in Algorithm 1.
```
Input: model \(D_{\theta}(\mathbf{x}_{t},t)\), time steps \(\{t_{i}\}_{i=0}^{N}\), max time \(T\), guidance strength \(w\), step ratio \(s\), distribution \(q_{\mathrm{data}}(\mathbf{y})\)
Output: \(\mathbf{x}_{0}\)
Sample \(\mathbf{x}_{N}\sim q_{\mathrm{data}}(\mathbf{y})\)
for \(i=N,\ldots,1\) do
    Sample \(\boldsymbol{\epsilon}_{i}\sim\mathcal{N}(\mathbf{0},\boldsymbol{I})\)
    \(\hat{t}_{i}\gets t_{i}+s(t_{i-1}-t_{i})\)
    \(\boldsymbol{d}_{i}\leftarrow-\mathbf{f}(\mathbf{x}_{i},t_{i})+g^{2}(t_{i})\Big{(}\mathbf{s}(\mathbf{x}_{i},t_{i},\mathbf{x}_{N},T)-\mathbf{h}(\mathbf{x}_{i},t_{i},\mathbf{x}_{N},T)\Big{)}\)
    \(\hat{\mathbf{x}}_{i}\leftarrow\mathbf{x}_{i}+\boldsymbol{d}_{i}(\hat{t}_{i}-t_{i})+g(t_{i})\sqrt{\hat{t}_{i}-t_{i}}\,\boldsymbol{\epsilon}_{i}\)
    \(\hat{\boldsymbol{d}}_{i}\leftarrow-\mathbf{f}(\hat{\mathbf{x}}_{i},t_{i})+g^{2}(\hat{t}_{i})\Big{(}\frac{1}{2}\mathbf{s}(\hat{\mathbf{x}}_{i},\hat{t}_{i},\mathbf{x}_{N},T)-w\mathbf{h}(\hat{\mathbf{x}}_{i},\hat{t}_{i},\mathbf{x}_{N},T)\Big{)}\)
    \(\mathbf{x}_{i-1}\leftarrow\hat{\mathbf{x}}_{i}+\hat{\boldsymbol{d}}_{i}(t_{i-1}-\hat{t}_{i})\)
    if \(i\neq 1\) then
        \(\boldsymbol{d}^{\prime}_{i}\leftarrow-\mathbf{f}(\mathbf{x}_{i-1},t_{i-1})+g^{2}(t_{i-1})\Big{(}\frac{1}{2}\mathbf{s}(\mathbf{x}_{i-1},t_{i-1},\mathbf{x}_{N},T)-w\mathbf{h}(\mathbf{x}_{i-1},t_{i-1},\mathbf{x}_{N},T)\Big{)}\)
        \(\mathbf{x}_{i-1}\leftarrow\hat{\mathbf{x}}_{i}+(\frac{1}{2}\boldsymbol{d}^{\prime}_{i}+\frac{1}{2}\hat{\boldsymbol{d}}_{i})(t_{i-1}-\hat{t}_{i})\)
    end if
end for
```
**Algorithm 1** Denoising Diffusion Bridge Hybrid Sampler
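A condensed Python transcription of the hybrid sampler is sketched below for the VE case (\(\mathbf{f}=\mathbf{0}\)), writing the backward steps with explicit positive step sizes derived from Eqs. (6) and (13); this is an illustrative sketch with our own sign bookkeeping, not the released implementation.

```python
import numpy as np

def ddbm_hybrid_sampler(score, h, g2, ts, y, w=1.0, s=0.3, seed=0):
    """Sketch of the hybrid sampler (Algorithm 1) for a VE bridge (f = 0).
    `score(x, t)` approximates the bridge score s(x, t, y, T) with the endpoint y fixed,
    `h(x, t)` is Doob's h-function, `g2(t)` returns g(t)^2, and `ts` lists decreasing
    times t_N > ... > t_0 with the chain initialized at x_{t_N} = y."""
    rng = np.random.default_rng(seed)
    x = np.array(y, dtype=float)
    for i in range(len(ts) - 1):
        t, t_next = ts[i], ts[i + 1]
        t_hat = t - s * (t - t_next)                      # Euler-Maruyama covers [t_hat, t]
        # Stochastic (Euler-Maruyama) step from t down to t_hat, following Eq. (6).
        x_hat = (x + g2(t) * (score(x, t) - h(x, t)) * (t - t_hat)
                 + np.sqrt(g2(t) * (t - t_hat)) * rng.standard_normal(x.shape))
        # Deterministic Heun step of the guided probability-flow ODE (Eq. 13) from t_hat to t_next.
        d_hat = g2(t_hat) * (0.5 * score(x_hat, t_hat) - w * h(x_hat, t_hat))
        x_next = x_hat + d_hat * (t_hat - t_next)
        if i < len(ts) - 2:                               # second-order correction, skipped at the last step
            d_prime = g2(t_next) * (0.5 * score(x_next, t_next) - w * h(x_next, t_next))
            x_next = x_hat + 0.5 * (d_hat + d_prime) * (t_hat - t_next)
        x = x_next
    return x
```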
## 6 Related Works and Special Cases
**Diffusion models.** The advancements in diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2020) have improved state-of-the-art in image generation and outperformed GANs (Goodfellow et al., 2014). The success of diffusion models goes hand in hand with important design choices such as network design (Song et al., 2020; Karras et al., 2022; Nichol and Dhariwal, 2021; Hoogeboom et al., 2023; Peebles and Xie, 2023), improved noise-schedules (Nichol and Dhariwal, 2021; Karras et al., 2022; Peebles and Xie, 2023), faster and more accurate samplers (Song et al., 2020; Lu et al., 2022; Ma et al., 2022; Zhang and Chen, 2022), and guidance methods (Dhariwal and Nichol, 2021; Ho and Salimans, 2022). Given the large body of literature on diffusion models for unconditional generation, which largely is based on these various design choices, we seek to design our bridge formulation to allow for a seamless integration with this literature. As such, we adopt a time-reversal perspective to directly extend these methods.
**Diffusion bridges, Schrödinger bridges, and Doob's h-transform.** Diffusion bridges (Särkkä and Solin, 2019) are a common tool in probability theory and have been actively studied in recent years in the context of generative modeling (Liu et al., 2022; Somnath et al., 2023; De Bortoli et al., 2021; Peluchetti, 2023). Heng et al. (2021) explores diffusion bridges conditioned on fixed starting/ending points and learns to simulate the time-reversal of the bridge given an approximation of the score \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t})\). More recently, instead of considering bridges with fixed endpoints, Liu et al. (2022) uses Doob's h-transform to bridge between arbitrary distributions. A forward bridge is learned via
score-matching by simulating entire paths during training. In contrast, other works (Somnath et al., 2023; Peluchetti), while also adopting Doob's h-transform, propose simulation-free algorithms for forward-time generation. Delbracio and Milanfar (2023) similarly constructs a Brownian Bridge for direct iteration and is successfully applied to image-restoration tasks. Another approach, De Bortoli et al. (2021), proposes Iterative Proportional Fitting (IPF) to tractably solve Schrödinger Bridge (SB) problems in translating between different distributions. Liu et al. (2023) is built on a tractable class of SB which results in a simulation-free algorithm and has demonstrated strong performance in image translation tasks. More recently, extending SB with IPF, Bridge-Matching (Shi et al., 2023) proposes to use Iterative Markovian Fitting to solve the SB problem. A similar algorithm is also developed by Peluchetti (2023) for distribution translation. A work more closely related to ours is Li et al. (2023), which proposes to directly reverse a Brownian Bridge for distribution translation in discrete time. Our method instead shows how to construct a bridge model from any existing VP and VE diffusion processes in continuous time, and the Brownian Bridge (as considered in most previous works) is but a special case of VE bridges. We additionally show that, when implemented correctly, VP bridges can achieve very strong empirical performance. Although a similar perspective can be derived using forward-time diffusion as in Peluchetti, which also proposes VE/VP bridge schedules, our framework enjoys additional empirical (reusing diffusion designs) and theoretical (connection with OT-Flow-Matching (Lipman et al., 2023; Tong et al., 2023b) and Rectified Flow (Liu et al., 2022a)) benefits.
**Flow and Optimal Transport.** Works based on Flow-Matching (Lipman et al., 2023; Tong et al., 2023b; Pooladian et al., 2023; Tong et al., 2023a) learn an ODE-based transport map to bridge two distributions. Lipman et al. (2023) has demonstrated that by matching the velocity field of predefined transport maps, one can create powerful generative models competitive with the diffusion counterparts. Improving this approach, Tong et al. (2023b); Pooladian et al. (2023) exploit potential couplings between distributions using minibatch simulation-free OT. Rectified Flow (Liu et al., 2022a) directly constructs the OT bridge and uses neural networks to fit the intermediate velocity field. Another line of work uses stochastic interpolants (Albergo and Vanden-Eijnden, 2023) to build flow models, directly avoiding the use of Doob's h-functions and providing an easy way to construct interpolation maps between distributions. Albergo et al. (2023) presents a general theory with stochastic interpolants unifying flow and diffusion, and shows that a bridge can be constructed from both an ODE and SDE perspective. Separate from these methods, our model uses a different denoising bridge score-matching loss than this class of models. Constructing from this perspective allows us to extend many existing successful designs of diffusion models (which are not directly applicable to these works) to the bridge framework and push state-of-the-art further for image translation while retaining strong performance for unconditional generation.
### Special Cases of Denoising Diffusion Bridge Models
**Case 1: Unconditional diffusion process (Song et al., 2020b).** For unconditional diffusion processes (which map data to noise), we can first show that the marginal \(p(\mathbf{x}_{t})\) when \(p(\mathbf{x}_{0})=q_{\mathrm{data}}(\mathbf{x})\) exactly matches that of a regular diffusion process when \(\mathbf{x}_{T}\sim q_{\mathrm{data}}(\mathbf{y}\mid\mathbf{x})=\mathcal{N}( \alpha_{T}\mathbf{x},\sigma_{T}^{2}\mathbf{I})\). By taking expectation over \(\mathbf{x}_{T}\) in Eq. (8), we have
\[p(\mathbf{x}_{t}\mid\mathbf{x}_{0})=\mathcal{N}(\alpha_{t}\mathbf{x}_{0},\sigma_{t}^{2}\mathbf{I}) \tag{14}\]
One can further show that during sampling, Eq. (6) and (7) reduce to the reverse SDE and ODE (respectively) of a diffusion process when \(\mathbf{x}_{T}\) is sampled from a Gaussian. We leave derivation details to Appendix A.4.
**Case 2: OT-Flow-Matching (Lipman et al., 2023; Tong et al., 2023b) and Rectified Flow (Liu et al., 2022a).** These works learn to match deterministic dynamics defined through ODEs instead of SDEs. In this particular case, they work with "straight line" paths defined by \(\mathbf{x}_{T}-\mathbf{x}_{0}\).
To see that our framework generalizes this, first let us define a family of diffusion bridges with variance scaled by \(c\in(0,1)\) such that \(p(\mathbf{x}_{t}\mid\mathbf{x}_{0},\mathbf{x}_{T})=\mathcal{N}(\hat{\mu}_{t}, c^{2}\hat{\sigma}_{t}^{2}\mathbf{I})\) where \(\hat{\mu}_{t}\) and \(\hat{\sigma}_{t}\) are as defined in Eq. (8). One can therefore show that with a VE diffusion where \(\sigma_{t}^{2}=c^{2}t\), given some fixed \(\mathbf{x}_{0}\) and \(\mathbf{x}_{1}\), _i.e._ \(T=1\), and \(\mathbf{x}_{t}\) sampled from Eq. (8),
\[\lim_{c\to 0}\Big{[}\mathbf{f}(\mathbf{x}_{t},t)-c^{2}g^{2}(t)\Big{(}\frac{1}{2}\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t}\mid\mathbf{x}_{0},\mathbf{x}_{1})-\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{1}\mid\mathbf{x}_{t})\Big{)}\Big{]}=\mathbf{x}_{1}-\mathbf{x}_{0} \tag{15}\]
where the expression inside the bracket is the drift of the probability flow ODE in Eq. (7) given \(\mathbf{x}_{0}\) and \(\mathbf{x}_{1}\), and the right hand side is exactly the straight line path term. In other words, these methods learn to match the drift in the bridge probability flow ODE (with a specific VE schedule) in the noiseless limit. The score model can then be matched against \(\mathbf{x}_{T}-\mathbf{x}_{0}\), and with some additional care to handle the additional input \(\mathbf{x}_{T}\), our framework exactly reduces to that of OT-Flow-Matching and Rectified Flow (details in Appendix A.4).
## 7 Experiments
In this section we verify the generative capability of DDBM, and we want to answer the following questions: (1) How well does DDBM perform in image-to-image translation in pixel space? (2) Can DDBM perform well in unconditional generation when one side of the bridge reduces to a Gaussian distribution? (3) How do the additional design choices introduced affect the final performance? Unless noted otherwise, we use the same VE diffusion schedule as in EDM for our bridge model by default. We leave further experiment details to Appendix B.
### Image-to-Image Translation
We demonstrate that DDBM can deliver competitive results in general image-to-image translation tasks. We evaluate on datasets with different image resolutions to demonstrate its applicability on a variety of scales. We choose Edges\(\rightarrow\)Handbags (Isola et al., 2017) scaled to \(64\times 64\) pixels, which contains image pairs for translating from edge maps to colored handbags, and DIODE-Outdoor (Vasiljevic et al., 2019) scaled to \(256\times 256\), which contains normal maps and RGB images of real-world outdoor scenes. For evaluation metrics, we use Fréchet Inception Distance (FID) (Heusel et al., 2017) and Inception Scores (IS) (Barratt and Sharma, 2018) evaluated on all training samples to measure translation quality, and we use LPIPS (Zhang et al., 2018) and MSE (in \([-1,1]\) scale) to measure perceptual similarity and translation faithfulness.
We compare with Pix2Pix (Isola et al., 2017), SDEdit (Meng et al., 2022), DDIB (Su et al., 2022), Rectified Flow (Liu et al., 2022), and I\({}^{2}\)SB (Liu et al., 2023) as they are built for image-to-image translation. For SDEdit we train unconditional EDM on the target domain, _e.g_. colored images, and initialize the translation by noising the source image, _e.g_. sketches, and generate by EDM sampler given the noisy image. The other baseline methods are run with their respective repositories while using the same network architecture as ours. Diffusion and transport-based methods are evaluated with the same number of function evaluations (\(N=40\), which is the default for the EDM sampler for \(64\times 64\) images) to demonstrate our sampler's effectiveness in the regime where the number of sampling steps is low. Results are shown in Table 2 and additional settings are specified in Appendix B.
We observe that our model can perform translation with both high generation quality and faithfulness, and we find that VP bridges outperform VE bridges in some cases. In contrast, Rectified-Flow as an OT-based method struggles to perform well when the two domains share little low-level similarities (_e.g_. color, hue). DDIB also fails to produce coherent translation due to the wide differences in pixel-space distribution between the paired data. I\({}^{2}\)SB comes closest to our method, but falls short when
Figure 3: Qualitative comparison with the most relevant baselines.
limited by computational constraints, _i.e_. NFE is low. We additionally show qualitative comparison with the most performant baselines in Figure 3. More visual results can be found in Appendix B.1.
### Ablation Studies
We now study the effect of our preconditioning and hybrid samplers on generation quality in the context of both VE and VP bridges (see Appendix B for VP bridge parameterization). In the left column of Figure 4, we fix the guidance scale \(w\) at 1 and vary the Euler step size \(s\) from 0 to 0.9 to introduce stochasticity. We see a significant decrease in FID score as we increase \(s\), with the best performance attained at some value between 0 and 1 (_e.g_. \(s=0.3\) for Edges\(\rightarrow\)Handbags). Figure 3 also shows that the ODE sampler (_i.e_. \(s=0\)) produces blurry images while our hybrid sampler produces considerably sharper results. On the right column, we study the effect of \(w\) (from 0 to 1) with fixed \(s\). We observe that VE bridges are not affected by the change in \(w\) whereas VP bridges heavily rely on setting \(w=1\). We hypothesize that this is due to the fact that VP bridges follow "curved paths" and destroy signals in between, so they rely on Doob's \(h\)-function for further guidance towards the correct probability distribution.
We also study the effect of our preconditioning in Table 3. Our baseline without our preconditioning and our sampler is a simple model that directly matches output of the neural network to the training target and generates using EDM (Karras et al., 2022) sampler. We see that each introduced component further boosts the generation performance. Therefore, we can conclude that the introduced practical components are essential for the success of our DDBM.
### Unconditional Generation
When one side of the bridge becomes a Gaussian distribution, our framework exactly reduces to that of diffusion models. Specifically, during training, when the end point \(\mathbf{x}_{T}\sim\mathcal{N}(\alpha_{T}\mathbf{x}_{0},\sigma_{T}^{2}I)\)
| Method | Edges→Handbags-64×64 FID ↓ | IS ↑ | LPIPS ↓ | MSE ↓ | DIODE-256×256 FID ↓ | IS ↑ | LPIPS ↓ | MSE ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Pix2Pix (Isola et al., 2017) | 74.8 | 4.24 | 0.356 | 0.209 | 82.4 | 4.22 | 0.556 | 0.133 |
| DDIB (Su et al., 2022) | 186.84 | 2.04 | 0.869 | 1.05 | 242.3 | 4.22 | 0.798 | 0.794 |
| SDEdit (Meng et al., 2022) | 26.5 | **3.58** | 0.271 | 0.510 | 31.14 | 5.70 | 0.714 | 0.534 |
| Rectified Flow (Liu et al., 2022a) | 25.3 | 2.80 | 0.241 | 0.088 | 77.18 | 5.87 | 0.534 | 0.157 |
| I\({}^{2}\)SB (Liu et al., 2023) | 7.43 | 3.40 | 0.244 | 0.191 | 9.34 | 5.77 | 0.373 | 0.145 |
| **DDBM (VE)** | 2.93 | 3.58 | **0.131** | **0.013** | 8.51 | 6.03 | **0.226** | **0.0107** |
| **DDBM (VP)** | **1.83** | **3.73** | 0.142 | 0.0402 | **4.43** | **6.21** | 0.244 | 0.0839 |
Table 2: Quantitative evaluation of pixel-space image-to-image translation.
Figure 4: Ablation studies on Euler step ratio \(s\) and guidance scale \(w\): \(w=1\) for all ablation on \(s\) and \(s\) is set to the best-performing value for each dataset for ablation on \(w\).
| Our precond. | Our sampler | E→H-64×64 (VE) | E→H-64×64 (VP) | DIODE-256×256 (VE) | DIODE-256×256 (VP) |
| --- | --- | --- | --- | --- | --- |
| ✗ | ✗ | 14.02 | 11.76 | 126.3 | 96.93 |
| ✓ | ✗ | 13.26 | 11.19 | 79.25 | 91.07 |
| ✗ | ✓ | 13.11 | 29.91 | 91.31 | 21.92 |
| ✓ | ✓ | **2.93** | **1.83** | **8.51** | **4.43** |
Table 3: Ablation study on the effect of sampler and preconditioning on FID. Cross mark on our preconditioning means no output reparameterization and directly use network output to match training target. Cross mark on our sampler means we reuse the ODE sampler from EDM with the same setting. E\(\rightarrow\)H is a short-hand for Edges\(\rightarrow\)Handbags.
our intermediate bridge samples \(\mathbf{x}_{t}\) follow the distribution \(\mathbf{x}_{t}\sim\mathcal{N}(\alpha_{t}\mathbf{x}_{0},\sigma_{t}^{2}\mathbf{I})\). We empirically verify that using our bridge sampling and the pred-\(\mathbf{x}\) objective inspired by EDM, we can recover its performance by using our more generalized parameterization.
We evaluate our method on CIFAR-10 (Krizhevsky et al., 2009) and FFHQ-\(64\times 64\)(Karras et al., 2019) which are processed according to Karras et al. (2022). We use FID score for quantitative evaluation using 50K generated images and use number of function evaluations (NFE) for generation efficiency. We compare our generation results against diffusion-based and optimal transport-based models including DDPM (Ho et al., 2020), DDIM (Song et al., 2020a), DDPM++ (Song et al., 2020b), NCSN++ (Song et al., 2020b), Rectified Flow (Liu et al., 2022a), EDM (Karras et al., 2022). Quantitative results are presented in Table 4 and generated samples are shown in Figure 5.
We observe that our model is able to match EDM performance with negligible degradation in FID scores for CIFAR-10 and marginal improvement for FFHQ-\(64\times 64\). This corroborates our claim that our method can benefit from advances in diffusion models and generalize many of the advanced parameterization techniques such as those introduced in EDM.
## 8 Conclusion
In this work, we introduce Denoising Diffusion Bridge Models, a novel class of models that builds a stochastic bridge between paired samples with tractable marginal distributions in between. The model is learned by matching the conditional score of a tractable bridge distribution, which allows one to transport from one distribution to another via a new reverse SDE or probability flow ODE. Additionally, this generalized framework shares many similarities with diffusion models, thus allowing us to reuse and generalize many designs of diffusion models. We believe that DDBM is a significant contribution towards a general framework for distribution translation. In the era of generative AI, DDBM has a further role to play.
| Method | CIFAR-10 NFE ↓ | CIFAR-10 FID ↓ | FFHQ-64×64 NFE ↓ | FFHQ-64×64 FID ↓ |
| --- | --- | --- | --- | --- |
| DDPM [13] | 1000 | 3.17 | 1000 | 3.52 |
| DDIM [44] | 50 | 4.67 | 50 | 5.18 |
| DDPM++ [46] | 1000 | 3.01 | 1000 | 3.39 |
| NCSN++ [46] | 1000 | 3.77 | 1000 | 25.95 |
| Rectified Flow [24] | 127 | 2.58 | 152 | 4.45 |
| EDM [18] | 35 | **2.04** | 79 | 2.53 |
| **DDBM** | 35 | 2.06 | 79 | **2.44** |
Table 4: Evaluation of unconditional generation.
Figure 5: Generation on CIFAR-10 and FFHQ-\(64\times 64\). |
2305.19776 | Off-By-One Implementation Error in J-UNIWARD | J-UNIWARD is a popular steganography method for hiding secret messages in
JPEG cover images. As a content-adaptive method, J-UNIWARD aims to embed into
textured image regions where changes are difficult to detect. To this end,
J-UNIWARD first assigns to each DCT coefficient an embedding cost calculated
based on the image's Wavelet residual, and then uses a coding method that
minimizes the cost while embedding the desired payload.
Changing one DCT coefficient affects a 23x23 window of Wavelet coefficients.
To speed up the costmap computation, the original implementation pre-computes
the Wavelet residual and then considers per changed DCT coefficient a 23x23
window of the Wavelet residual. However, the implementation accesses a window
accidentally shifted by one pixel to the bottom right.
In this report, we evaluate the effect of this off-by-one error on the
resulting costmaps. Some image blocks are over-priced while other image blocks
are under-priced, but the difference is relatively small. The off-by-one error
seems to make little difference for learning-based steganalysis. | Benedikt Lorch | 2023-05-31T12:09:01Z | http://arxiv.org/abs/2305.19776v2 | # Off-By-One Implementation Error in J-UNIWARD
###### Abstract
J-UNIWARD is a popular steganography method for hiding secret messages in JPEG cover images. As a content-adaptive method, J-UNIWARD aims to embed into textured image regions where changes are difficult to detect. To this end, J-UNIWARD first assigns to each DCT coefficient an embedding cost calculated based on the image's Wavelet residual, and then uses a coding method that minimizes the cost while embedding the desired payload.
Changing one DCT coefficient affects a \(23\times 23\) window of Wavelet coefficients. To speed up the costmap computation, the original implementation pre-computes the Wavelet residual and then considers per changed DCT coefficient a \(23\times 23\) window of the Wavelet residual. However, the implementation accesses a window accidentally shifted by one pixel to the bottom right.
In this report, we evaluate the effect of this off-by-one error on the resulting costmaps. Some image blocks are over-priced while other image blocks are under-priced, but the difference is relatively small. The off-by-one error seems to make little difference for learning-based steganalysis.
## 1 Background
J-UNIWARD is a popular steganography method for hiding secret messages in JPEG cover images [1]. While embedding the desired payload, J-UNIWARD aims to minimize the distortion introduced by the embedding. The distortion is calculated as the relative sum of changes between cover image \(\mathbf{X}\) and stego image \(\mathbf{Y}\) after Wavelet filtering,
\[D(\mathbf{X},\mathbf{Y})=\sum_{k=1}^{3}\sum_{u=1}^{n_{1}}\sum_{v=1}^{n_{2}}\frac{ \left|W_{uv}^{(k)}\left(\mathbf{X}\right)-W_{uv}^{(k)}\left(\mathbf{Y}\right)\right| }{\sigma+\left|W_{uv}^{(k)}\left(\mathbf{X}\right)\right|}\enspace. \tag{1}\]
The Wavelet filter bank consists of three directional filters, which are shown in Fig. 1. The residuals are denoted as \(W^{(k)}(\mathbf{X})\), where \(k\in\{1,2,3\}\). The subscripts \(u,v\) denote the spatial position and iterate from \(1\) to the image height \(n_{1}\) and image width \(n_{2}\), respectively. The constant \(\sigma\) controls the sensitivity of the distortion to the image content. The authors set it to \(2^{-6}\)[1].
The denominator in Eq. 1 discourages embedding changes in regions where the cover's Wavelet residual is small, i.e., when the region is smooth in at least one direction.
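As a concrete illustration, the distortion of Eq. (1) can be computed roughly as follows. This is a minimal Python/SciPy sketch: `kernels` is assumed to hold the three \(16\times 16\) directional filters, `cover` and `stego` are the decompressed pixel arrays, and the boundary padding used by the reference implementation is omitted for brevity.

```
import numpy as np
from scipy.signal import correlate2d

def uniward_distortion(cover, stego, kernels, sigma=2 ** -6):
    """UNIWARD distortion (Eq. 1): relative change of the Wavelet residuals,
    summed over the three directional filters."""
    d = 0.0
    for kern in kernels:
        w_cover = correlate2d(cover, kern, mode="same")
        w_stego = correlate2d(stego, kern, mode="same")
        d += np.sum(np.abs(w_cover - w_stego) / (sigma + np.abs(w_cover)))
    return d
```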
The distortion is defined in the spatial domain, but steganography with JPEG images embeds into their quantized DCT coefficients. To quantify the distortion introduced by changing quantized DCT coefficents, J-UNIWARD decompresses the quantized DCT coefficients and evaluates the distortion in the spatial domain.
After computing the distortion for each embeddable element, the steganographer uses a coding method such as syndrome trellis codes (STCs) to embed their secret message while minimizing the distortion.
## 2 Original implementation
The steganographer computes for each DCT coefficient the distortion (aka cost) introduced by changing that coefficient. The result is also called costmap. A naive implementation would iterate over all DCT coefficients of the cover image \(\mathbf{X}\), increment1 one DCT coefficient at a time, and evaluate the distortion between the cover and the modified cover \(\mathbf{Y}\).
Footnote 1: Because of the absolute values in Eq. 1, the distortion of incrementing and decrementing the DCT coefficient is the same.
The authors of J-UNIWARD published a Matlab and a C++ implementation.2 Both implementations pre-compute several quantities to speed up the costmap computation. Note that numerator in Eq. 1 is independent of the image content. Hence, the numerator can be pre-computed by first evaluating the spatial impact of changing one of the \(8\times 8\) DCT coefficients, and then evaluating the impact on the Wavelet residual. Changing one DCT coefficient impacts the whole \(8\times 8\) spatial block. Because the Wavelet filter kernels have size \(16\times 16\), changing one DCT coefficient impacts a block of \(23\times 23\) Wavelet residual coefficients.3 As a result, the implementation pre-computes for each of the three filter kernels a look-up table. Each of the three look-up tables contains how much changing one of the 64 DCT coefficients impacts the \(23\times 23\) Wavelet residual coefficients.
Footnote 2: [http://dde.binghamton.edu/download/stego_algorithms/](http://dde.binghamton.edu/download/stego_algorithms/), accessed on 02.05.2023
Footnote 3: \(8+16-1=23\)
The denominator requires the Wavelet residual of the cover image, which needs to be computed only once.
After pre-computing these quantities, the steganographer iterates over all DCT coefficients. The numerator can be read from the look-up table. For the denominator, the steganographer only needs to extract the right \(23\times 23\) window from the pre-computed Wavelet residual. Note that the \(23\times 23\) window in the denominator is the same for all DCT coefficients within the current \(8\times 8\) block.
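A sketch of this pre-computation and of the costmap loop is given below. It is not the reference code: names are illustrative, `pixels` is the decompressed cover image, `qt` is the quantization table, and the residual is computed with 'full'-mode correlation on the padded image so that the window indices stay unambiguous (the original implementation uses 'same'-mode filtering, which is where the off-by-one error occurred).

```
import numpy as np
from scipy.fft import idctn
from scipy.signal import correlate2d

def dct_impact_luts(kernels, qt):
    """Per DCT mode (u, v): the 23x23 change of each Wavelet residual caused by
    changing that quantized coefficient by one (the numerator of Eq. 1)."""
    luts = np.zeros((len(kernels), 8, 8, 23, 23))
    for u in range(8):
        for v in range(8):
            unit = np.zeros((8, 8))
            unit[u, v] = qt[u, v]                     # dequantized change of +-1
            spatial = idctn(unit, norm="ortho")       # spatial impact on the 8x8 block
            for k, kern in enumerate(kernels):
                luts[k, u, v] = correlate2d(spatial, kern, mode="full")  # 8 + 16 - 1 = 23
    return luts

def juniward_costmap(pixels, luts, kernels, sigma=2 ** -6):
    """Embedding cost of every DCT coefficient: numerator from the look-up
    table, denominator from the 23x23 residual window of the current block."""
    pad = 16
    padded = np.pad(pixels, pad, mode="symmetric")
    residuals = [correlate2d(padded, kern, mode="full") for kern in kernels]
    rho = np.zeros(pixels.shape)
    for bi in range(0, pixels.shape[0], 8):
        for bj in range(0, pixels.shape[1], 8):
            r0, c0 = bi + pad, bj + pad               # first 'full' output touched by this block
            wins = [r[r0:r0 + 23, c0:c0 + 23] for r in residuals]
            for u in range(8):
                for v in range(8):
                    rho[bi + u, bj + v] = sum(
                        np.sum(np.abs(luts[k, u, v]) / (sigma + np.abs(wins[k])))
                        for k in range(len(kernels)))
    return rho
```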
## 3 Off-by-one implementation error
While extracting the \(23\times 23\) window from the pre-computed Wavelet residual, the implementation takes a window accidentally shifted by one pixel to the bottom right. The following steps describe the implementation error. For illustration purposes, we use a cover image of size \(8\times 8\), as shown in Fig. 2.
1. The cover image of size \(8\times 8\) is expanded by 16 pixels in all directions using symmetry padding. The resulting image has size \(40\times 40\). The top left pixel is now at the spatial location \((16,16)\) (0-based index).
Figure 1: The three directional Wavelet filter kernels (\(16\times 16\)) used by J-UNIWARD evaluate the smoothness along the horizontal (LH), vertical (HL), and diagonal (HH) directions.
Figure 2: The blue area shows an \(8\times 8\) cover image. The Wavelet residual (top green plane) is obtained as follows: (1) The image is expanded by \(16\) pixels in all directions using symmetry padding (light gray area). (2) The \(16\times 16\) filter kernel (dark gray) is slid over the image. Additional zero-padding is used to maintain the input resolution. (3) The first position in the resulting Wavelet residual that is affected by the cover pixel \((0,0)\) is located at the spatial offset \((8,8)\), highlighted in dark green. However, the original implementation accesses the Wavelet residual starting from offset \((9,9)\). The figure was created based on [https://github.com/vdumoulin/conv_arithmetic/](https://github.com/vdumoulin/conv_arithmetic/).
2. The \(16\times 16\) Wavelet filter kernel is slid over the image using _same_ mode and zero-padding. The resulting Wavelet residuals have the size \(40\times 40\). To obtain the same output resolution, Matlab's imfilter and scipy's correlate2d prepend 7 zero pixels. Hence, the first output is computed from 7 padded zeros to the top left and the top-left \(9\times 9\) values from the padded cover (indices 0 through 8). As a result, the first filter output that interacts with a "real" pixel value is at the output location \((8,8)\).
3. When the cover pixel \((0,0)\) is changed, this affects the Wavelet residual in the window \((8,8)\) through \((30,30)\) (upper bound is inclusive). However, the original implementation accesses the window \((9,9)\) through \((31,31)\).4
Footnote 4: This notation already accounts for the fact that Matlab indices start at 1.
This off-by-one error is present both in the Matlab and the C++ implementation. The S-UNIWARD implementation is not affected because it does not do pre-computation. The implementations of side-informed UNIWARD (SI-UNIWARD) and perturbed quantization (PQ-UNIWARD) appear to contain the same error.
While off-by-one errors often raise out-of-bounds access exceptions, such an exception does not occur here because the cover image was previously padded to avoid boundary artifacts.
```
% Original Matlab implementation
subRows = row-modRow-6+padSize:row-modRow+16+padSize;
subCols = col-modCol-6+padSize:col-modCol+16+padSize;

% Suggested fix
subRows = row-modRow-7+padSize:row-modRow+15+padSize;
subCols = col-modCol-7+padSize:col-modCol+15+padSize;
```
## 4 Impact
We evaluate what impact this off-by-one error has, first on the costs per block, then on the embedding probabilities per DCT coefficient, and last for a steganalysis CNN.
**Difference in block costs.** Figure 3 shows the J-UNIWARD cost per \(8\times 8\) block. The cost per block is calculated similarly to Eq. 1, with the numerator set to a constant 1. The left panel in Fig. 3 shows the cover image _00001.jpg_ from the ALASKA2 dataset at JPEG quality 75. The second panel shows the cost per block calculated with the original implementation. The third panel shows the cost per block calculated after fixing the off-by-one implementation error. The right panel shows the difference between the original costmap and the fixed costmap. Negative values (blue) mean that the original implementation under-prices the block; positive values (red) show over-priced blocks. It can be seen that the cost per block ranges between 0 and 17000, while the two costmaps differ in cost by less than 1500.
Figure 3(a) again compares the two costmaps but only in one dimension. Each sample represents one \(8\times 8\) block. The x-coordinate shows the block cost calculated by the original implementation, the y-coordinate shows the block cost calculated by the fixed implementation. Of particular interest are samples above the diagonal, where the original implementation under-prices the block. However, the block costs are very similar between the two implementations, i.e., the points are all close to the diagonal.
**Difference in embedding probabilities.** During the embedding, the costs are converted to embedding probabilities represented by a Gibbs distribution. The embedding probabilities for cover image _00001.jpg_ are shown in Figure 3(b). Each sample represents one DCT coefficient. The x-coordinate shows the embedding probability calculated by the original implementation, the y-coordinate shows the embedding probability calculated by the fixed implementation. Compared to the block costs, the embedding probabilities deviate more from the diagonal. The spread probably comes from the exponential scaling in
the Gibbs distribution. Nevertheless, the differences between the original and the fixed implementations are still small.
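For context, a common way to realize this conversion is sketched below: the change probability of a coefficient follows \(p_{i}=e^{-\lambda\rho_{i}}/(1+e^{-\lambda\rho_{i}})\), with \(\lambda\) chosen so that the total entropy matches the payload. The sketch assumes binary embedding and an illustrative search bracket; ternary \(\pm 1\) embedding uses \(p_{i}=e^{-\lambda\rho_{i}}/(1+2e^{-\lambda\rho_{i}})\) instead, and the evaluated implementation may differ in details.

```
import numpy as np

def embedding_probabilities(rho, payload_bits, iters=60):
    """Convert costs rho into per-coefficient change probabilities (Gibbs
    distribution) whose total entropy equals the desired payload."""
    def probs(lam):
        # numerically stable form of exp(-lam*rho) / (1 + exp(-lam*rho))
        return 0.5 * (1.0 - np.tanh(0.5 * lam * rho))
    def entropy(p):
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return np.sum(-p * np.log2(p) - (1 - p) * np.log2(1 - p))
    lo, hi = 0.0, 1e3          # illustrative bracket; entropy decreases as lam grows
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if entropy(probs(mid)) > payload_bits:
            lo = mid
        else:
            hi = mid
    return probs(0.5 * (lo + hi))
```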
**Influence of image content.** We evaluate which image regions are particularly affected by the off-by-one error. For this purpose, we create synthetic images that alternate between smooth and textured regions. Two examples are shown in Fig. 5. The cover in the top row alternates from left to right. The cover in the bottom alternates from both left to right and top to bottom. As expected, smooth regions have high cost per block, while textured regions have low cost per block. In transitions from smooth to textured regions, the original implementation under-prices the block cost (blue). In transitions from textured to smooth regions, the original implementation over-prices the block cost (red). There is almost no difference in regions with a constant level of (non-)smoothness.
**Influence of JPEG quality.** Overall, the costs per block increase for lower JPEG quality factors. We assume this is because removing high-frequency content makes the image appear more smooth, thereby increasing the cost of embedding. Lower quality factors seem to increase the relative cost difference between the two implementations, but the difference is still small.
**Steganalysis.** We trained an EfficientNet-B0 to distinguish cover images and stego images with JPEG quality 75. The luminance channel of the stego images was embedded with the original J-UNIWARD implementation and embedding rate 0.4 bpnzAC. The accuracy on in-distribution test images is 0.887.
We generated the same test images but with the fixed J-UNIWARD costmap implementation. The test accuracy is 0.885. The barely noticeable difference in accuracy suggests that the off-by-one error has little impact.
Experiments with JPEG quality 95 and 30 showed similar behavior.
|
2309.11941 | A Digital Marketplace Combining WS-Agreement, Service Negotiation
Protocols and Heterogeneous Services | With the ever increasing importance of web services and the Cloud as a
reliable commodity to provide business value as well as consolidate IT
infrastructure, electronic contracts have become very important. WS-Agreement
has itself established as a well accepted container format for describing such
contracts. However, the semantic interpretation of the terms contained in these
contracts, as well as the process of agreeing to contracts when multiple
options have to be considered (negotiation), are still pretty much dealt with
on a case by case basis. In this paper we address the issues of diverging
contracts and varying contract negotiation protocols by introducing the concept
of a contract aware marketplace, which abstracts from the heterogeneous offers
of different services providers. This allows for the automated consumption of
services solely based on preferences, instead of additional restrictions such
as understanding of contract terms and/or negotiation protocols. We also
contribute an evaluation of several existing negotiation concepts/protocols. We
think that reducing the complexity for automated contract negotiation and thus
service consumption is a key for the success of future service and Cloud
infrastructures. | Ralph Vigne, Juergen Mangler, Erich Schikuta | 2023-09-21T09:56:49Z | http://arxiv.org/abs/2309.11941v1 | A Digital Marketplace Combining WS-Agreement, Service Negotiation Protocols and Heterogeneous Services
###### Abstract
With the ever increasing importance of web services and the Cloud as a reliable commodity to provide business value as well as consolidate IT infrastructure, electronic contracts have become very important. WS-Agreement has itself established as a well accepted container format for describing such contracts. However, the semantic interpretation of the terms contained in these contracts, as well as the process of agreeing to contracts when multiple options have to be considered (negotiation), are still pretty much dealt with on a case by case basis. In this paper we address the issues of diverging contracts and varying contract negotiation protocols by introducing the concept of a contract aware marketplace, which abstracts from the heterogeneous offers of different services providers. This allows for the automated consumption of services solely based on preferences, instead of additional restrictions such as understanding of contract terms and/or negotiation protocols. We also contribute an evaluation of several existing negotiation concepts/protocols. We think that reducing the complexity for automated contract negotiation and thus service consumption is a key for the success of future service and Cloud infrastructures.
## I Introduction
Over the last years, WS-Agreement (WSAG) [1, 2] has become a broadly used standard in the research field of Grid and Cloud computing for providing electronic contracts [3, 4, 5, 6]. Its success is based on a flexible means to (1) define involved parties, (2) define the context of agreements, and (3) provide the necessary terms for guarantees. This makes such contracts observable and consumable and therefore enables automated service integration in Grid and Cloud based systems (see e.g. [7, 8]).
While the flexibility of WSAG allowed it to become the de-facto standard for defining electronic contracts, there is no similar standard to describe negotiation protocols. Research defines automated service negotiation (e.g. [9, 10, 11, 12, 13, 14]) as the process of establishing business relations between service providers and consumers. It builds on the fact that service providers can guarantee different levels of service to consumers (traditionally: bandwidth, cpu,...). These service properties of course (1) have effects on each other, (2) lead to different prices, and (3) have to be matched to the preferences of service consumers. FIPA already standardized some custom negotiation protocols (e.g. [15]) for the Grid / Cloud communities. These protocols are often very specific to their respective application domains, i.e. dealing with a specific set of properties, how to balance them, and how to match them to user preferences in order to maximize the benefit for all participants. However, for all these approaches, providers and consumers have to implement and conform to the negotiation protocol. A generic way to describe complex protocols is still missing.
The contribution of this paper is a means to integrate arbitrary negotiation protocols and enable service domains to leverage WSAG-based electronic contracts. The vision is to allow service consumers to participate in negotiations solely based on their preferences, not on their understanding of the negotiation protocol, thus allowing for the coexistence of negotiation protocols, instead of their standardization.
In order to achieve this feat we built on the Marketplace presented in Vigne et al. [16]. The basic idea behind this Marketplace is to provide a repository of heterogeneous services (different APIs, protocols) and a set of microflows (small code snippets in a process language like e.g. BPEL). While similar approaches (semantic repositories; UDDI+OWLS) concentrate on finding services through semantic annotations, the Marketplace focuses on standardizing the interaction with services for particular application domains. We further improve on this marketplace concept by...
1. Utilizing the Marketplace information and conventions in order to automatically generate contracts. This allows participating service providers to benefit from WSAG without the burden of building WSAG handling and negotiation into their services.
2. Explaining how to facilitate the basic concepts of the Marketplace in order to standardize electronic contracts for similar services from different service providers.
3. Explaining how to use the concept of microflows (see above) in order to realize bilateral (one-to-one) and multilateral (one-to-many) automated service negotiation. The goal is to abstract from particular services, and instead allow service consumers to participate in a
negotiation solely based on their preferences, not based on their knowledge of negotiation protocols.
We further provide an evaluation of the viability of our approach by means of a case study. This case study consists of a categorization and analysis of current Grid and Cloud negotiation protocols and of how they can be realized in the Marketplace framework.
Our research in this area is based on our experience with large scale data stores [17] and complex applications in Grids and Clouds [18, 19, 20], and strongly motivated by our focus on Web-based workflow optimizations [21, 22] and their respective management [23].
In Section II we provide a short recap about the used Marketplace and its provided concepts and functionality. In Section III we discuss related work from the fields of WSAG negotiation for Grid- and Cloud-based systems. How WSAG handling is integrated into the Marketplace is discussed in Section IV. A case study about bilateral and multilateral negotiation protocols and additionally an example for strategic negotiation is given in Section V. In Section VII we conclude this work and provide an outlook for future work.
## II The Marketplace
As this work is based on the Marketplace presented by Vigne et al. [16], it is essential for further discussions to have a basic understanding of the implemented concepts and functionality.
As can be seen in Figure 1, the Marketplace holds a set of microflows that describe **how to interact with services**. The Marketplace is strictly **passive**, meaning it does not act as a middleware or proxy. Its purpose is to allow a service consumer to implement a consumer logic against a standardized API for a particular application _domain_.
There are two kinds of microflows contained in the Marketplace:
* Service microflows that implement an interaction with a particular service regarding the requirements of the domain API. This group of microflows is from now on referred to as **instance level operations**.
* Domain microflows that implement logic to fetch and invoke service microflows to select particular services for particular inputs. Each microflow correlates with one operation in the domain API. Thus we will further on refer to this group of microflows as **class level operations**.
From the service consumer's point of view, the process of doing business with service providers that participate in the Marketplace is totally transparent:
1. The consumer logic fetches the microflow that represent a particular **class level operation** from the Marketplace.
2. The consumer logic instantiates a Workflow Engine (WFE) with the microflow, and invokes the microflow with a set of input parameters (the required input parameters are also defined at the class level, e.g. as a contract).
3. While the class level microflow is running, it fetches and executes further **instance level operations** from the Marketplace, in order to find the service provider best matching the consumer's preferences and use it. This (potentially complex) selection of services is furthermore called **negotiation**, and is one of the subjects of this paper. An instance level microflow (also executed by the WFE) directly interacts with a service, and transforms / filters information to make it usable for class level microflows.
4. When a particular class level microflow is finished, its results can be used as part of the consumer logic.
For this paper we again reuse the Cinemas example presented in [16]. The cinema domain consist of three class level operations: (1) **Search** to search for shows of a particular movie, (2) **Book** to book tickets for a particular show and (3) **Search & Book** which combines the two (to demonstrate a complex, multi-part negotiation).
## III Related Work
As mentioned before, WS-Agreements have become the de-facto standard for electronic contracts in the web service context. In this section we give an overview of WSAG and autonomous Service Level Agreement (SLA) negotiation related research.
Since Andrieux et al. specified WSAG [2] in 2004 it has become a widely used standard in the area of web services to define contracts between two or more parties. Inside a WSAG, all agreed information is structured into different parts. The main parts are namely:
1. **Context:** This part contains the meta-data of the agreement e.g. participant's and life-time.
2. **Terms:** This part contains data about the agreement itself. It consists of **Service Terms**, which represent the functionality that will be delivered (so called Items), and the **Guarantee Terms**, which define assurances on the service quality associated with the defined Service Terms.
This formalized and comprehensive way of describing agreements and the corresponding guarantees makes WSAG **automatically computable and observable** [8, 24]. Sakellariou
Fig. 1: The Marketplace
et al. [4] extended Guarantee Terms to be represented as the **result of a function** instead of static values, making it more flexible to observe complex guarantees.
As WSAG was originally designed for SOAP-based services, Kuebert et al. [25] introduced a way to use it also for **RESTful services**, which have increased in number rapidly over the last few years.
Haq identifies in [5] the **compensation of the service consumer's high dependency on the service provider**, as a result of using WSAG, as an additional reason for their common use in the field of Grid and Cloud computing. He further states that the emergence of Composite Service Providers leads to **complex value chains** which therefore consist of aggregated WSAG. While our approach provides a simple concatenation of these WSAGs, he states that not every piece of information created during the composition should be exposed to each participating party. Therefore he introduced **SLA-Views**, which are basically a customized view of the aggregated WSAG for each participating party.
Broberg et al. [3] gives an overview about **state-of-the-art and future directions** in the field of market-based Grid and Utility computing. Although they focus on negotiation strategies related to scheduling problems, they additionally conclude that it is important to overcome the restrictions in flexibility of negotiation protocols to exploit the benefits for service consumers generated by a market with competing service providers.
Zulkernine et al. [6, 26] developed a flexible system for WSAG negotiation with the focus on **multilateral negotiation strategies** based on decision support systems. They also conclude that a way to define flexible and provider specific protocols would further increase the quality of negotiation results.
Faratin et al. [14] give a detailed discussion how such sets of decisions and preferences could look like. Within their example implementation of **autonomous service negotiation**, different offers are evaluated and counteroffers are created on the base of a value scoring algorithm. They define **negotiation tactic** as the short-term decision making focused on a narrow set of criteria while the **negotiation strategy** represents the weighting of criteria over time.
As more complex negotiation protocols come into the focus of research, WSAG reaches its limitation as it offers **only two messages**, namely _offer_ (as an input for negotiation) and _agreement_ (as the output of a negotiation), which makes it feasible only for the "Contract Net Interaction Protocol" [27] (see also Figure 4). Hung et al. elaborated that a formalized negotiation must consist of three different parts, which they further used as groundwork when developing the "WS-Negotiation Specification" [28]. These three parts are namely:
1. **Negotiation Message** describes the messages which are sent and received during a negotiation. These messages can be of different types, namely _Offer, Counteroffer, Rejected, Accepted, Expired, SinglePartySigned, Signed_, and _Unsigned_.
2. **Negotiation Protocol** defines a set of rules and mechanisms the negotiating parties must follow. It uses negotiation primitives to describe different interactions between the parties and what pre- and post-conditions are associated to each primitive (similar to declarative workflow descriptions). The following primitives are defined: _Call for Proposal, Propose, Accept, Terminate, Reject, Acknowledge, Modify,_ and _Withdraw_.
3. **Negotiation Decision Making** represents an internal and private decision making process based on a cost-benefit function and therefore represents the _negotiation strategy_.
Venugopal et al. [9] focused on the negotiation protocol and proposed Rubinstein's approach of "Alternating Offers" [29] (see also Figure 5) as one way to increase the quality of **bilateral** service negotiation. A **multilateral** adaptation of this protocol has been standardized by [15]. Yan et al. [7] further extended this protocol (see also Figure 6) because of its shortcomings in **negotiation coordination** i.e. coordinating multiple counteroffers for various service providers.
## IV WS-Agreements & The Marketplace
As stated in the introduction, we try to reuse the Marketplace to commoditize the utilization of contracts (in our case WS-Agreements (WSAG)). We envision this to work similar to the way the Marketplace deals with heterogeneous services (i.e. same application domain but different APIs, slightly different semantics).
### _Concepts_
For services in the same application domain, it is possible to provide a unified contract template [30], that covers possible terms and restrictions for all services, but has a certain characteristic for particular services. We again propose a split analog to the class level / instance level differentiation described in Section II:
* By providing domain contracts that define the interaction with the Marketplace, as well as common criteria for selecting any service from the domain.
* By providing service contracts that all have the same structure and define the level of service (quality) that can be expected from particular services.
Contracts are not some information that is fetched from a repository, but they are something that is tightly coupled to the usage of services. They are created, negotiated, and accepted while using certain class level operations of the Marketplace. Contracts may of course differ from customer to customer, and are strongly depending on the kind of service that a service consumer tries to utilize (i.e. the input parameters).
Thus before the consumer logic can execute certain operations from the Marketplace (i.e. fetch microflows and execute them in a WFE as describe in Section II), it has to ensure that the properties of a service match its preferences. This can be done by means of a WSAG.
The typical effort for utilizing WSAG includes (1) fetching a contract template, (2) making an offer, and (3) receiving
the agreement (or rejection). Of course additional logic may be necessary to (a) ponder which services to actually make an offer, and (b) decide between multiple possible agreements (under the condition that it is not necessary to enact on all agreements).
Furthermore we have to consider that services may know nothing about WSAG, but still offer an API to achieve something similar. We conclude: it will be necessary to (1) transform WSAG, or (2) implement logic to create WSAG templates and / or decide if a service accepts an offer. This logic can, as explained in Section II, be expressed in microflows at service level.
For Figure 2, keep in mind that the Marketplace is not a middleware. The figure just depicts which kinds of information have to be fetched by the WFE, and which kinds have to be provided by the user and the services in order to enable WSAG and SLA negotiation.
As illustrated, after having received a domain level template, the service consumer has to provide the necessary information to create a _domain offer_, i.e. a WSAG offer. This domain offer is used as input for the according domain operation (microflow enacted by the WFE).
First the operation _collects (COL)_ the _templates_ of all service providers. This is done by executing the corresponding _class and succeeding instance level microflows_. When all templates are collected, they are filtered using the information provided via the _domain offer_.
Next, these _templates are aggregated_ and presented to the user who _selects (SEL)_ according to personal preferences. Now the selected templates are supplemented with information included in the _domain offer_ and therefore become _service offers_.
During the _negotiation (NEG)_ these _service offers_ are used to negotiate with the corresponding service providers (see Section V for details about negotiation protocols) using a negotiating class level operation (microflow) and, in succession, the specific instance level operation. At the end of the negotiation, a _service agreement_, authorizing the usage of the service, is defined.
Providing this _service agreement_, the service consumer's WFE can _use the service (USAGE)_. How this is done is discussed in detail in [16].
After the service is used, a _domain agreement_ is created, using among others the _service agreement_ received during the negotiation. This _domain agreement_ and the _execution results_ of the service execution, representing the defined output of the domain operation, are made persistent for the user.
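To make this flow concrete, the following hypothetical Python sketch mirrors the COL, SEL, NEG and USAGE phases of Figure 2. Every callable and data structure is purely illustrative: in the real system each of them corresponds to a class or instance level microflow that is fetched from the Marketplace and enacted by the consumer's WFE.

```
def run_domain_operation(domain_offer, providers, fetch_template, matches,
                         select, negotiate, use_service):
    """Class level operation: collect templates, let the user select, negotiate
    service agreements, and use the services backed by an agreement."""
    # COL: collect the WSAG templates of all participating service providers
    templates = {p: fetch_template(p) for p in providers}
    # filter the templates against the constraints stated in the domain offer
    candidates = {p: t for p, t in templates.items() if matches(t, domain_offer)}
    # SEL: the user selects templates according to personal preferences
    selected = select(candidates)
    # NEG: supplement the selected templates into service offers and negotiate
    agreements = {p: negotiate(p, t, domain_offer) for p, t in selected.items()}
    agreements = {p: a for p, a in agreements.items() if a is not None}
    # USAGE: enact the service calls that are backed by a service agreement
    results = {p: use_service(p, a) for p, a in agreements.items()}
    return agreements, results
```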
### _Implementation_
The implementation of a WSAG aware system requires utilizing existing information from the Marketplace about the (1) included services (their properties) and (2) their usage.
* Service properties are a set of static attributes that each service provider must provide to participate in a certain domain of the Marketplace. Examples for the cinema domain are address, number of cinemas/seats, smoking allowed, food corner available,...Service properties lead to simple WSAG guarantee terms, with a particular unit and/or value range, that may be observable from the outside. Also these properties can be used to create the WSAG business values.
* Required input to class level operations (which is transformed to suitable input for services in instance level operations). Examples include the actual seat to book. The presence of such input parameters may be required or optional, and again expressed in particular units or as a value range. Thus such input parameters lead to WSAG qualifying conditions and/or creation constraints (how to fill out a WSAG template in order to create a valid offer).
* Output of class level operations are the most important part. For them the main purpose of WSAG, to monitor the quality of results, fully applies. They lead to WSAG guarantee terms with a particular unit and/or value range. The outside observability may of course be very different across a range of services. Thus it has to be linked to a class level operation, that in turn may invoke monitoring functionality at the instance level (e.g. Sakellariou et al. [4]).
The creation of a WSAG _domain template_ is strictly internal, and fully transparent for the Marketplace consumers. For its implementation we are building on the existing mechanisms of the Marketplace, mainly the definition of properties, and input / output schemas.
Fig. 2: Abstract Communication Overview of Using the Marketplace

```
<rng:grammar datatypeLibrary="..." xmlns:rng="..." xmlns:wsag="...">
  <rng:start>
    <prop:properties xmlns:prop="..." xmlns:d="...">
      <rng:element name="prop:address">
        <d:caption xml:lang="en">Address</d:caption>
        <d:caption xml:lang="de">Address</d:caption>
        <rng:element name="prop:ZIP" wsag:Item="Zip">
          <d:caption xml:lang="en">ZIPCode</d:caption>
          <d:caption xml:lang="de">PLZ</d:caption>
          <rng:data type="integer"> <!-- 4 digits -->
            <param name="minInclusive">1000</param>
            <param name="maxInclusive">9999</param>
          </rng:data>
          <wsag:extension>
            <!-- specific WSAG information -->
          </wsag:extension>
        </rng:element>
      </rng:element>
    </prop:properties>
    <wsag:general>
      <!-- ... -->
    </wsag:general>
  </rng:start>
</rng:grammar>
```
As can be seen in Listing 1, we added a WSAG specific section to the already existing XML RNG Schema, to allow for WSAG specific information to be added (**wsag:Item**). As properties are class level information, it is up to a service domain expert to shape this basis for all contracts.
```
<rng:grammar datatypeLibrary="..." xmlns:rng="...">
  <rng:start>
    ...
    <rng:element name="input-message">
      ...
      <rng:element xmlns:d="..." name="date" wsag:Item="DateOfShow">
        <d:caption xml:lang="en">Date</d:caption>
        <d:caption xml:lang="de">Datum</d:caption>
        <rng:data type="date"/>
        <wsag:extension>
          <!-- specific WSAG information -->
        </wsag:extension>
      </rng:element>
      <rng:element xmlns:d="..." name="seats" wsag:Item="NumberOfSeats">
        <d:caption xml:lang="en">Seats</d:caption>
        <d:caption xml:lang="de">Sitze</d:caption>
        <rng:data type="integer">
          <param name="minInclusive">1</param>
          <param name="maxInclusive">6</param>
        </rng:data>
        <wsag:extension>
          <!-- specific WSAG information -->
        </wsag:extension>
      </rng:element>
    </rng:element>
    <wsag:general>
      <!-- general information about the WSAG template -->
    </wsag:general>
  </rng:start>
</rng:grammar>
```
In Listing 2 an example class level input schema can be seen. Again it is possible to reuse datatype related information, but additionally units and/or expected ranges could be further described in WSAG.
### _Aggregation_
As defined in [16] the Marketplace allows defining class level operations (microflows) that in turn invoke instance level operations (microflows). Invocation does not mean that the microflows are executed by the Marketplace, but that they can be fetched from the Marketplace and executed in a Workflow Engine (WFE) at the service consumer's side.
But class level operations are also allowed to invoke other class level operations in order to e.g. realize a _search & book_ operation, which is the fusion of existing operations. As the domain designer has full control over the class level (API), semantically identical parameters cannot appear under different names (except as an error by the domain designer). Thus aggregating WSAG is straightforward: pieces of information can just be concatenated, and duplicate information can be removed.
As properties are valid at class level for all operations, when concatenating the pieces, identifiers have to be prefixed.
```
<wsag:Template xmlns:xsi="..." xmlns:xs="..." xmlns:wsag="..." xsi:schemaLocation="..." xmlns:prop="..." xmlns:msg="...">
  <wsag:Name>CinemasDomain:SearchBook</wsag:Name>
  <wsag:Context><!-- contains general information --></wsag:Context>
  <wsag:CreationConstraints>
    <wsag:Item wsag:Name="Zip">
      <wsag:Location>...</wsag:Location>
      <xs:minInclusive xs:value="1000"/>
      <xs:maxInclusive xs:value="9999"/>
    </wsag:Item>
    <wsag:Item wsag:Name="DateOfShow">
      <wsag:Location>ServiceDescriptionTerm/msg:date</wsag:Location>
    </wsag:Item>
  </wsag:CreationConstraints>
  <!-- additional generated information is placed here -->
</wsag:Template>
```
Listing 3: Class Level Template
This results in a simple concatenation algorithm depicted in Listing 3. As already described in Subsection IV-A, the aggregated WSAG template can then be filled out by a user and utilized to make offers for a subset of services.
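One plausible reading of this concatenation step is sketched below in Python. The data structures and the prefixing rule for property identifiers are assumptions made for illustration only; they are not the Marketplace's actual schema.

```
def aggregate_templates(operation_templates, property_items):
    """Concatenate the creation-constraint items of several class level
    operations, drop duplicates, and add the class level property items once
    under a prefixed identifier."""
    items, seen = [], set()
    for op_name, template in operation_templates.items():
        for item in template["items"]:
            key = (item["name"], item["location"])
            if key not in seen:                    # remove duplicate information
                seen.add(key)
                items.append(dict(item))
    for prop in property_items:                    # properties hold for all operations
        prefixed = dict(prop, name="prop:" + prop["name"])
        key = (prefixed["name"], prefixed.get("location"))
        if key not in seen:
            seen.add(key)
            items.append(prefixed)
    return {"operation": "+".join(operation_templates), "items": items}
```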
## V Case Study
In this Section we justify our approach by analyzing a set of negotiation protocols. As already mentioned in the
Fig. 3: Class Level Templates: Aggregation and Transformation of Interfaces
introduction, negotiation is the process of establishing business relations between service providers and consumers. In the context of this paper this means that a WS-Agreement (WSAG) can be derived, which can be used in order to call services. Existing class level operations can be extended to include the functionality to:
**(1)**: Collect WSAG templates by invoking instance level operations.
**(2)**: Present aggregated templates to users, by utilizing callbacks that can be injected into class level operations (microflows) as presented in [16]. It is important to note that all templates will have the same structure / terms included, but may contain different WSAG creation constraints.
**(3)**: Present offers to services by invoking instance level operations with filled out WSAG templates.
**(4)**: Enact service calls as a result of accepted offers.
The instance level operations (microflows) mentioned in (1) and (3) can contain any kind of calls to actual services. This allows for a high degree of flexibility: (a) for WSAG unaware services, the logic to create a WSAG could be embedded in a microflow, or (b) for WSAG aware services, results can simply be transformed.
Furthermore, as described in (2) the service consumer (user) can select a subset of templates that match his preferences. As the service consumer (user) has also an overview of its past decisions, it can employ strategic or tactic negotiation as described in the related work. The same goes for service providers as described in (3).
Because class level operations interact with all participating service providers, its microflow represents a **multilateral negotiation protocol**. A WSAF complaint class level operation has to define a WSAG offer (_domain offer_) as an input and zero to many WSAG agreements (_domain agreements_) as output.
On the other hand passing a particular WSAG offer to an instance level operation for a particular service, can be considered to be an example of a **bilateral negotiation protocol**.
As already mentioned in the introduction, this separation between class and instance level is perfectly suited to map arbitrary per service negotiation protocols in a transparent fashion. The service consumer can just fetch and execute microflows, without having to worry about transforming input or results.
To further exemplify the principles involved, we evaluated several common concepts in more detail.
### _Bilateral Negotiation Protocols_
Bilateral negotiation protocols represent how a service consumer is able to negotiate with a **specific service provider (one-to-one)**. As for all negotiations, at the beginning there is a _service offer (SO)_ and at the end there is either a _service agreement (SA)_ or a cancellation of the negotiation. What happens in between is covered by the negotiation protocol (e.g. _send offer_, _confirm agreement_ or _quit negotiation_). In our concept, bilateral negotiation protocols are always defined using instance level operations. This is completely in line with their original intent to provide service specific information to the service consumer, which is also considered one-to-one information.
In the following, we illustrate our concept with two commonly used bilateral negotiation protocols.
#### V-A1 Contract Net Interaction Protocol
The Contract Net Interaction Protocol was introduced by Smith [27] in 1980 and represents the protocol which is today often referred to as the "Supermarket Approach". This means that the service consumer offers a contract (_send offer_) for the requested functionality and the service provider either accepts this offer or rejects it, without providing any counteroffer ("take it or leave it"). Because no counteroffer is provided to the service consumer, its strategic negotiation service is not necessary. For an example of how a microflow representing this protocol in its simplest form (without any parameter transformation) could look, see Figure 4.
We decided to include this protocol in our case study as it is widely used by today's real world services (e.g. online shops, cinema service providers, ...). Furthermore, it is referenced in a lot of today's research (e.g. [2, 6, 12, 13]) and is thus relevant for the Marketplace.
#### V-A2 Alternate Offers Protocol
The "Alternate Offers Protocol" was defined by Rubinstein [29] in 1982 and provides the possibility for service providers (SP) and service consumers (SC) to bargain about the delivered functionality and usage conditions. As illustrated in Figure 5 the protocol allows both parties to propose counteroffers (_SC: create counteroffer_ and _SP: request counter template_) if one is not willing to accept the current offer. In WSAG, each term defines permissible values [2] used as constraints when creating the counteroffer. This cycle goes on until both parties either accept the current offer or one party quits the negotiation process.
Because this protocol allows counteroffers, the service consumer must provide a _strategic negotiation_ logic to decide how the negotiation should continue and to create counteroffers (see Section V-C for details). Doing so allows each service consumer to realize their own bargaining strategy without customizing the instance level operation defined by the service provider.
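As a rough, non-normative sketch (function and method names are ours and not part of the Marketplace API), the offer/counteroffer cycle of the Alternate Offers Protocol from the service consumer's side could be outlined as follows:

```python
def alternate_offers(initial_offer, strategy, negotiate, max_rounds=10):
    """Offer/counteroffer cycle of the Alternate Offers Protocol, seen from
    the service consumer; `negotiate` wraps the provider's instance level
    operation and is assumed to return (agreement, counter_template)."""
    offer = initial_offer
    for _ in range(max_rounds):
        agreement, counter_template = negotiate(offer)
        if agreement is not None:
            return agreement                     # provider accepted the offer
        if counter_template is None or not strategy.worth_continuing(counter_template):
            return None                          # one party quits the negotiation
        # the counter template carries permissible values used as constraints
        offer = strategy.create_counteroffer(counter_template)
    return None
```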
Fig. 4: Bilateral Negotiation: Contract Net Interaction Protocol [27]
Although, to the best of our knowledge, not many of today's real world services support this protocol, it is quite popular among researchers (e.g. [7, 10, 31]). We agree with Rubinstein et al. [29] that bargaining is essential for the creation of mature markets.
### _Multilateral Negotiation Protocols_
So far we focused on bilateral negotiation protocols, defining the interaction between one specific service provider and a service consumer (one-to-one). In this Section we extend our focus to multilateral negotiation protocols, as only these allow a service consumer to use offers from **competing service providers (one-to-many)** for its negotiation strategy. As discussed in e.g. [3, 29], if this information is used to create counteroffers, the overall benefits for the service consumer will significantly increase, because the service consumer is completely informed about the market.
Multilateral negotiation is similar to the "Alternate Offers Protocol" (see Section V-A2), only this time the bargaining is defined at _class level instead of instance level_. The corresponding operation therefore executes the negotiation protocols defined by each service provider every time a new service agreement should be negotiated1. If re-negotiation with particular service providers is not desired, class level logic can easily exclude them. Again, the service consumer provides a _strategic negotiation_ logic, which is in charge of creating counteroffers representing the preferences of the service consumer.
Footnote 1: It can therefore happen that the “Alternate Offers Protocol” is used inside an iterated multilateral negotiation.
The definition of such protocols is built on the interaction of class level and instance level operations. As this interaction can be utilized in various ways, a lot of room for domain specific customization is left open, e.g. service enactment [31], service decomposition [7].
We decided to implement the "Iterated Contract Net Interaction Protocol", which was standardized by the FIPA [15] and extended with negotiation coordination capabilities by Yan et al. [7].
As illustrated in Figure 6, during the negotiation _service offers (SO)_ are sent to various service providers (_send offers_) using the individual bilateral negotiation protocol (_Instance Level Operation: Negotiate_) of each service provider. Afterwards the collected _service agreements (SA)_ are sent to the strategic negotiation logic of the service consumer (_check agreements_). The result of the strategic negotiation is either to accept some of the agreements (_confirm agreements_), to quit the negotiation (_quit negotiation_) or to propose counteroffers. If counteroffers are placed, first the _counter templates (ST)_ of the service providers are collected (_request counter templates_). Then the _strategic negotiation_ logic of the service consumer filters these templates and creates counteroffers (_SO_) for all service providers included in the next iteration (_filter templates and create counteroffers_).
This way of bargaining is also known as "reverse auction" where many different service providers compete. Therefore, in each iteration either the price decreases or the offered functionality increases.
### _Strategic Negotiation_
Strategic negotiation is used as a means to enact the service consumer's / provider's preferences. Implementing strategic negotiation as part of class / instance level operations makes it possible to realize individual service preferences without changing the negotiation protocol provided by the Marketplace, and to focus on the outcome of a specific negotiation.
How these preferences are expressed and how competing offers are rated is strongly affected by microeconomics and therefore beyond the focus of this paper. As pointed out by Faratin et al. [14], for autonomous service negotiation three areas must be considered: (1) Negotiation Protocols, which are discussed in this case study, (2) Negotiation Issues, which are covered by WSAG (see Section IV), and (3) Negotiation Strategies to define a reasoning model for the service customer's preferences.
They further define that their approach for autonomous negotiation uses a _value scoring algorithm_ as a foundation. In this algorithm, each single criterion of the negotiation is represented by a _tactic_. Tactics are functions calculating the value of the assigned criterion. To create an offer, all tactics are weighted and combined. By varying the weight of each tactic over time, the **long-term negotiation strategy** is realized.
Fig. 5: Bilateral Negotiation: Alternate Offers Protocol [9, 29]
Fig. 6: Multilateral Negotiation: Iterated Contract Net Interaction Protocol [7, 15]
As an example of a negotiation strategy, they suggest that at the beginning of multilateral negotiations the behavior of competing service providers may influence the counteroffer more than the value of a single criterion. Therefore, behavior-representing tactics are weighted higher than other tactics. Towards the end of the negotiation phase, however, single criteria may gain importance. How tactics and strategy have to be combined to achieve the best negotiation results can be derived from various sources, e.g. historical data or the user's preferences.
In our approach all this logic is provided on behalf of the service consumer and thus does not directly affect the Marketplace.
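To illustrate the idea of such a value scoring scheme (our own simplified sketch of the approach described by Faratin et al. [14], not an actual implementation; tactic functions and numeric values are hypothetical), tactics can be modeled as functions whose weighted sum scores an offer, with the weights shifted over time to realize the long-term strategy:

```python
def score_offer(offer, tactics, weights):
    """Weighted combination of per-criterion tactic values (each in [0, 1])."""
    return sum(weights[name] * tactic(offer[name]) for name, tactic in tactics.items())

# Hypothetical tactics: cheaper offers and larger provider concessions score higher.
tactics = {
    "price":     lambda price: max(0.0, 1.0 - price / 50.0),
    "behaviour": lambda concession: min(1.0, concession),
}
early_weights = {"price": 0.2, "behaviour": 0.8}   # behaviour dominates early on
late_weights = {"price": 0.8, "behaviour": 0.2}    # single criteria dominate later

offer = {"price": 20.0, "behaviour": 0.5}
print(score_offer(offer, tactics, early_weights))  # ~0.52
print(score_offer(offer, tactics, late_weights))   # ~0.58
```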
## VI Extended Cinemas Example
So far we introduced our concept for WSAG integration and flexible bilateral and multilateral negotiation protocols. In this Section we integrate them into the cinemas example introduced in Section II and [16] as illustrated in Figure 7.
In order to use this operation, the service consumer provides a _domain offer_ for the class level operation _Search & Book_ as the only input.
Calling the class level operation _Search_ is the same as described in [16]. Only the necessary parameter transformations before and after the call are different, as the interface of _Search & Book_ and the internal data representation have changed.
Now that all offered shows are known, negotiations for each show are performed. To do so, the corresponding _class level operation_ is called (_negotiate show_). The _domain interface_ of this operation defines that a reference to a particular show must be included in the input and that agreements for each show are the output. Using this input message, the operation is able to deduce the correct service provider and calls its _instance level operation_ for negotiation (_perform negotiation_). In addition, parameter transformations to create _service offers (SO)_ and to compute _service agreements (SA)_ must be performed.
After all shows are negotiated, the service consumer (acting as the _strategic negotiation service_) selects the preferred show (_select show_). Furthermore, all _service agreements_ except the selected one must be canceled. This is done as described in the agreement, e.g. by sending the agreement to a specific URI. It must be noted that we excluded this message flow from our illustration to keep it neat.
In the end, the selected _service agreement (SA)_ is used as an input for the class level operation _Book_. In our example this operation defines in its class level operation interface that either a _service agreement_ or a reference to a particular show is valid input. Depending on this input, the operation decides (_parse input message_) whether it must negotiate the show (_negotiate show_) before using it or not. In the end the service operation _Book_ of the corresponding service provider can be called. It defines in its _instance level operation interface_ that a _service agreement (SA)_ must be included to authorize the usage. As output, the _execution results_ of the service are defined.
These _execution results_ and the selected _service agreement_ represent the defined output of _Search & Book_.
## VII Conclusion and Future Work
In this paper we presented flexible ways to (a) define WSAG contracts for arbitrary services, and to (b) transparently allow different services to utilize custom negotiation algorithms and APIs. We showed that both goals could be achieved by building on an existing Marketplace concept, which promotes microflows (small pieces of code executable by a Workflow Engine) to achieve a high level of abstraction for service consumers.
We first described how to extend the concept of unified interfaces to create unified contracts. We then introduced how to use the concepts of class (domain) and instance (service) level to provide flexible negotiation protocols. Finally we provided a case study to justify our approach.
Within the case study we discussed (1) how bilateral negotiation protocols are defined using instance level operations, (2) how multilateral negotiation protocols are defined using class level operations, and (3) how service consumers can include customized negotiation strategies. We presented how to implement the "Contract Net Interaction Protocol" (because of its prevalence in real world services) as well as the "Alternate Offers Protocol" (as a prerequisite for mature markets). For multilateral negotiation protocols we presented the "Iterated Contract Net Interaction Protocol" in order to allow various service providers to compete for business. Additionally, we presented one negotiation strategy based on a value scoring algorithm.
The conclusion we can draw from this work is that, by embracing existing workflow-related concepts, it is possible to build a lightweight solution for dealing with electronic contracts in an environment where not all services support them (or support them in the same way).
Fig. 7: Multilateral Negotiation: Cinemas Example
Future work will be to explore the complexity of different types of negotiation protocols and to implement some custom negotiation strategy services. The possibility of monitoring complex guarantee terms using class level operations in the context of service composition is also promising. This can lead to much improved error handling and service selection, as it may allow for the prediction of service behavior.
|
2303.18080 | One-shot Unsupervised Domain Adaptation with Personalized Diffusion
Models | Adapting a segmentation model from a labeled source domain to a target
domain, where a single unlabeled datum is available, is one of the most
challenging problems in domain adaptation and is otherwise known as one-shot
unsupervised domain adaptation (OSUDA). Most of the prior works have addressed
the problem by relying on style transfer techniques, where the source images
are stylized to have the appearance of the target domain. Departing from the
common notion of transferring only the target ``texture'' information, we
leverage text-to-image diffusion models (e.g., Stable Diffusion) to generate a
synthetic target dataset with photo-realistic images that not only faithfully
depict the style of the target domain, but are also characterized by novel
scenes in diverse contexts. The text interface in our method Data AugmenTation
with diffUsion Models (DATUM) endows us with the possibility of guiding the
generation of images towards desired semantic concepts while respecting the
original spatial context of a single training image, which is not possible in
existing OSUDA methods. Extensive experiments on standard benchmarks show that
our DATUM surpasses the state-of-the-art OSUDA methods by up to +7.1%. The
implementation is available at https://github.com/yasserben/DATUM | Yasser Benigmim, Subhankar Roy, Slim Essid, Vicky Kalogeiton, Stéphane Lathuilière | 2023-03-31T14:16:38Z | http://arxiv.org/abs/2303.18080v2 | # One-shot Unsupervised Domain Adaptation with Personalized Diffusion Models
###### Abstract
Adapting a segmentation model from a labeled source domain to a target domain, where a single unlabeled datum is available, is one of the most challenging problems in domain adaptation and is otherwise known as one-shot unsupervised domain adaptation (OSUDA). Most of the prior works have addressed the problem by relying on style transfer techniques, where the source images are stylized to have the appearance of the target domain. Departing from the common notion of transferring only the target "texture" information, we leverage text-to-image diffusion models (e.g., Stable Diffusion) to generate a synthetic target dataset with photo-realistic images that not only faithfully depict the style of the target domain, but are also characterized by novel scenes in diverse contexts. The text interface in our method **D**ata **A**ugmen**T**ation with diff**U**sion **M**odels (DATUM) endows us with the possibility of guiding the generation of images towards desired semantic concepts while respecting the original spatial context of a single training image, which is not possible in existing OSUDA methods. Extensive experiments on standard benchmarks show that our DATUM surpasses the state-of-the-art OSUDA methods by up to +7.1%. The implementation is available at: [https://github.com/yasserben/DATUM](https://github.com/yasserben/DATUM)
## 1 Introduction
Semantic segmentation (SS) is one of the core tasks in computer vision [58, 9, 55], where a neural network is tasked with predicting a semantic label for each pixel in a given image [12]. Given its importance, SS has received significant attention from the deep learning community and has found numerous applications, such as autonomous driving [6, 8], robot navigation [28], industrial defect monitoring [36].
The task of semantic segmentation is known to require pixel-level annotations, which can be costly and impractical in many real-world scenarios, making it challenging to train segmentation models effectively. Moreover, the issue of _domain shift_ [48] can cause segmentation models to underperform during inference on unseen domains, as the distribution of the training data may differ from that of the test data. To make learning effective without needing annotations on the target domain, several Unsupervised Domain Adaptation (UDA) methods have been proposed for the task of semantic segmentation [49, 54, 11, 54]. Fundamentally, the UDA methods collectively use the labeled (or _source_) and the unlabeled (or _target_) dataset to learn a model that works well on the target domain. Despite being impressive in mitigating the domain gap, the UDA methods rely on the assumption that a considerably large dataset of unlabelled images is at one's disposal. However, collecting a large target dataset before adaptation poses a bottleneck in the rapid adoption of segmentation models in real-world applications. To circumvent this issue, several works have investigated the feasibility of using just a small subset of the unlabeled target samples (at times just _one_ sample) to adapt the model. This adaptation scenario is known as One-Shot Unsupervised Domain Adaptation (OSUDA) [3, 17, 57], where, in addition to the source dataset, only a single _unlabelled_ target sample is available.
Figure 1: In existing OSUDA methods data augmentation is done via stylization [17, 31]. In our proposed approach, we prompt the text-to-image diffusion models [38] to generate new images that not only depict the style of the target domain, but also more faithfully capture the diversity of the scene content.
While the OSUDA setting is realistic and cost-effective, relying solely on a single target image makes it challenging for traditional UDA methods to estimate and align distributions. To address the lack of target data, the OSUDA approaches generally overpopulate the target dataset with source images _stylized_ as target-_like_ ones [17, 31]. Albeit effective, these methods result in a target dataset that is limited to the scene layouts and structures inherent to the source dataset (Fig. 1 left). In this work, we argue that simply mimicking the style of the target is insufficient to train a robust target model, especially when only limited information about the target domain is available. Thus, we seek to diversify the scene content and spatial layout beyond what the source images can offer. Moreover, generating high-fidelity images is yet another challenging problem. Thus, in this work, we focus on denoising diffusion models (DM) [21, 38], a family of generative models with excellent capability in generating high-quality images. We propose to leverage DMs to augment the target dataset with images that not only resemble the target domain at hand, but also contain diverse and plausible scene layouts due to the rich prior knowledge encoded in DMs (see Fig. 1 right).
In detail, we fine-tune a DM [21, 38] on the single target sample to generate an auxiliary large target dataset. Following recent work [15, 42], we represent the target image with a special, rare and unique _token_ that encapsulates its visual appearance. Then, we exploit the vast knowledge of DMs about the objects (or _things_ classes) present in the source domain for a driving scenario [30, 14, 43]. Specifically, we prompt the model to generate a target dataset depicting such objects in a multitude of scenes, while maintaining the appearance tethered to the overall target domain _style_ via the unique token. Once an augmented target dataset is made available, any UDA method can be used to adapt to the target domain. We thus present our method **D**ata **A**ugmen**T**ation with diff**U**sion **M**odels (**DATUM**), for addressing OSUDA, as a connotation to the setting of having access to a single "datum" from the target domain. Our approach has the advantage of making any UDA method compatible with the one/few-shot setting. In our experiments, we add DATUM to existing UDA methods and compare against the state-of-the-art OSUDA. Our results and analysis demonstrate the efficacy of DATUM and its ability to diversify the target dataset. We believe that DATUM can contribute significantly to semantic segmentation as a _plug-and-play_ module.
Our **contributions** are three-fold: **(i)** We demonstrate, for the first time in the context of SS, the importance of generating semantically diverse and realistic target-like images in OSUDA. **(ii)** We propose DATUM, a generic data augmentation pipeline powered by DMs, for addressing the challenging yet relevant task of OSUDA, and **(iii)** while being conceptually simple, we show with extensive experiments, on standard sim-to-real UDA benchmarks, that DATUM can easily surpass the state-of-the-art OSUDA methods.
## 2 Related Works
**Unsupervised domain adaptation.** To bridge the domain gap between the source and target datasets, unsupervised domain adaptation (UDA) methods have been proposed, which can be roughly categorized into three broad sub-categories depending on the level where the distribution _alignment_ is carried out in the network pipeline. First, the feature-level alignment methods aim at reducing the discrepancy between the source and target domains in the latent feature space of the network under some metric. As an example, these methods include minimizing the Maximum Mean Discrepancy (MMD) [4] or increasing the domain confusion between the two domains with a discriminator network [23, 26, 32, 45, 54]. The latent space being high dimensional, the second category of UDA methods [54, 50, 34, 51] exploits the lower dimensional and more structured output space of the network to mitigate domain shift, while borrowing adversarial alignment techniques. The third category includes methods [22, 23, 26, 29, 32, 45, 47, 54] that align the source and the target domains in the input (or pixel) space by generating target-like source images via style transfer [16, 27, 63]. There is yet another successful line of UDA works that exploit self-training using a student-teacher framework [24, 25, 2].
While the above UDA methods are effective under the standard adaptation setting to varying degrees, where the entire target dataset is available for training, style transfer-based methods are particularly effective when the target data is inadequate to approximate a distribution. Different from the existing methods [17, 3], which are just capable of transferring style (or "appearance") information to the source images, our proposed DATUM can additionally generate novel and structurally coherent content in the target domain.
**Few-shot adaptation.** To improve the sample efficiency of UDA methods, _supervised_ few-shot domain adaptation (FSDA) methods [13, 56, 33] relax the need of having a large unlabeled target dataset, in favour of assuming access to a few but _labeled_ samples of the target domain. The FSDA methods [59, 62] exploit the labeled target samples to construct prototypes to align the domains. The setting of OSUDA is a more challenging version of FSDA, where a _single_ target sample is available _without_ any annotation. Due to the lack of means of constructing prototypes or aligning distributions with a single target sample, OSUDA methods [17, 31, 57] are based on transferring style from the target sample to the source dataset to artificially augment the target dataset. Once augmented, UDA methods such as self-training [17], consistency training [31], and prototypical matching [57] are applied. Similar to [17], we use the self-training framework DAFormer [24] to adapt to the generated target images. However, unlike the prior OSUDA works [17, 31, 57], DATUM's data generation pipeline is stronger, conceptually simpler and does not rely on many heuristics.
**Diffusion models.** Very recently, diffusion models (DM) [21, 46] have brought a paradigm shift in the generative modeling landscape, showing excellent capabilities at generating photo-realistic text-conditioned images [35, 38, 43]. To allow personalized and more fine-grained generation, works such as DreamBooth [42], Textual Inversion [15] and ControlNet [60] have extended DMs with different levels of fine-tuning, offering more flexibility. However, a handful of recent works [1, 19, 44] has explored the possibility of using a latent diffusion model [38], a fast alternative to DM, for generating class-conditioned _synthetic_ datasets, as replacements of the _real_ counterparts, to solve image recognition tasks. In contrast to these approaches, we specifically address the problem of domain adaptation by augmenting the target domain. We adopt a fine-tuning strategy [42] that explicitly incorporates the appearance of the target domain. Our approach associates a word identifier with the content of the target image, resulting in high-fidelity synthetic generations.
## 3 Method
In this work, we propose **D**ata **A**ugmen**T**ation with diff**U**sion **M**odels (**DATUM**), a generic method for creating a _synthetic_ target dataset by using a single real sample (and hence, _one-shot_) from the target domain. The synthetic dataset is then used for adapting a segmentation model. Sec. 3.1 introduces the task and gives a background about DM, while Sec. 3.2 describes DATUM.
### Preliminaries
**Problem formulation.** In this work, we address the problem of One-Shot Unsupervised Domain Adaptation (OSUDA), where we assume access to \(N^{\mathrm{S}}\) labeled images from a source domain \(D^{\mathrm{S}}=\{(X_{i}^{\mathrm{S}},Y_{i}^{\mathrm{S}})\}_{i=1}^{N^{\mathrm{S}}}\), where \(X_{i}^{\mathrm{S}}\in\mathbb{R}^{H\times W\times 3}\) represents an RGB source image and \(Y_{i}^{\mathrm{S}}\in\mathbb{R}^{H\times W\times|\mathcal{C}|}\) the corresponding one-hot encoded ground-truth label, with each pixel belonging to one of the classes in the set \(\mathcal{C}\). Unlike traditional UDA methods [22, 54], in OSUDA we only have access to a _single unlabeled_ target sample \(X^{\mathrm{T}}\), where \(X^{\mathrm{T}}\in\mathbb{R}^{H\times W\times 3}\).
In the context of semantic segmentation, the goal in OSUDA is to train a segmentation model \(f\colon\mathcal{X}\to\mathcal{Y}\) that can effectively perform semantic segmentation on images from the target domain. Given the sheer difficulty in training \(f(\cdot)\) with the single target image, our method seeks to generate a synthetic target dataset by leveraging a text-to-image DM.
**Background on Diffusion Models.** Diffusion Models (DM) [21] approach image generation as an image-denoising task. We obtain a sequence of \(T\) noisy images \(X_{1},\ldots,X_{T}\) by gradually adding random Gaussian noises \(\epsilon_{1},\ldots,\epsilon_{T}\) to an original training image \(X_{0}\). A parameterized neural network \(\epsilon_{\theta}(\cdot,t)\) is trained to predict the noise \(\epsilon_{t}\) from \(X_{t}\) for every denoising step \(t\in\{1,\ldots,T\}\). Denoising is typically carried out with a U-Net [40]. To enable conditioning, the network \(\epsilon_{\theta}(X_{t},y,t)\) is conditioned on an additional input \(y\). In the case of text conditioning, the embeddings from a text-encoder \(\tau_{\theta}\) for the text \(y\) are used to augment the U-Net backbone with the cross-attention mechanism [52]. For a given image-caption pair, the conditional DM is learned using the following objective:
\[\mathcal{L}_{DM}=\mathbb{E}_{X,y,\epsilon\sim\mathcal{N}(0,1),t}\bigg{[}|| \epsilon-\epsilon_{\theta}(X_{t},t,\tau_{\theta}(y))||_{2}^{2}\bigg{]} \tag{1}\]
To improve efficiency, we employ a DM, which operates in the latent space of a pre-trained autoencoder [38].
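As an informal illustration of this objective, a single denoising training step could be sketched as follows. The snippet assumes diffusers/transformers-style U-Net, text encoder, and noise scheduler interfaces, and is a simplification rather than the exact Stable Diffusion training code.

```python
import torch
import torch.nn.functional as F

def dm_training_step(unet, text_encoder, scheduler, x0, token_ids):
    """One simplified denoising training step implementing Eq. (1).

    x0 is the image batch; for a latent DM it would be the output of the
    pre-trained autoencoder's encoder."""
    noise = torch.randn_like(x0)                               # epsilon
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (x0.shape[0],), device=x0.device)
    x_t = scheduler.add_noise(x0, noise, t)                    # forward diffusion
    cond = text_encoder(token_ids)[0]                          # tau_theta(y)
    noise_pred = unet(x_t, t, encoder_hidden_states=cond).sample
    return F.mse_loss(noise_pred, noise)                       # ||eps - eps_theta||_2^2
```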
### Data Augmentation with Diffusion Models
Our proposed DATUM works in three stages and is shown in Fig. 2. In the first stage, called the **personalization stage**, we fine-tune a pre-trained text-to-image DM by using multiple crops from the single target image (see Fig. 2(a)). This steers the DM towards the distribution of the target domain of interest. Next, in the second **data generation stage**, we prompt the just fine-tuned text-to-image DM to generate a synthetic dataset that not only appears to be sampled from the target domain, but also depicts desired semantic concepts (see Fig. 2(b)). Finally, the **adaptive segmentation stage** culminates the three-stage pipeline of DATUM, where we combine the labeled source data with the synthetic pseudo-target data and train with a general purpose UDA method (see Fig. 2(c)). Next, we describe each stage in detail.
Figure 2: **The three stages of DATUM.** In the **personalization stage** (a), we learn to associate a unique token \(V_{*}\) with the appearance of the target domain using a single target image. In the **data generation stage** (b), we employ the personalized model to generate a large dataset corresponding to the target distribution. Class names are used to improve diversity. Finally, the **adaptive segmentation stage** (c) consists in training an existing UDA framework on the labeled source and the generated unlabeled pseudo-target datasets.
**Personalization stage.** The goal of the personalization stage is to endow the pre-trained DM with generation capabilities that are relevant to the downstream task. This stage is crucial because simply generating out-of-domain photo-realistic images is not useful for the downstream task. As an example, as shown in Fig. 3(b), when an out-of-the-box DM is prompted with \(p\) = "_a photo of_ [CLS]", where CLS represents a user-provided object class from the dataset, the DM generates high-fidelity images that truly depict the desired semantic concept. However, when compared to the real target domain (see Fig. 3(a)) the DM generated images of Fig. 3(b) have little to no resemblance in appearance. Given that the labeled source dataset already provides a rich prior to the segmentation model about the object classes of interest, having more unrelated and unlabeled images is unappealing.
Thus, we strive to imprint the appearance of the target domain into the synthetic dataset, while just using a single real target sample, in order to obtain more targeted synthetic data. Towards that end, we use DreamBooth [42], a recently proposed technique for fine-tuning the DM that allows for the creation of novel images while staying faithful to the user-provided subset of images. In detail, DreamBooth associates a unique identifier \(V_{*}\) with the user-provided subset of images by fine-tuning the DM weights. Similarly, we fine-tune the DM on the single target image while conditioning the model with the prompt \(p\) = "_a photo of_\(V_{*}\)_urban scene_". This results in the unique identifier \(V_{*}\) capturing the target domain appearance. Once trained, we prompt the fine-tuned DM with \(p\) = "_a photo of_\(V_{*}\)_urban scene_" and report the results in Fig. 3(c). We observe the stark improvement in the overall visual similarity with the reference target domain images depicted in Fig. 3(a). As a result of the personalization step with \(V_{*}\), we can now condition the DM to generate more samples of the desired target domain.
Figure 3: **Qualitative study illustrating the underlying motivations of our three-stage approach. (a) Real images from the Cityscapes target domain. (b) Out-of-the-box Stable Diffusion (SD) can generate photo-realistic images given the [CLS] name in the prompt, but these barely have any resemblance to Cityscapes. (c) Fine-tuning SD on a single target image (personalization stage) leads to generations that truly mimic the Cityscapes domain, but at the cost of losing diversity. (d) Our proposed prompting strategy (data generation stage) leads to synthetic generations that are both photo-realistic and resemble Cityscapes-like images.**
However, a thorough inspection of the generated images in Fig. 3(c) reveals that the images lack diversity. The DM overfits to the single target image and loses its ability to generate many other objects. For instance, some classes (such as _car_) are repeated whereas others (such as _bus_, _bike_, _truck_) never appear. To prevent this overfitting issue, we train the DM for a limited number of iterations. Moreover, we disable the class-specific _prior-preservation_ loss used in DreamBooth [42], designed for not forgetting other concepts, since our goal is to capture the essence of the target domain rather than generating a desired object in many unrealistic and unnatural scenarios. For fine-tuning the DM, we optimize the training objective described in Eq. (1).
**Data generation stage.** After the personalization stage, our goal is to generate a dataset of synthetic images of the target domain. As we use just a single target image in the personalization stage, the generation capability of the fine-tuned DM can still be limited to a few scenes. Therefore, to elicit diverse generations from the fine-tuned DM, at inference we use more targeted prompts than the ones used during training. Specifically, we employ class-wise prompts in the form of: "_a photo of a_\(V_{*}\) [CLS]". The [CLS] corresponds to the name of the _things_ classes (_e.g_., _bus_, _person_, etc.) we want to generate, as defined in [5]. Our choice of using only the "things" classes is motivated by the fact that in a driving application, the "things" classes mostly co-occur with "stuff" classes (_e.g_., _building_, _sky_). Thus, explicitly prompting the model to generate stuff classes is redundant. As shown in Fig. 3(d), injecting the "things" class names into the inference prompt leads to an improved diversity in the generations, while staying close to the target domain in appearance. This helps in combating the long-tailed phenomenon of semantic segmentation datasets, where some minority classes (_e.g_., _bike_) appear less frequently than others, such as _car_ and _road_.
**Adaptive segmentation stage.** While the pseudo-target images in the synthetic dataset contain the user-desired object, they still lack pixel-level information. To overcome this limitation, we resort to UDA techniques that enable a segmentation model to be adapted to an unlabeled target dataset. In this work, we leverage UDA methods such as DAFormer [24] and HRDA [25], but our approach is not exclusive to these two methods. Notably, the optimization objective of these two UDA methods remains unaltered. In summary, our proposed DATUM can transform any UDA method into an effective OSUDA method.
## 4 Experiments
### Experimental set up
**Dataset and settings.** We follow the experimental settings established in the OSUDA literature [31, 17, 57] and conduct experiments on two standard _sim-to-real_ benchmarks: GTA \(\rightarrow\) Cityscapes and SYNTHIA \(\rightarrow\) Cityscapes, where GTA [37] and SYNTHIA [41] are the source domains in the respective settings, and Cityscapes [10] is the target domain. In detail, the GTA dataset comprises 24,966 synthetic images with a resolution of 1914 \(\times\) 1052, and SYNTHIA contains 9400 synthetic images of resolution 1280 \(\times\) 760. Cityscapes contains 2975 training images and 500 validation images of size 2048 \(\times\) 1024, and is captured under real-world driving conditions. Note that, since we operate in the one-shot adaptation scenario, as in [31, 17, 57], we assume access to a single _datum_ from the target domain, which is chosen at random during training.
**Implementation details.** We employ the Stable Diffusion (SD) implementation of Latent Diffusion Models (LDM) [38]. We use the publicly available _Diffusers_ library [53] for all the experiments related to generating synthetic data. In particular, for generating synthetic images in the target domain, we start from the Stable Diffusion v1.4 checkpoint [39] and fine-tune it using the DreamBooth [42] method. We refer the reader to LDM [38] for details about the encoder, U-Net, and decoder architectures.
For fine-tuning SD, we randomly crop patches of 512 \(\times\) 512 from the original 2048 \(\times\) 1024 resolution, and use a generic prompt \(p\) = "_a photo of a \(V_{*}\) urban scene_", given that the target domain Cityscapes was captured in an urban set-up [10]. We train SD for 200 iterations, and we find that for the one-shot setting longer training leads to overfitting on the target scene. Once trained, we generate a synthetic target dataset of cardinality 2975, which is equivalent in size to the Cityscapes training set, by utilizing inference prompts of the form \(p\) = "_a photo of a \(V_{*}\)_ [CLS]". DreamBooth generates images at the same resolution as the input, which is 512 \(\times\) 512. This generated dataset then serves as the target domain for adaptation, as in UDA.
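As a hedged sketch of the data generation stage using the Diffusers pipeline (the checkpoint path, the placeholder token "sks" standing in for \(V_{*}\), and the exact list of _things_ classes are illustrative assumptions; only the prompt format and dataset cardinality follow the text above), generation could look like:

```python
import os
import random
from diffusers import StableDiffusionPipeline

# Assumed path to the DreamBooth fine-tuned SD v1.4 checkpoint (200 iterations
# on 512x512 crops of the single target image); "sks" plays the role of V*.
pipe = StableDiffusionPipeline.from_pretrained("./sd14-datum-personalized").to("cuda")

THINGS = ["car", "bus", "truck", "train", "motorcycle", "bicycle", "person", "rider"]

os.makedirs("pseudo_target", exist_ok=True)
for i in range(2975):                      # same cardinality as the Cityscapes train set
    cls = random.choice(THINGS)
    image = pipe(f"a photo of a sks {cls}").images[0]   # 512 x 512 generation
    image.save(f"pseudo_target/{i:05d}_{cls}.png")
```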
For training the final segmentation model on the source and generated datasets, we use the network architecture from state-of-the-art UDA methods [24] that use MiT-B5 [58] as the encoder and a context-aware fusion [24] as the decoder. This is analogous to the most popular ResNet-101 [18] as a backbone, and DeepLabV2 [9] as the decoder. We also experiment with another UDA method: HRDA [25]. For both these experiments with DAFormer and HRDA, we keep the training protocol and hyperparameters unchanged. Both the ResNet-101 and MiT-B5 are pre-trained on ImageNet-1k.
**Evaluation metrics.** Following the standard protocol [31], we evaluate all models with the mean Intersection over Union (mIoU). For the GTA \(\rightarrow\) Cityscapes benchmark, we compute mIoU over 19-classes, whereas for SYNTHIA \(\rightarrow\) Cityscapes, we report both mIoU\({}^{13}\) and mIoU\({}^{16}\) for 13 and 16 classes, respectively [17]. Note, all results are computed on the validation set of Cityscapes.
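For reference, mIoU can be computed from a per-class confusion matrix accumulated over the Cityscapes validation set; the following is a minimal sketch rather than the evaluation code used in our experiments.

```python
import numpy as np

def accumulate(conf, gt, pred, num_classes=19, ignore_index=255):
    """Add one image's pixels to a (num_classes x num_classes) confusion matrix."""
    mask = gt != ignore_index
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    conf += np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)
    return conf

def mean_iou(conf):
    """conf[i, j] = number of pixels with ground-truth class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp
    iou = tp / np.maximum(union, 1)   # classes absent from gt and pred count as 0 here
    return iou.mean()
```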
### Comparison with the state-of-the-art
**Baselines.** We compare our proposed method with the state-of-the-art OSUDA methods and UDA methods adapted to the OSUDA setting: CycleGAN [63], AdaptSeg [50], DACS [49], ProDA [61], CBST [54], OST [3], SMPPM [57], ASM [31], CACDA [17], DAFormer [24] and HRDA [25]. Given that DATUM focuses primarily on data generation, we pair it with the UDA methods DAFormer and HRDA under the OSUDA setting. We denote these models as DAFormer + DATUM and HRDA + DATUM, which use a **purely synthetic** target dataset generated by DATUM, alongside the source.
For a fair comparison with baselines, we use the DAFormer network architecture (with MiT-B5 backbone), which has demonstrated superior effectiveness compared to weaker counterparts, such as ResNet-101 [24]. However, as performance metrics for some older OSUDA methods [3, 31] are not available with a DAFormer-like architecture, we also experiment using DeepLabV2 with ResNet-101 backbone.
**Main results.** In Tab. 1 and Tab. 2 we report the results on the GTA \(\rightarrow\) Cityscapes and the SYNTHIA \(\rightarrow\) Cityscapes benchmarks, respectively, under the traditional UDA as well as the OSUDA setting. The traditional UDA setting [24] is denoted as \(\overline{\text{All}}\), as it uses all target samples, while the OSUDA setting is denoted as \(\overline{\text{One}}\) since we have access to only a single datum. Following the standard practices from the one/few shot learning literature, we report our results averaged over 3 independent runs using randomly sampled unlabeled real target datum. Also note that in our experiments we report the model performance after the last training iteration, instead of picking the maximum mIoU.
From Tab. 1 we notice that using our generated target dataset for training the state-of-the-art UDA methods in the OSUDA setting greatly improves their performance, independent of the backbone. Compared to the best competitor method CACDA, our data generation technique helps in achieving new state-of-the-art results, comprehensively outperforming CACDA by +2.0% and +3.1% in mIoU\({}^{13}\). We believe that these findings are highly significant in bridging the gap between OSUDA and standard UDA.
**Comparison with style-transfer methods.** Given that DATUM is akin to data augmentation in image stylization, we compare it against two style transfer techniques used in existing OSUDA methods: RAIN [31] and PT+CDIR [17]. Tab. 3 reports the results. We observe that generating novel scenes with DATUM is more impactful than simply augmenting the source images with the target style as in the other methods.
### Ablation analysis
To examine the effectiveness of DATUM, in this section we conduct thorough ablation analyses of each component associated with it. All ablations are carried out on GTA \(\rightarrow\) Cityscapes benchmark with DAFormer [24] using only one random datum from the target domain.
**Impact of number of shots.** To investigate the impact of the number of real target samples (or #TS) on the OSUDA performance, we conduct an ablation study where we vary the #TS and personalize SD with DATUM for a varied number of training iterations. In Fig. 4 we plot the performance of DAFormer + DATUM for different #TS and compare it with SD. We observe that for lower #TS (_i.e_., one shot) DATUM achieves the best performance, and the mIoU gradually degrades with prolonged training. This is because the SD overfits on the single target image and loses its ability to generate diverse scenes. The issue is less severe when the #TS
increases to 10 (_i.e_., ten shot), and the mIoU is fairly stable. Nevertheless, DATUM generates more informative target images than SD, highlighting the need for incorporating the target style into the synthetic dataset generation process.
**Impact of prompts.** Since DATUM depends on the choice of prompts used during training and inference, here we ablate the impact of training and inference prompts by quantitatively measuring the mIoU for different combinations and report the results in Tab. 4. We observe that the combination of training prompt \(p\) = "_a photo of a \(V_{*}\) urban scene_" and class-aware inference prompt \(p\) = "_a photo of a \(V_{*}\)_[CLS]" leads to the best results (second row). When compared to the class agnostic inference prompt \(p\) = "_a photo of a \(V_{*}\) urban scene_" (first row), the performance increases by +4.3%. This demonstrates that grounding DATUM with things/objects of interest leads to more meaningful scene composition, and provides more information to the segmentation model. Using the _stuff_ classes (_e.g_., _sky_, _building_) in the inference prompts (third row) results in a slightly lower performance compared to using only _things_ classes (second row).
Given that the target dataset Cityscapes is captured with sensors mounted on a car, we make an attempt to tailor the inference prompts for such a use-case. Specifically, we use the inference prompt \(p\) = "_a photo of a \(V_{*}\) seen from the dash cam_". We notice that usage of such prompt does not bring any improvement, and rather leads to worsened performance.
Next, we make the training prompt more suited for a driving scenario by using \(p\) = "_a photo of a \(V_{*}\) scene from a car_" and experiment with some inference prompts that are essentially nuanced variations of the training prompt. The results are reported in the lower part of Tab. 4. We observe that adding the phrase "_scene from a car_" to the training prompt has no positive impact in the training of DATUM. It is worth noting that the retention of the _prior preservation_ loss caused our best result to decrease from 57.2% to 54.8%.
**Impact of generated dataset cardinality.** Here, we examine the impact of the cardinality of the target dataset generated by DATUM using a single real target image (_i.e_., one-shot) on the performance of the segmentation model. In Fig. 5 we plot the mIoU from DAFormer versus the generated dataset size and also compare with training on a real target dataset of the same cardinality. We observe that having the same quantity of real target samples leads to better performance than purely synthetic data. This is expected as real data always contains more targeted information than synthetic data. However, one must appreciate the fact that having 1000 synthetic samples leads to better performance than having 10 real samples, which can be difficult to collect in some applications. Thus, our DATUM is most effective when working with a very small budget of real target data.
## 5 Conclusions
We proposed a synthetic data generation method DATUM for the task of one-shot unsupervised domain adaptation, that uses a single image from the target domain to personalize a pre-trained text-to-image diffusion model. The personalization leads to a synthetic target dataset that faithfully depicts the style and content of the target domain, whereas the text-conditioning ability allows for generating diverse scenes with desired semantic objects. When pairing DATUM with modern UDA methods, it outperforms all state-of-the-art OSUDA methods, thus paving the path for future research in this few-shot learning paradigm.
**Acknowledgements.** This paper has been supported by the French National Research Agency (ANR) in the framework of its JCJC program (Odace, project ANR-20-CE23-0027). This work was granted access to the HPC resources of IDRIS under the allocation AD011013071 made by GENCI.
| **Training prompt** | **Inference prompt** | **classes** | **mIoU** |
| --- | --- | --- | --- |
| “a photo of a \(V_{*}\) urban scene” | “a photo of a \(V_{*}\) urban scene” | – | 52.9 |
| | “a photo of a \(V_{*}\) [CLS]” | things | **57.2** |
| | “a photo of a \(V_{*}\) [CLS]” | things + stuff | 56.7 |
| | “a photo of a \(V_{*}\) [CLS] seen from the dash cam” | things | 55.5 |
| “a photo of a \(V_{*}\) scene from a car” | “a photo of \(V_{*}\) scene from a car” | things | 53.0 |
| | “a photo of a \(V_{*}\) [CLS]” | things | 56.8 |
| | “a photo of [CLS]” | things | 55.4 |
Table 4: Impact of training and inference prompts on the mIoU.
Figure 4: Impact of number of shots (#TS) on the mIoU (in %)
Figure 5: Impact of the cardinality of the generated target dataset on the mIoU. It is compared with adaptation on the real data
|
2309.05934 | MatSciML: A Broad, Multi-Task Benchmark for Solid-State Materials
Modeling | We propose MatSci ML, a novel benchmark for modeling MATerials SCIence using
Machine Learning (MatSci ML) methods focused on solid-state materials with
periodic crystal structures. Applying machine learning methods to solid-state
materials is a nascent field with substantial fragmentation largely driven by
the great variety of datasets used to develop machine learning models. This
fragmentation makes comparing the performance and generalizability of different
methods difficult, thereby hindering overall research progress in the field.
Building on top of open-source datasets, including large-scale datasets like
the OpenCatalyst, OQMD, NOMAD, the Carolina Materials Database, and Materials
Project, the MatSci ML benchmark provides a diverse set of materials systems
and properties data for model training and evaluation, including simulated
energies, atomic forces, material bandgaps, as well as classification data for
crystal symmetries via space groups. The diversity of properties in MatSci ML
makes the implementation and evaluation of multi-task learning algorithms for
solid-state materials possible, while the diversity of datasets facilitates the
development of new, more generalized algorithms and methods across multiple
datasets. In the multi-dataset learning setting, MatSci ML enables researchers
to combine observations from multiple datasets to perform joint prediction of
common properties, such as energy and forces. Using MatSci ML, we evaluate the
performance of different graph neural networks and equivariant point cloud
networks on several benchmark tasks spanning single task, multitask, and
multi-data learning scenarios. Our open-source code is available at
https://github.com/IntelLabs/matsciml. | Kin Long Kelvin Lee, Carmelo Gonzales, Marcel Nassar, Matthew Spellings, Mikhail Galkin, Santiago Miret | 2023-09-12T03:08:37Z | http://arxiv.org/abs/2309.05934v1 | # MatSciML: A Broad, Multi-Task Benchmark
###### Abstract
We propose MatSci ML, a novel benchmark for modeling **M**aterials **S**cience using **M**achine **L**earning methods focused on solid-state materials with periodic crystal structures. Applying machine learning methods to solid-state materials is a nascent field with substantial fragmentation largely driven by the great variety of datasets used to develop machine learning models. This fragmentation makes comparing the performance and generalizability of different methods difficult, thereby hindering overall research progress in the field. Building on top of open-source datasets, including large-scale datasets like the OpenCatalyst, OQMD, NOMAD, the Carolina Materials Database, and Materials Project, the MatSci ML benchmark provides a diverse set of materials systems and properties data for model training and evaluation, including simulated energies, atomic forces, material bandgaps, as well as classification data for crystal symmetries via space groups. The diversity of properties in MatSci ML makes the implementation and evaluation of multi-task learning algorithms for solid-state materials possible, while the diversity of datasets facilitates the development of new, more generalized algorithms and methods across multiple datasets. In the multi-dataset learning setting, MatSci ML enables researchers to combine observations from multiple datasets to perform joint prediction of common properties, such as energy and forces. Using MatSci ML, we evaluate the performance of different graph neural networks and equivariant point cloud networks on several benchmark tasks spanning single task, multitask, and multi-data learning scenarios. Our open-source code is available at [https://github.com/Intellabs/matsciml](https://github.com/Intellabs/matsciml).
## 1 Introduction
Solid-state materials provide the foundation for a diverse set of modern technologies, such as computer hardware, batteries, biomedical implants, and catalysts. Discovering, modeling, evaluating, and understanding solid-state materials will therefore continue to play a significant role in complex technological challenges of the future, such as clean energy and transportation, sustainable agriculture, and personalized healthcare. The ability to accurately and efficiently model materials properties, as well as complex materials behavior under diverse conditions, remains a major challenge in materials design. As such, machine learning (ML) methods have been increasingly applied to develop property prediction models that exhibit significantly greater computational efficiency compared to traditional physics-based methods,
such as density functional theory (DFT) [5; 30]. Given this challenge, a variety of deep learning models and methods have been proposed to solve concrete challenges involving DFT data [19; 13; 6]. Generally, the research has focused on datasets targeted for concrete applications, such as the OpenCatalyst Dataset (OCP)[5; 43] for catalytic materials and Materials Project (MP) [25] for a broad range of solid-state materials with relevance to clean energy. Many of the aforementioned methods often focus on a distinct set of properties (e.g., energy and force prediction), which often have limited use for practical applications [15].
Given the current state-of-the-art, there is a need for more comprehensive ways to evaluate the modeling capabilities of machine learning models for solid-state materials. Evaluations should contain both a broader range of materials systems and their associated properties with the goal of enabling the design of more generalizable and versatile models. Based on the success of benchmarks inspiring research advances in computer vision [10], natural language processing [44; 41], molecular modeling [47; 4; 17; 23] and protein modeling [50] amongst other fields, we develop a benchmark for **M**aterials **S**cience modeling using **M**achine **L**earning modeling (**MatSci ML**) 2 targeting periodic crystal structures. MatSci ML brings the following capabilities and features towards comprehensive solid-state materials benchmarking:
Footnote 2: [https://github.com/IntelLabs/matsciml](https://github.com/IntelLabs/matsciml)
1. **Data Diversity:** MatSci ML integrates multiple open-source datasets, leading to a broader diversity of materials structures and properties covered by the benchmark as described in Section 3.
2. **Multi-Task Training:** MatSci ML includes support for multi-task training methods across multiple regression and classification targets for ML models. This enables researchers to leverage multi-task training methods for solid-state materials modeling on both graph-based and point cloud based representations as shown in Section 5.
3. **Multi-Dataset Integration:** MatSci ML enables joint training of machine learning models on heterogeneous data from different datasets in a unified manner. This facilitates and encourages research towards generalizable, efficient, and accurate ML models and methods for solid-state materials as described in Section 5.
To the best of our knowledge, MatSci ML is the first benchmark to enable multi-task and multi-dataset learning for solid-state materials. We describe related work in Section 2, introduce benchmark tasks in Section 3, formally define all learning settings in Section 4, and provide an analysis of their performance in Section 5.
## 2 Related Work
Research at the intersection of materials science and machine learning has been growing in recent years [31; 41; 45]. While adjacent research work in molecular modeling has seen significant increases in recent years, modeling of solid-state materials with periodic crystal structures has been comparatively underexplored.
Molecular Modeling:Applying machine learning to predict properties and design molecules has been an active area of research in recent years. This research has spanned many different dimensions including the development of benchmarks for property prediction [32; 35; 1; 14; 22; 47] and molecular design [47; 4; 17; 23; 1]. This, in turn, has facilitated the development of a diverse set of machine learning methods for molecular property prediction, many of which are based on graph neural networks and geometric deep learning models that include various types of useful inductive biases [20; 39; 18; 16; 19]. Additionally, there has also been a significant amount of research exploring graph-based molecular generation algorithms whose performance is evaluated on the aforementioned benchmarks [53; 52; 3; 40; 26; 56]. Solid-state materials differ significantly from molecules given their periodic crystal structure, which greatly affects their properties and behavior. This periodic structure creates the need for different representations and modeling methods that resolve greater degrees of symmetries and geometrical features found in solid-state materials [8].
Solid-State Materials Modeling:Compared to molecular structures, the study of solid-state materials has seen significantly less ML research activity. While there has been some work on graph-based property prediction for solid-state materials [51, 7, 6, 27, 48, 9, 5, 30], many papers evaluate their proposed methods on different datasets making it difficult to compare their overall performance. This tendency also holds in research work on the generation of solid-state crystal structures [49, 21, 46] where each method is evaluated according to the setting the authors propose. While there has been some work aiming to standardize the evaluation of machine learning models for property prediction [12], much of this work has been limited to small-scale datasets. Additionally, unlike for molecules where graph and text-based representations have been quite successful, descriptive and scalable representations for crystal structures remain an active area of research [8]. Overall, this creates a need for more comprehensive benchmarks for solid-state materials with large and chemically diverse datasets that enable more thorough studies of learned representations of solid-state materials, in addition to the development of new machine learning methods.
## 3 Benchmark Tasks
The MatSci ML benchmark comprises 10 tasks across 3 different task groups. MatSci ML leverages the Open MatSci ML Toolkit [30] as the backbone platform with basic primitives, including support for graph and point-cloud based data structures, as well as modeling capabilities to support the diverse set of tasks. Building on top of the Open MatSci ML Toolkit enables the addition of new tasks and datasets in a modular manner for desired future capabilities ranging from generative modeling to ML potentials for dynamical simulations. We outline the different task definitions, data sources, dataset statistics and evaluation metrics in Table 1 and will describe them in detail in subsequent sections.
### Energy Prediction Tasks
Energy prediction is one of the most common property prediction tasks in both molecular and solid-state crystal structure modeling, and is generally included in most relevant datasets [5, 25, 2, 32]. Energy is a critical property of a material system that indicates how stable the material system is. Moreover, the energy can be used to understand many different aspects of materials behavior, and has also inspired methods development in machine learning, such as "energy-based learning" methods. The ubiquity of energy labels in various datasets allows us to combine multiple datasets in a _multi-data_ setting. The collection of data for energy prediction spans \(\sim\)1.5 million bulk materials from various sources and relaxation trajectory data from diverse adsorbate + surface + bulk combinations from OpenCatalyst.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Task** & **Task Category** & **Data Source** & **\#Train** & **\#Validation** & **\#Test** & **Metric** \\ \hline \multicolumn{6}{c}{**Energy Prediction Tasks**} \\ \hline
**S2EF** & Property Reg. & OpenCatalyst Project [5] & 2,000,000 & 1,000,000 & - & MSE \\
**IS2RE** & Property Reg. & OpenCatalyst Project [5] & 500,000 & 25,000 & - & MSE \\
**Formation Energy** & Property Reg. & Materials Project [25] & 108,159 & 30,904 & 15,456 & MSE \\
**LIPS** & Property Reg. & LiPS [2] & 17,500 & 5,000 & 2,500 & MSE \\
**OQMD** & Property Reg. & OQMD [28] & 818,076 & 204,519 & - & MSE \\
**NOMAD** & Property Reg. & NOMAD [11] & 111,056 & 27,764 & - & MSE \\
**CMD** & Property Reg. & Carolina Materials Database [55] & 171,548 & 42,887 & - & MSE \\ \hline \multicolumn{6}{c}{**Force Prediction Tasks**} \\ \hline
**S2EF** & Property Reg. & OpenCatalyst Project [5] & 2,000,000 & 1,000,000 & - & MAE \\
**LiPS** & Property Reg. & LiPS [2] & 17,500 & 5,000 & 2,500 & MAE \\ \hline \multicolumn{6}{c}{**Property Prediction Tasks**} \\ \hline
**Material Bandgap** & Property Reg. & Materials Project [25] & 108,159 & 30,904 & 15,456 & MSE \\
**Fermi Energy** & Property Reg. & Materials Project [25] & 108,159 & 30,904 & 15,456 & MSE \\
**Stability** & Property Class. & Materials Project [25] & 108,159 & 30,904 & 15,456 & ACC \\
**Space Group** & Property Class. & Materials Project [25] & 108,159 & 30,904 & 15,456 & ACC \\ \hline \hline \end{tabular}
\end{table}
Table 1: Benchmark task descriptions. Each task, along with its category, the source dataset, the size of each split, and the evaluation metric, is shown below. _Abbr._, Reg.: regression; Class.: classification; ACC: accuracy; MSE: mean-square error; MAE: mean absolute error
Structure to Energy & Forces (S2EF) from OCP [5] requires prediction of the adsorption energy of a molecular adsorbate on a catalyst surface. We directly adopt the dataset splits from OCP containing a training set, an in-distribution validation set, and a set of out-of-distribution validation sets based on different molecular adsorbates or catalysts. Accurate prediction of adsorbate-surface interactions is necessary for effective materials design in many applications, including catalysts and semiconductors.
Initial Structure to Relaxed Energy (IS2RE) from OCP [5] involves the prediction of relaxed adsorption energy of a molecular adsorbate on a solid-state catalyst surface. We directly adopt the dataset splits from OCP containing a training set, an in-distribution validation set, and a set of out-of-distribution validation sets based on different molecular adsorbates or catalysts. Predicting the relaxed adsorption energy of joint molecular and solid-state materials from an initial structure has substantial impact on the design of catalytic materials which can help accelerate a variety of chemical reactions. This task can help understand the influence of solid-state material composition and structure, as well as its interactions with molecules.
Formation Energy from MP [25] involves predicting the energy of the material relative to its constituents, as a function of the relative three-dimensional arrangement of atoms in the unit cell. MP normalizes the formation energy based on the stoichiometry of the material (e.g. H\({}_{2}\)O, SiO\({}_{2}\)) in units of eV/atom. We construct a dataset split of MP where the representation of different crystal structures is consistent across training, validation, and test sets. Formation energy, along with entropy, determines the thermodynamic stability of a material, and thus how feasible it is for the material to be experimentally synthesized and what applications it may be suitable for. This task could be applicable to materials design of bulk solid-state materials, as opposed to the exposed surfaces found in OCP.
LiPS Energy from the LiPS dataset [2] involves the prediction of the energy of LiPS material structures as they evolve dynamically relative to a reference configuration, in units of meV/atom. We construct a random dataset split based on the original dataset similar to prior work [15]. Reliably accurate predictions of the energy of a configuration, meaning atoms in space, are needed for ML potentials used in simulations of materials under physically relevant conditions, such as room temperature and atmospheric pressure.
OQMD from the OQMD dataset [28] involves the prediction of the formation energy of a material structure measured in eV/atom based on DFT calculations. We construct a random dataset split based on the 1,022,595 bulk material structures in the dataset with a 20% validation split. OQMD represents the largest collection of bulk material formation energy calculations, including more samples than Materials Project, NOMAD, and CMD combined.
NOMAD from the NOMAD dataset [11] involves the prediction of the formation energy of a material structure measured in eV/atom based on crowdsourced calculations. We construct a random dataset split based on the 138,820 bulk material structures in the dataset with a 20% validation split.
CMD from the Carolina-MatDB dataset [55] involves the prediction of the formation energy of a material structure measured in eV/atom based on DFT calculations of structures discovered by machine learning methods. We construct a random dataset split based on the 214,435 bulk material structures in the dataset with a 20% validation split.
### Force Prediction Tasks
Many workflows for machine-learned potentials harness automatic differentiation available in modern ML frameworks to produce a conservative potential energy function \(U\), linked to the force \(\vec{f}\)_via_ the gradient: \(\vec{f}=-\nabla U\). While this conservative formulation could be important for fine-scale thermodynamic stability of simulations, for some applications learning to predict forces independently from energy--either in a rotation-equivariant or non-rotation-equivariant way--may also suffice. All models described here derive forces from the gradient of a conservative potential energy.
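For intuition, a minimal sketch of this conservative formulation is given below (illustrative function and variable names, not the benchmark's training code): the energy is predicted by any differentiable model, and the forces follow from automatic differentiation.

```python
import torch

def energy_and_forces(model, positions, atom_types):
    """Evaluate a conservative potential U and derive forces as f = -dU/dx.

    `model` is any callable mapping (positions, atom_types) to per-structure
    energies; the names here are illustrative placeholders.
    """
    positions = positions.clone().requires_grad_(True)   # track gradients w.r.t. coordinates
    energy = model(positions, atom_types).sum()          # scalar total energy
    # create_graph=True keeps the graph so a force loss can also be backpropagated
    forces = -torch.autograd.grad(energy, positions, create_graph=True)[0]
    return energy, forces
```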
Structure to Energy & Forces (S2EF) from OCP [5] includes both energy and force labels; the latter represents the force exerted on each atom within the molecular adsorbate in units of eV/Å. We adopt the same dataset splits found in OCP. Predicting forces on each atom for a snapshot of particle configurations is needed for structure relaxation and other optimization methods used to find low-energy states of materials systems. Accurate force predictors also provide concrete opportunities to incorporate machine learning models into classical materials modeling workflows such as molecular dynamics simulations [2, 6, 15].
LiPS Forces from the LiPS dataset [2] includes per-atom forces (in meV/Å) based on a random split. Similar to S2EF, predicting the atomic forces of a system in a generalizable way would enable applying machine learning to further understand materials behavior. In contrast to S2EF, this dataset comprises many frames of a single Li-ion system, as opposed to a diverse set of compositions and structures.
### Property Prediction Tasks
For all Materials Project (MP) [25] property prediction tasks in this section, we apply the same dataset split as for the formation energy described above. In this case, the representation of different crystal structures is consistent across training, validation, and test sets.
Material Bandgap involves the prediction of the bandgap of a solid-state material in units of eV, corresponding to the amount of energy required to promote a valence electron into the conduction band. Larger bandgaps imply low electronic conductivity of the material (e.g. insulators), while small bandgaps imply large electronic conductivity (e.g. metals), with many materials being somewhat conductive (e.g. semiconductors). Predicting the bandgap of a material is critical for many electronic materials and their applications, such as semiconductors for computer hardware and photovoltaics. This task aims to understand how the design (e.g. composition and configuration) of crystal structures affects the bandgap.
Fermi Energy is the highest occupied energy level of a material at absolute zero temperature measured in eV, which correlates with the conductivity of a material. The Fermi energy generally represents the halfway point between the valence and conduction band and is thereby closely related to the material bandgap. Predicting the Fermi energy can help understand the electric properties of a given material, which can in turn be used to engineer the conductivity characteristics of materials for new applications.
Stability is a binary classification task to predict whether a given material configuration is thermodynamically stable at absolute zero. Understanding material stability is particularly relevant for evaluating and conditioning generative models, e.g., preferentially sampling from stable configurations of chemical space should result in experimentally viable materials.
Space Group is a multiclass classification task to predict which of the 230 possible crystallographic space groups a given material belongs to. Predicting the space group requires embedding the effect of symmetry operations (e.g. rotation and exchange) of a solid-state structure, which ultimately influences its physical properties and stability.
## 4 Training Methods
We apply a set of deep learning models and training methods to showcase the capabilities of the benchmark and derive some interesting insights. While we believe these baselines are representative of the general capabilities of deep learning methods for materials modeling, our experiments are unlikely to achieve the best possible modeling performance. As such, we encourage future work to leverage the benchmark to improve upon currently available methods, as well as further research into the development of new methods.
### Baseline Models
The deep learning model architectures used in this paper are outlined in Table 2 and span different model design frameworks described below.
Graph Neural Networks (GNNs) encode the material structure as a graph where the atoms generally represent the nodes and the edges are the connections between the atoms. Unlike molecular structures, solid-state materials do not have a canonical way to encode bonds between different atoms. As such, distance-based radius graphs are used to construct the graph of the corresponding material. We apply MegNet [7] across all tasks in MatSci ML to understand the performance of domain-specific graph neural networks.
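As a rough illustration of the radius-graph construction described above, the following sketch connects all atom pairs closer than a cutoff; periodic images of the unit cell are ignored here, and the cutoff value is an arbitrary placeholder.

```python
import numpy as np

def radius_graph(positions: np.ndarray, cutoff: float = 6.0):
    """Return (src, dst) edge indices for atom pairs closer than `cutoff`.

    `positions` is an (N, 3) array of Cartesian coordinates; periodic images
    of the unit cell are ignored in this sketch.
    """
    diff = positions[:, None, :] - positions[None, :, :]   # (N, N, 3) displacements
    dist = np.linalg.norm(diff, axis=-1)                   # (N, N) pairwise distances
    src, dst = np.where((dist < cutoff) & (dist > 0.0))    # drop self-loops
    return src, dst

# Three atoms on a line, 2 apart: with cutoff=3 only nearest neighbours are connected
src, dst = radius_graph(np.array([[0.0, 0, 0], [2.0, 0, 0], [4.0, 0, 0]]), cutoff=3.0)
```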
Equivariant Graph Neural Networks encode rotational equivariance into their architecture, which is a useful inductive bias for materials property prediction. Regular GNNs do not have rotational equivariance or scalar invariance by default; these properties have to be intentionally encoded into the architecture. We apply E(n)-GNN [38] across all tasks in MatSci ML to understand the performance of equivariant graph neural networks.
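The sketch below outlines the core updates of a single E(n)-equivariant layer in the spirit of Satorras et al. [38], using a simplified dense (all-pairs) formulation with illustrative layer sizes; it is not the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

class EGNNLayer(nn.Module):
    """Simplified E(n)-equivariant update: messages depend only on invariants
    (node features and squared distances), while coordinates are updated along
    relative position vectors, preserving rotation/translation equivariance."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * dim + 1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.coord_mlp = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, 1))
        self.node_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU(), nn.Linear(dim, dim))

    def forward(self, h, x):
        # h: (N, dim) invariant node features, x: (N, 3) positions
        rel = x[:, None, :] - x[None, :, :]                    # (N, N, 3) relative vectors
        dist2 = (rel ** 2).sum(-1, keepdim=True)               # (N, N, 1) squared distances
        hi = h[:, None, :].expand(-1, h.size(0), -1)
        hj = h[None, :, :].expand(h.size(0), -1, -1)
        m = self.edge_mlp(torch.cat([hi, hj, dist2], dim=-1))  # messages m_ij
        x = x + (rel * self.coord_mlp(m)).mean(dim=1)          # equivariant coordinate update
        h = h + self.node_mlp(torch.cat([h, m.sum(dim=1)], dim=-1))
        return h, x
```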
Short-Range Equivariant Models operate on a point cloud data structure where local neighborhoods in the point cloud receive the greatest importance in parameter updates of the neural network. The additional flexibility of the point cloud data structure also helps promote localized representations of relevant elements in the materials structure through targeted mathematical formulations, such as Clifford algebras [42, 37, 36], which facilitate efficient model training. We apply GALA [42] across all tasks in MatSci ML to understand the performance of short-range equivariant networks.
### Single Task vs Multi-Task Learning
Throughout this paper, we refer to a "task" as a mapping from a given set of inputs (which may come from a specific dataset) to a desired single outcome (e.g., classification, regression) encapsulated by a single loss function. _Multi-task training_ refers to training a model on more than one type of loss function, such as regression _and_ classification jointly performed on the MP dataset. _Multi-data training_ refers to training a model on a similar type of label across multiple data sources, such as energy prediction on diverse materials drawn jointly from the OCP dataset and MP dataset. Next, we outline formal definitions of the three methods.
**Single Task Learning** is a common way to approach solid-state materials modeling by training a model exclusively on one task at a time. In this case, the model learns a mapping function (\(f\)) between input (\(x\)) and output (\(y\)) where \(x\in t_{n}\in\mathcal{T}\) is drawn from a pool of tasks \(\mathcal{T}\). The learning objective is summarized by a single loss \(\mathcal{L}_{t}\) that is minimized and may include multiple regression _or_ classification targets from the same dataset.
**Multi-Task Learning** aims to learn a mapping function (\(f\)) between input (\(x\)) and output (\(y\)) from different tasks \(t_{n}\), i.e. \(x=[x_{t_{1}},x_{t_{2}},...x_{t_{n}}]\) and \(y=[y_{t_{1}},y_{t_{2}},...y_{t_{n}}]\). In this paper, we study multi-task learning using a joint encoder with a predetermined model architecture
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Model** & **Model Type** & **Input Layer** & **Hidden Layers** & **\#Params.** \\ \hline \multicolumn{5}{c}{**Equivariant Neural Networks**} \\ \hline
**E(n)-GNN [38]** & GNN & \(\mathbf{X},\mathbf{H}\) & linear (hidden dim.:128) + ReLU & 700K \\ \hline \multicolumn{5}{c}{**Graph Neural Networks**} \\ \hline
**MegNet [7]** & GNN & \(\mathbf{X},\mathbf{E},\mathbf{H}\) & linear (hidden dim.:128) + ReLU & 1.3M \\ \hline \multicolumn{5}{c}{**Short-Range Equivariant Networks**} \\ \hline
**GALA [42]** & Transformer & \(\mathbf{X},\mathbf{H}\) & linear (hidden dim.:128) + SiLU & 2.0 M \\ \hline \hline \end{tabular}
\end{table}
Table 2: Baseline model descriptions. _Abbr._, Params.: parameters; feats.: features; dim.: dimension; GNN: Graph Neural Network; \(\mathbf{X}\): positions, \(\mathbf{E}\): edge features, \(\mathbf{H}\): rotation-invariant features like atom types.
followed by task-specific output heads. To remain within reasonable compute budgets, we perform experiments on two tasks at a given time with a balanced loss between the two tasks: \(\mathcal{L}_{\theta}=\mathcal{L}_{t_{1}}+\mathcal{L}_{t_{2}}\). In this setting, both losses backpropagate gradients to the joint encoder in addition to their respective output heads. Additionally, we perform multi-task learning using PCGrad [54] which aims to minimize gradient conflicts between different tasks.
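The key ingredient of PCGrad is a gradient-surgery step: when two task gradients conflict (negative inner product), each is projected onto the normal plane of the other before summation. A minimal two-task sketch over flattened gradients is shown below; the full algorithm additionally shuffles task order and applies the projection per shared parameter.

```python
import torch

def pcgrad_combine(g1: torch.Tensor, g2: torch.Tensor) -> torch.Tensor:
    """Combine two flattened task gradients with a PCGrad-style projection.

    If g1 and g2 conflict (dot product < 0), remove from each the component
    along the other before summing; otherwise fall back to the plain sum.
    """
    g1p, g2p = g1.clone(), g2.clone()
    if torch.dot(g1, g2) < 0:
        g1p = g1 - torch.dot(g1, g2) / g2.norm() ** 2 * g2   # project g1 off g2
        g2p = g2 - torch.dot(g2, g1) / g1.norm() ** 2 * g1   # project g2 off g1
    return g1p + g2p
```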
**Multi-Data Learning** aims to learn a mapping function (\(f\)) between input (\(x\)) and output (\(y\)) from different datasets \(d_{n}\), i.e. \(x=[x_{d_{1}},x_{d_{2}},...x_{d_{n}}]\) and \(y=[y_{d_{1}},y_{d_{2}},...y_{d_{n}}]\). In this case, the output (\(y\)) is a single property found among each of the datasets, such as a measurement of energy or atomic forces. Similar to multi-task learning, we study multi-data learning using a joint encoder with a predetermined model architecture followed by task-specific output heads. We perform experiments on two datasets at a time with a balanced loss between the two tasks: \(\mathcal{L}_{\theta}=\mathcal{L}_{d_{1}}+\mathcal{L}_{d_{2}}\).
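To make the shared-encoder setup concrete, the sketch below (with illustrative heads and layer sizes, not our exact architectures) shows a joint encoder feeding two task-specific output heads trained with the balanced additive loss; the multi-data case follows the same pattern, with each dataset's batch routed to its own head.

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Shared encoder with task-specific output heads (illustrative sizes)."""

    def __init__(self, in_dim: int = 128, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        self.energy_head = nn.Linear(hidden, 1)      # regression head (e.g. formation energy)
        self.stability_head = nn.Linear(hidden, 2)   # classification head (e.g. stable / unstable)

    def forward(self, x):
        z = self.encoder(x)
        return self.energy_head(z), self.stability_head(z)

model = MultiTaskModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 128)                      # placeholder batch of pooled structure embeddings
y_energy = torch.randn(32, 1)
y_stable = torch.randint(0, 2, (32,))

pred_e, pred_s = model(x)
loss = nn.functional.mse_loss(pred_e, y_energy) + nn.functional.cross_entropy(pred_s, y_stable)
loss.backward()                               # both losses backpropagate into the shared encoder
opt.step()
```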
## 5 Experiments
We perform various experiments across the different models and methods described in Section 4, including training all models for single-task and multi-task learning shown in Table 3 and Table 4, respectively, as well as multi-data learning for E(n)-GNN and MegNet shown in Table 5. We did not perform multi-data learning for GALA given the increased computational cost of training the model on the large combined dataset, especially S2EF and IS2RE, compared to the other methods. Our general results also indicate that GALA underperforms compared to other models, suggesting that it would be more productive to focus multi-data experiments on E(n)-GNN and MegNet.
### Single-Task Learning
We perform single task learning for all tasks in MatSci ML with the results summarized in Table 3. For additional reference, we add state-of-the-art results for OpenCatalyst OC-20 data based on the public OC20 leaderboard. The results on the leaderboard represent test data splits that are only available through the leaderboard interface, while our results are based on the publicly available validation data splits. For both Materials Project (MP) and LiPS, we create new dataset splits which make it difficult to compare to existing results reported in the literature. The results from Table 3 indicate that:
**Graph neural networks perform well across all tasks.** E(n)-GNN outperforms all other models across the energy prediction tasks, while MegNet performs best for force prediction and MP-based tasks. Both E(n)-GNN and MegNet outperform GALA across all tasks in
\begin{table}
\begin{tabular}{l|l||c|c|c|c} \hline \multirow{2}{*}{**Task**} & \multirow{2}{*}{**Metric**} & **Equivariant Neural Network** & **Graph Neural Network** & \multirow{2}{*}{**Point Cloud Network**} & \multirow{2}{*}{**Literature SOTA**} \\ \cline{3-3} \cline{5-6} & & **E(n)-GNN** & & & **MegNet** & **GALA** \\ \hline \hline \multicolumn{6}{c}{**Energy Prediction**} \\ \hline
**S2EF** & MSE & 0.826 & 1.252 & 6.611 & 0.227 (Equiformer [29]) \\
**IS2RE** & MSE & 0.186 & 0.229 & 5.133 & 0.300 (Equiformer [29]) \\
**MP** & MSE & 0.045 & 0.100 & 0.32 & - \\
**LiPS** & MSE & 0.579 & 0.989 & 0.985 & - \\
**OQMD** & MSE & 0.244 & 0.276 & - & - \\
**NOMAD** & MSE & 0.209 & 0.215 & - & - \\
**CMD** & MSE & 0.029 & 0.141 & - & - \\ \hline \multicolumn{6}{c}{**Force Prediction**} \\ \hline
**S2EF** & MAE & 0.957 & 0.186 & 567.4 & 0.0138 (Equiformer [29]) \\
**LiPS** & MAE & 0.443 & 0.443 & 1.078 & - \\
**Band** & MSE & 0.504 & 0.497 & 1.234 & - \\
**Fermi** & MSE & 0.859 & 0.849 & 3.506 & - \\ \hline
**Stable** & ACC & 79.9 & 83 & 77.2 & - \\
**Space** & ACC & 29.8 & **31.3** & 20.1 & - \\ \hline \end{tabular}
\end{table}
Table 3: Benchmark results on single-task learning. We report the validation set performance for each experiment and highlight the **best** performance among all models; **SOTA** model performance from literature is added where applicable; “-” indicates a non-applicable setting. Graph-based models perform better than point cloud based models on single task learning.
MatSci ML. This suggests that graph-based data structures provide a useful inductive bias for modeling solid-state materials although a more thorough study is required to further confirm this observation. The reported results from the OC20 leaderboard indicate that the evaluated models are far from SOTA performance in S2EF, both for energy and forces, but may be competitive for IS2RE.
**Space group classification is a difficult task for all models.** All evaluated models perform poorly on space group classification with MegNet reaching an accuracy of 31.3%. The difficulty associated with this task is twofold: the natural imbalance of class labels owing to the fact that materials of certain space groups are more prevalent than others, and that symmetry operations are hierarchical, thus requiring models to differentiate between groups with similar bases. The latter reinforces prior findings that models and representations which specifically include higher-order symmetry could be useful for solid-state materials [27].
### Multi-Task Learning
We then probe the multi-task learning scenario based on property prediction tasks from Materials Project data splits spanning both regression and classification. MP provides the greatest diversity of labels for evaluating different property prediction targets suitable for the multi-task setting. We study the multi-task performance under the settings described in Section 4.2 with additive task losses for joint backpropagation, as well as for PCGrad [54]. Based on the results in Table 4, we observe:
**Multi-task learning generally improves task performance on individual tasks.** Task performance in the multi-task setting generally improves across all of the tasks studied. This is particularly true for the regression tasks (bandgap and fermi energy) and less so for the classification tasks where performance remains similar to single-task learning. This suggests that many of the tasks in MP have a high degree of correlation leading to overall better learning.
**PCGrad offers small improvements in multi-task learning.** The results across all three models studied indicate that PCGrad provides little performance improvement compared to multi-task learning with additive losses. This further reinforces the idea that the tasks in MP have a high degree of correlation given that one of the primary goals of PCGrad is to resolve gradient conflicts between different tasks. Hence, a low degree of gradient conflicts in highly correlated tasks leads to only small performance gains.
### Multi-Data Learning
We perform multi-data learning for energy and force prediction across all different datasets. Based on the results shown in Table 5, we observe:
**IS2RE energy performance worsens with multi-data learning.** IS2RE energy prediction worsens in the multi-data setting for both E(n)-GNN and MegNet. We hypothesize that this is due to
\begin{table}
\begin{tabular}{l|c||c c c|c c|c c|c c c|c c} \hline \hline \multirow{2}{*}{**Task**} & \multirow{2}{*}{**Metric**} & \multicolumn{4}{c|}{**E(n)-GNN**} & \multicolumn{4}{c|}{**MegNet**} & \multicolumn{4}{c|}{**GALA**} \\ \cline{3-13} & & +**Band** & +**Fermi** & +**Stable** & +**SG** & +**Band** & +**Fermi** & +**Stable** & +**SG** & +**Band** & +**Fermi** & +**Stable** & +**SG** \\ \hline \hline \multicolumn{13}{c}{**Multitask Training (Additive Losses)**} \\ \hline
**Band** & MSE & 0.504 & 0.389 & 0.314 & 0.43 & 0.497 & 0.454 & 0.368 & 0.585 & 1.23 & 0.622 & 0.51 & 0.54 \\
**Fermi** & MSE & 0.211 & 0.859 & 0.25 & 0.499 & 0.284 & 0.849 & 0.263 & 0.686 & 0.51 & 0.508 & 0.676 \\ \hline
**Stable** & ACC & **31.6** & 77.4 & 29.9 & 18.1 & 80.6 & 75.9 & 83.0 & 76.9 & **78.0** & 77.0 & 77.2 & 76.9 \\
**Space** & ACC & 30.6 & 27.5 & 30.9 & 29.8 & 30.8 & 31.1 & 21.6 & 32.3 & 18.2 & 18.4 & 19.9 & 20.1 \\ \hline \multicolumn{13}{c}{**PCGrad Training [54]**} \\ \hline
**Band** & MSE & 0.504 & 0.389 & 0.312 & 0.406 & 0.497 & 0.454 & 0.314 & 0.537 & 1.23 & 0.622 & 0.511 & 0.563 \\
**Fermi** & MSE & 0.211 & 0.859 & 0.259 & 0.43 & 0.294 & 0.849 & 0.263 & 0.452 & 0.606 & 3.506 & 0.461 & 0.622 \\ \hline
**Stable** & ACC & **81.9** & 77.7 & 79.9 & 77.7 & 81.3 & 76.9 & 83.0 & 77.4 & 77.0 & 77.1 & 77.2 & 77.4 \\
**Space** & ACC & 30.3 & 27.5 & 30.1 & 29.8 & 23.8 & 25.4 & 26.6 & 31.3 & 19.1 & 22.2 & 23.2 & 20.1 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Benchmark results on Materials Project multi-task learning. We show the best-performing result along with the single-task baseline; every multi-task run that outperforms the single-task baseline is also highlighted. Multi-task learning generally outperforms single-task training on the regression tasks, with only a small performance difference between additive losses and PCGrad.
the fact that IS2RE aims to predict relaxed energy of a given structure, which is different from the single frame prediction present in all other datasets.
S2EF energy performance improves with multi-data learning. S2EF energy prediction generally improves in the multi-data setting for both E(n)-GNN and MegNet with the exception of E(n)-GNN S2EF + IS2RE. This reinforces the notion that S2EF energy prediction is naturally more correlated with the energy labels in MP and LiPS given that all datasets evaluate energy at the given frame, as opposed to IS2RE which evaluates energy for a final relaxed state--methods akin to \(\Delta\)-ML [33] may be required to bridge this gap.
MP and LiPS see varied results in multi-data learning. LiPS energy performance remains relatively stable for MegNet compared to the single-task performance and worsens for E(n)-GNN. MP energy prediction generally shows improvement when combined with S2EF and deterioration when combined with LiPS. This generally indicates that MP and LiPS are not very well correlated. MP improvements for MegNet in combination with S2EF and IS2RE may indicate that the model is able to acquire more generalized knowledge on the larger datasets, which would have to be confirmed with more thorough studies.
Force prediction improves in multi-data learning. The improvements in force prediction between S2EF and LiPS further indicate a strong correlation between the tasks, which is also observed in energy prediction.
## 6 Material Generation Pipeline
We applied our Materials Project dataset (described in Appendix B.2) to the generative modeling task using CDVAE [49], a latent diffusion model that trains a VAE on the reconstruction objective with DimeNet++ [18] as an encoder and GemNet-dT [19] as a decoder on the denoising objective. For the sake of numerical stability, we trained and generated samples with 25 or fewer atoms in the structure, resulting in 64,251 training data points, 18,142 for validation, and 9,098 for testing (denoting this subset as mp25). Following the standard hyperparameters reported in Xie et al. [49] (with the only change being a larger decoder cutoff radius of 12 Å to account for larger structures than those in the original datasets), we trained a 5M parameter CDVAE model and sampled 10,000 structures using Langevin dynamics. The results are shown in Table 6.
The results presented in Table 6 expand upon the results in Xie et al. [49], given that the original implementation was only trained on a subset of 20k datapoints from Materials Project.
\begin{table}
\begin{tabular}{l|l|c c c c|c c c|c} \hline \hline \multirow{2}{*}{**Task**} & \multirow{2}{*}{**Metric**} & \multicolumn{4}{c|}{**E(n)-GNN**} & \multicolumn{4}{c|}{**MegNet**} \\ \cline{3-10} & & **+S2EF** & **+IS2RE** & **+MP** & **+LiPS** & **+S2EF** & **+IS2RE** & **+MP** & **+LiPS** \\ \hline \multicolumn{10}{c}{**Energy Prediction**} \\ \hline
**S2EF** & MSE & 0.826 & 0.282 & 0.744 & 0.193 & 1.252 & 0.455 & 0.376 & 0.445 \\
**IS2RE** & MSE & 0.252 & 0.186 & 0.32 & 0.287 & 0.34 & 0.229 & 0.374 & 0.276 \\
**MP** & MSE & 0.044 & 0.064 & 0.045 & 0.385 & 0.077 & 0.086 & 0.100 & 1.038 \\
**LiPS** & MSE & 0.966 & 0.992 & 0.988 & 0.579 & 0.966 & 0.997 & 0.988 & 0.989 \\ \hline \multicolumn{10}{c}{**Force Prediction**} \\ \hline
**S2EF** & MAE & 0.957 & - & - & 0.185 & 0.186 & - & - & 0.177 \\
**LiPS** & MAE & 0.361 & - & - & 0.443 & 0.441 & - & - & 0.443 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Benchmark results on energy+forces multi-dataset learning. We show the best-performing result along with the single-task baseline; every multi-data run that outperforms the single-task baseline is also highlighted. Multi-data training outperforms the single-task baseline in some cases for both models. “-” indicates not applicable for this setting.
## 7 Conclusion
In this work, we introduce MatSci ML, a broad benchmark for applying machine learning models to solid-state materials. To the best of our knowledge, MatSci ML is the first benchmark to enable multi-task and multi-dataset learning for solid-state materials, thereby helping machine learning researchers build more generalizable models that can significantly accelerate the deployment of machine learning tools in the design, discovery, and evaluation of new materials systems. The results in our evaluation also indicate that future work is needed on how to productively combine different large-scale solid-state material datasets to be able to train more performant models. Further avenues for future work include: (i) supporting generative models for crystal structures; (ii) a deeper multi-task study to reveal the opportunities of creating _generalist_ models applicable in many practical downstream applications in the materials science domain. Such generalist models have shown promising results in many machine learning areas, including reinforcement learning [34] and neural algorithmic reasoning [24], with great potential to significantly advance the state of the art.
|
2306.17545 | Geometry and Kinematics of a Dancing Milky Way: Unveiling the Precession
and Inclination Variation across the Galactic Plane via Open Clusters | This Letter presents a study of the geometry and motion of the Galactic disk
using open clusters in the Gaia era. The findings suggest that the inclination
of the Galactic disk increases gradually from the inner to the outer disk, with
a shift in orientation at the Galactocentric radius of approximately 5 to 7
kpc. Furthermore, this study brings forth the revelation that the mid-plane of
the Milky Way may not possess a stationary or fixed position. A plausible
explanation is that the inclined orbits of celestial bodies within our Galaxy
exhibit a consistent pattern of elliptical shapes, deviating from perfect
circularity; however, more observations are needed to confirm this. An analysis
of the vertical motion along the Galactocentric radius reveals that the disk
has warped with precession, and that the line-of-nodes shifts at different
radii, aligning with the results from the classical Cepheids. Although there is
uncertainty for precession/peculiar motion in Solar orbit, after considering
the uncertainty, the study derives a median value of precession rate = 6.8
km/s/kpc in the Galaxy. This value for the derived precession in the outer disk
is lower than those in the literature due to the systematic motion in Solar
orbit (inclination angle = 0.6 deg). The study also finds that the
inclinational variation of the disk is significant and can cause systematic
motion, with the inclinational variation rate decreasing along the Galactic
radius with a slope of -8.9 uas/yr/kpc. Moreover, the derived inclinational
variation rate in Solar orbit is 59.1+-11.2(sample)+-7.7(VZsun) uas/yr, which
makes it observable for high precision astrometry. The all-sky open cluster
catalog based on Gaia DR3 and Galactic precession/inclinational variation fits
as well as Python code related to these fits are available at
https://nadc.china-vo.org/res/r101288/ | Zhihong He | 2023-06-30T11:01:51Z | http://arxiv.org/abs/2306.17545v2 | # Geometry and Kinematics of a Dancing Milky Way:
###### Abstract
This Letter presents a study of the geometry and motion of the Galactic disk using open clusters in the Gaia era. The findings suggest that the inclination \(\theta_{i}\) of the Galactic disk increases gradually from the inner to the outer disk, with a shift in orientation at the Galactocentric radius of approximately \(6\pm 1\) kpc. Furthermore, this study brings forth the revelation that the mid-plane of the Milky Way may not possess a stationary or fixed position. A plausible explanation is that the inclined orbits of celestial bodies within our Galaxy exhibit a consistent pattern of elliptical shapes, deviating from perfect circularity; however, more observations are needed to confirm this. An analysis of the vertical motion along the Galactocentric radius reveals that the disk has warped with precession, and that the line-of-nodes shifts at different radii, aligning with the results from the classical Cepheids. Although there is uncertainty for precession/peculiar motion in Solar orbit, after considering the uncertainty, the study derives a median value of \(\dot{\phi}_{LON}\) = 6.8 km s\({}^{-1}\) kpc\({}^{-1}\)in the Galaxy. This value for the derived precession in the outer disk is lower than those in the literature due to the systematic motion in Solar orbit (\(\theta_{i}\) = 0.6\({}^{\circ}\)). The study also finds that the inclinational variation of the disk is significant and can cause systematic motion, with the variation rate \(\dot{\theta}_{i}\) decreasing along the Galactic radius with a slope of -8.9 \(\mu\)as yr\({}^{-1}\) kpc\({}^{-1}\). Moreover, the derived \(\dot{\theta}_{i}\) in Solar orbit is 59.1 \(\pm\) 11.2\({}_{sample}\)\(\pm\) 7.7\({}_{V_{20}}\)\(\mu\)as yr\({}^{-1}\), which makes it observable for high precision astrometry. The all-sky open cluster catalog based on Gaia DR3 and Galactic precession/inclinational variation fits as well as Python code related to these fits are available at [https://nadc.china-vo.org/res/r101288/](https://nadc.china-vo.org/res/r101288/).
Galaxy: stellar content - star clusters: general -Galaxy: warp
## 1 Introduction
When observing galaxies, astronomers often find that the disk is bent or tilted (Jarrett et al., 2003) - this is known as the warp structure. It is believed that this phenomenon is caused by a variety of sources of gravitational influence, such as interactions with neighboring galaxies or dark matter halos (e.g. Hunter & Toomre, 1969; Sparke & Casertano, 1988; Quinn et al., 1993; Shen & Sellwood, 2006). The motion of stars and gas in a galaxy's disk can cause distortion in its shape by moving in different directions or orbits. To study the warp structure of galaxies, researchers use observations of the positions and movements of stars and gas clouds (e.g. Sancisi, 1976; Bosma, 1981; Thilker et al., 2005) along with simulations and models of galactic dynamics (e.g. Pringle, 1992; Velazquez & White, 1999; Robin et al., 2003; Roskar et al., 2010). A deeper understanding of the warp structure is crucial to comprehend the evolution of galaxies and the underlying processes that drive their formation and transformation in the universe.
The warped structure in the outer disk of our Milky Way has been known for some time, first detected through the observation of HI gas (Kerr, 1957; Burke, 1957). Studies have shown that the vertical angle increases to 3 degrees at a Galactocentric radius of around 16 kpc (Burton, 1988). Similarly, molecular CO clouds observed in the first and second Galactic quadrant have also demonstrated similar warp features in the outer arm (Dame & Thaddeus, 2011; Sun et al., 2015). However, the uncertainty of gas kinematic distances has made it difficult to study the geometric structure of the Galactic warp. It was only through the observation of classical Cepheids (Chen et al., 2019; Skowron et al., 2019, hereafter CCs) in recent years that a clear warp in the outer disk has become measurable. This allowed for the direct measurement of the Galactocentric radius (R\({}_{GC}\)) and scale height of the warp. Findings indicate that both the gas and young star disk warp upwards in the first and/or second quadrants, and downwards in the third and/or fourth quadrants. Moreover, the absolute vertical amplitude reaches 0.3 to 5 kpc above the Galactic plane beyond
R\({}_{GC}\) = 10 to 30 kpc (e.g. Gum et al., 1960; Lopez-Corredoira et al., 2002; Nakanishi and Sofue, 2003; Levine et al., 2006; Voskes and Butler Burton, 2006; Chen et al., 2019; Skowron et al., 2019, 2019; Lemasle et al., 2022), highlighting the significant dimensions of the warp structure.
Furthermore, the orbital inclination of the warp can lead to a systematic velocity in the vertical direction, particularly close to the line-of-node (LON) where the warp surface intersects with the Galactic plane. During the ascending semicycle, this systematic velocity often displays an upward trend while the opposite is observed in the descending semicycle. Multiple statistical studies have identified this trend using various stellar tracers (Lopez-Corredoira et al., 2014; Liu et al., 2017; Poggio et al., 2017), especially with the improved accuracy of proper motion data from later Gaia releases (Gaia Collaboration et al., 2018, 2021, 2022b). Additionally, the vertical velocity of stars in low-latitude regions near the Galactic plane can be determined by its proper motion \(\mu_{b}\) and is less influenced by the radial velocity (e.g. Poggio et al., 2018, 2020; Romero-Gomez et al., 2019; Wang et al., 2020; Li et al., 2020; Cheng et al., 2020; Li et al., 2023; Dehnen et al., 2023).
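As a concrete illustration of why \(\mu_{b}\) dominates near the plane, the heliocentric vertical velocity of a star can be written as \(W=v_{r}\sin b+4.74\,d\,\mu_{b}\cos b\) (with \(d\) in kpc and \(\mu_{b}\) in mas yr\({}^{-1}\)); a minimal sketch is given below, before any correction for the Solar motion.

```python
import numpy as np

K = 4.74047  # km/s per (mas/yr * kpc)

def vertical_velocity(d_kpc, b_deg, mu_b_mas_yr, v_r_kms=0.0):
    """Heliocentric vertical velocity W = v_r*sin(b) + K*d*mu_b*cos(b).

    Near the Galactic plane (small |b|) the radial-velocity term is strongly
    suppressed, so W is driven almost entirely by the proper motion mu_b.
    """
    b = np.radians(b_deg)
    return v_r_kms * np.sin(b) + K * d_kpc * mu_b_mas_yr * np.cos(b)

# e.g. a cluster at d = 4 kpc, b = 1 deg, mu_b = 0.3 mas/yr -> W ~ 5.7 km/s
```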
The question of whether the Milky Way's warped disk is precessing has sparked debate. Poggio et al. (2020) and Cheng et al. (2020) proposed that there is precession of around 10.9 and 13.6 km s\({}^{-1}\) kpc\({}^{-1}\)in the outer Galactic disk, respectively, while others argue that there is no significant precession (e.g. Wang et al., 2020; Chrobakova and Lopez-Corredoira, 2021). The debate stems from the fact that the precession velocity calculation depends on different tracers and warp models. Recently, Dehnen et al. (2023) introduced a new perspective on precession in the outer disk of the Milky Way based on classical Cepheids, which takes into account the unsteady inclination (known as the inclinational variation rate \(\dot{\theta}_{\rm r}\) in this context) for the first time. This study showed a precession rate that gradually decreases from 12 (R\({}_{GC}\) = 12 kpc) to 6 (R\({}_{GC}\) = 14 kpc) km s\({}^{-1}\) kpc\({}^{-1}\)without any inclination variations. However, the previously mentioned studies only indicate precession (or lack thereof) in the Galactic outer disk (R\({}_{GC}\) > 9 kpc), without any evidence of warping in the inner Galaxy.
Star clusters are essential in Galactic studies because they provide more precise distance and motion estimates compared to individual stellar tracers. However, only a few open clusters (OCs) located in the southern warp have been identified in previous studies (Cantat-Gaudin et al., 2020). This is primarily due to the scarcity of distant OCs, making it difficult to explore the geometry of the Milky Way warp on a larger scale. Recently, we discovered nearly 1500 reliable open clusters through machine learning and visual inspection based on Gaia DR3 (He et al., 2023b). This new result represents a substantial improvement over previous studies, which identified only one-fourth of the present sample's distant OCs located more than 4 kpc away. This immense dataset presents an opportunity to explore the wider disk via OCs in a way that was previously unattainable. The primary objective of this study is to utilize this extensive sample of Gaia DR3-based OCs to gather vital geometric and kinematic information throughout the Galactic disk. By doing so, it may provide us with further insight into the Milky Way's warp.
Section 2 introduces the selection process and the warp structure inferred from open clusters and classical Cepheids. Sections 3 and 4 present the geometry and kinematic results, respectively. We analyze how orbital inclination, precession, and inclinational variation change at different radii. The study presents an in-depth analysis of these findings, providing necessary insights into the warp structure traced by the sample. Section 5 summarizes the paper's main findings and presents potential future research opportunities. Finally, Appendix A presents the error analysis of the OC sample, and Appendix B shows a sketch map of the elliptical inclined plane.
## 2 OC Sample and warp structure
Our study utilized a large sample of 2017 open clusters from Cantat-Gaudin et al. (2020), previously used to study the structure of the Milky Way disk within approximately 4 kpc of the Solar system in the Gaia DR2 era. We then cross-matched the member stars of these clusters in Gaia DR3, selecting only clusters with 20 or more member stars (a total of 1837 clusters). Additionally, we obtained newly identified star clusters from Gaia DR2 and EDR3 catalogs, each containing over 500 clusters, including 615 in He et al. (2021, 2022), 628 in Castro-Ginard et al. (2022), and 1656 in He et al. (2023a). We cross-matched the member stars of these clusters in Gaia DR3 to obtain additional line-of-sight velocity information. We removed 138 duplicate clusters (from He et al., 2021, 2022; Castro-Ginard et al., 2022) and 746 clusters with fewer than 20 member stars (mostly in He et al., 2023a). In total, the combined sample contains 3852 open clusters, among which 92% are within 4 kpc. Our study's sample provides a more extensive dataset for analyzing the structure of the Milky Way's disk.
Recently, a machine learning algorithm was employed by He et al. (2023b) to identify and verify 2085 star clusters/candidates, out of which 1488 were reliable OCs (Type 1) verified through visual inspection. A total of 944 Type 1 OCs were located beyond 4 kpc, some with large extinction values (A\({}_{V}\) > 5 mag). Each star cluster underwent isochrone fitting and visual inspection, resulting in a more in-depth analysis of distant OCs' geometric structure. The coordinates of the above nearby OC samples, combined with Type 1 distant OCs, were transformed into the Galactocentric coordinate system (more details in Appendix A). A significant number of distant star clusters demonstrate the twisted/spiral structures of the Milky Way, as shown in Figure 1.
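As a minimal sketch of this transformation (the exact frame parameters adopted in Appendix A may differ), the astrometry of a cluster can be converted to Galactocentric Cartesian coordinates and velocities with astropy:

```python
import astropy.units as u
from astropy.coordinates import SkyCoord, Galactocentric

# One open cluster given by its mean ICRS astrometry (illustrative values)
c = SkyCoord(ra=120.0 * u.deg, dec=-35.0 * u.deg, distance=4.0 * u.kpc,
             pm_ra_cosdec=-2.1 * u.mas / u.yr, pm_dec=1.3 * u.mas / u.yr,
             radial_velocity=25.0 * u.km / u.s)

# Galactocentric frame with R_sun = 8.15 kpc (Reid et al. 2019); other frame
# parameters (z_sun, Solar motion) are left at the astropy defaults here
frame = Galactocentric(galcen_distance=8.15 * u.kpc)
gc = c.transform_to(frame)

x, y, z = gc.x, gc.y, gc.z   # Cartesian positions (kpc)
v_z = gc.v_z                 # vertical velocity (km/s)
```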
This is the second celestial body type, after classical Cepheids (as shown in Figure 1, grey symbols), to directly trace the twisted geometric structure. The highest scale height can reach over 1.5 kpc, both in the southern (\(\theta\) < 0) and northern (\(\theta\) > 0) Galaxy. However, the edge-on image from the anti-Galactocentric line shown in Figure 1 only shows the warp in the outer disk. Are there any other tilted or warped structures in the inner regions?
## 3 Disk inclination
To examine the spatial distribution of open clusters in the inner and outer Galactic disk, we present an edge-on view of the plane at different radii using a 2 kpc bin (Figure 2), from the Galactic anticenter direction. The degree of tilt of the disc traced by the clusters increases with radius, demonstrating that the Milky Way's disc is tilted at various radii. Although the tilt is not very apparent at 6 kpc, a trend of tilt angle change can be observed. Remarkably, the distribution of Cepheid variables exhibits the same tilt pattern, and their distances obtained through the period-luminosity relation indicate consistency with the geometric distribution derived from Gaia's parallax measurements. Our analysis suggests that the tilted disc structure is not unique to the outer disk, as previously thought, but rather extends to other regions. Such findings could help to refine our understanding of the Milky Way's structure.
To shed more light on our findings, we categorized the OC samples into three sub-groups based on their age: young (YOCs < 100 Myr), middle-aged (MOCs: 0.1-1 Gyr), and old (OOCs > 1 Gyr) OCs. Interestingly, the inclination angles observed by different age tracers show only slight variations, indicating that YOCs and MOCs are highly consistent. On the other hand, OOCs usually show varying degrees of tilt; however, the trend of the tilt angle change is consistent with the other samples. We attribute this inconsistency among older clusters to radial migration or incomplete sub-samples. As presented in Figure 2, there is a consistent inclination angle of about 0.6 degrees on the orbit where the Sun is located. This value remains consistent for YOCs, MOCs, and CCs, with only a slightly larger inclination in OOCs. However, in the case of older clusters, the member stars in the main sequence exhibit a tendency to be fainter compared to their younger counterparts. This difference in brightness has the potential to result in notable variations in the observed distribution between the older OCs and the younger ones, particularly under heavy extinctions. Consequently, the apparent strong pattern observed in older samples, particularly within the 3 to 5 kpc range of R\({}_{GC}\) (Figure 2), may not accurately reflect the true inclination.
Figure 1: The distribution of open clusters in the XY plane is depicted in both the face-on (a.) and edge-on (b.) views, with Galactocentric coordinates (\(\phi,\theta\)) also displayed. In the left panel, the black arrow indicates the rotation direction of the Milky Way, and the position of the Solar system is (-8.15, 0) kpc, according to Reid et al. (2019). Red dots represent distant open clusters from He et al. (2023b), while blue crosses denote other Gaia-based open clusters (Cantat-Gaudin et al., 2020; Castro-Ginard et al., 2022; He et al., 2021, 2022, 2023a). For comparison, the gray pluses signify the CC sample (Skowron et al., 2019a), showing a similar intuitive plot of the warp structure (based on CCs) from Chen et al. (2019) and Lemasle et al. (2022), except for the southern warp at \(\phi\sim 270^{\circ}\), which still lacks OC samples.
To calculate the Milky Way disc's inclination angle (\(\theta_{i}\)) and minimize uncertainties caused by bin selection and LON positions, we fitted the OC sample at different radii using a 0.5 kpc step and obtained the LON position from the OC kinematics (Section 4). During this process, we discovered that the disk does not share a cohesive plane at different radii and that the scale height of the Solar system varies across the literature. Additionally, lopsided warps are present at various radii, particularly in the outer arm regions. We believe this is due to the objects' orbits not being circular but instead elliptical (discussed in Section 5). Nevertheless, to simplify our calculation, we used a circular orbit approximation and added \(d_{Z_{0}}\) to eliminate the impact of orbital ellipticity/\(Z_{\odot}\) on the scale height:
\[Z(R_{GC},\phi_{GC},\phi_{LON})_{obs}=R_{GC}\times\sin(\phi_{LON}-\phi_{GC}) \times\sin\theta_{i}+d_{Z_{0}} \tag{1}\]
Here, \(\phi_{LON}\) and \(\phi_{GC}\) represent the position of LON and OC samples in Galactocentric coordinates, and \(Z_{obs}\) denotes the OCs' scale height. Figure 3-a displays the inclination angle of the disk at different radii, and the error bars reflect the uncertainty in using OC samples of varying age limits. From the inner disk to the outer disk, the tilt angle gradually increases, with the absolute value of the tilt angle being approximately 0\({}^{\circ}\) near R\({}_{GC}\) = 6 kpc, and the tilt direction changes. The tilt angle experiences a slight decrease near R\({}_{GC}\) = 10 kpc, followed by a rapid increase to 3 degrees at R\({}_{GC}\) = 14 kpc. Unfortunately, the insufficient OC samples within R\({}_{GC}\) = 4 kpc leaves the orbital tilt in the Milky Way bulge unknown. Nevertheless, the limited information we do have suggests a negative trend in the inclination angle at that location.
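A possible implementation of the Equation 1 fit for the clusters of one radial bin is sketched below (scipy-based, with \(\theta_{i}\) in degrees and \(\phi_{LON}\) held fixed at the kinematic value from Section 4; the actual per-bin fitting details in this work may differ).

```python
import numpy as np
from scipy.optimize import curve_fit

def warp_height(xdata, theta_i_deg, d_z0, phi_lon_deg):
    """Equation 1: Z = R_GC * sin(phi_LON - phi_GC) * sin(theta_i) + d_Z0 (kpc).

    `xdata` stacks the per-cluster (phi_GC [deg], R_GC [kpc]) values.
    """
    phi_gc_deg, r_gc_kpc = xdata
    dphi = np.radians(phi_lon_deg - phi_gc_deg)
    return r_gc_kpc * np.sin(dphi) * np.sin(np.radians(theta_i_deg)) + d_z0

def fit_inclination(phi_gc_deg, r_gc_kpc, z_obs_kpc, phi_lon_deg):
    """Fit (theta_i, d_Z0) for the clusters of one R_GC bin, keeping phi_LON fixed."""
    model = lambda x, theta_i, d_z0: warp_height(x, theta_i, d_z0, phi_lon_deg)
    popt, pcov = curve_fit(model, (phi_gc_deg, r_gc_kpc), z_obs_kpc, p0=[1.0, 0.0])
    return popt, np.sqrt(np.diag(pcov))   # best-fit values and 1-sigma errors
```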
The consistent geometric features of the tilted disk displayed by different sub-samples, as illustrated in Figure 2 and Figure 3, indicate that the samples utilized in this study are adequately representative for investigating the warp structures, especially for YOCs and MOCs. The degree of tilt at most radii appears to be less reliant on the sample selection. Additionally, as previously stated, orbital tilt is not solely reflected in geometric features; it also produces a systematic vertical velocity in kinematics, which is unaffected by extinction in the Galactic inner disk. As shown in Figure 4, a notable negative value is presented in the vertical velocity in the
Figure 2: The edge-on views display Gaia-based open clusters (colored dots) and classical Cepheids (black dots) from Skowron et al. (2019) located at varying Galactocentric radii around R\({}_{GC}\) = 4, 6, 8, 10, 12, and 14 kpc, all within a width range of \(\pm\) 1 kpc. The gray dashed line shows the linear regression of the inclination angle along the line of sight toward the Galactic anticenter. Young (\(<10^{8}\) yr), middle-aged (\(10^{8}\) - \(10^{9}\) yr), and old (\(>10^{9}\) yr) OCs are identified by the blue, magenta, and red symbols, respectively. As the radius increases, the thickness of the OC/CC disks gradually increases, displaying flaring characteristics. Additionally, compared to younger open clusters, the older ones tend to have higher scale heights, as stated in Cantat-Gaudin et al. (2020). It is essential to note that the LON is not situated at Y = 0 kpc, and the fitted \(\theta_{i}\) is visible in Figure 3-a.
inner Galactic disc. This value is still maintained despite uncertainties regarding the Sun's peculiar motion, orbital precession, and inclinational variation in the Galaxy. When combined with their geometric features, these compelling pieces of evidence demonstrate that the orbit of the innermost regions of the Milky Way is also tilted, and the direction of this tilt is opposite to that of the outer disk.
## 4 Galactic motion
As mentioned above, a tilted disc can cause a vertical velocity component that reaches its maximum near the LON and gradually decreases towards zero at the highest point of the warp, i.e., at the northern/southern solstices 90 degrees away from the LON. Moreover, a positive LON rotation (precession, \(\dot{\phi}_{LON}\)) can weaken the systematic vertical velocity V\({}_{Z}\) caused by this tilted orbit. Figure 4 displays the measured results at R\({}_{GC}\) = 13 kpc, revealing that if only tilt and no precession existed, the observed vertical velocity would be higher. However, the observed velocity is consistent with a precession rate of \(\dot{\phi}_{LON}\) = 6.8 km s\({}^{-1}\) kpc\({}^{-1}\), which is lower than what some previous studies (Poggio et al., 2020; Cheng et al., 2020) have reported. Nonetheless, this outcome is independent of any warping or kinematic model, confirming the presence of the precession of the Milky Way.
The Galactic coordinate system exhibits hints of rotation, ranging from 0.05 to 0.8 mas yr\({}^{-1}\), based on various objects and observations (e.g. Miyamoto and Zhu, 1998; Zhu, 2000; Bobylev and Bajkova, 2019). Lindegren et al. (2018) identified in Gaia data that the rotation is not more than 0.15 mas yr\({}^{-1}\), showing 0.1 mas yr\({}^{-1}\) when several VLBA sources were compared (Lindegren, 2020). Such a magnitude of frame rotation can also affect the proper motion \(\mu_{b}\) or vertical velocity in Solar orbit systematically. For instance, a 50 \(\mu\)as yr\({}^{-1}\) rate may cause 0 (\(\phi_{LON}\) = 0\({}^{\circ}\)) to \(\sim\pm\) 2 km s\({}^{-1}\) (\(\phi_{LON}\) = \(\mp\) 90\({}^{\circ}\)) vertical velocity around the Solar orbit. Additionally, it is reasonable to consider the variation of inclination, since the inclination itself must have increased/decreased to reach its current value. Nevertheless, previous investigations on warping disks did not consider its effect, except for the recent work of Dehnen et al. (2023). In this study, the precession \(\dot{\phi}_{LON}\) and mean (since the orbit is not a rigid body) inclinational variation
Figure 3: The derived parameters from Equations 1 and 2 are presented with error bars accounting for uncertainties in OC sample age limitations. The parameters include: (a) inclination angle \(\theta_{i}\), (b) LON positions \(\phi_{LON}\), (c) precession rate \(\dot{\phi}_{LON}\), (d) inclinational variation rate \(\dot{\theta}_{i}\), (e) differential scale height d\({}_{Z_{0}}\), and (f) uncertainty (from V\({}_{Z\odot}\)) of \(\dot{\phi}_{LON}\), plotted against Galactocentric radius R\({}_{GC}\). The typical uncertainty resulting from V\({}_{Z\odot}\) for \(\dot{\theta}_{i}\) is 4.3 \(\mu\)as yr\({}^{-1}\).
\(\dot{\theta}_{i}\) are described by the simple equation:
\[V_{Z}(R_{GC},\phi_{GC},\theta_{i})_{obs}=\frac{|\theta_{i}|}{\theta_{i}}\times( \frac{R_{GC}\times(\dot{\phi}_{GC}-\dot{\phi}_{LON})}{\sqrt{\frac{1+\tan^{-2}( \theta_{i})}{\cos(\phi_{LON}-\phi_{GC})}-1}})+\dot{\theta}_{i}\times R_{GC} \times\sin(\phi_{LON}-\phi_{GC})\times\cos(\theta_{i})-V_{Z\odot} \tag{2}\]
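As a purely illustrative aid (not part of the original analysis), the following minimal sketch evaluates Equation 2 numerically for assumed, hypothetical parameter values. It reads \(\tan^{-2}\) as \(\cot^{2}\), takes \(\dot{\phi}_{GC}=V_{\phi}/R_{GC}\), converts \(\dot{\theta}_{i}\) using 1 mas yr\({}^{-1}\) at 1 kpc \(\approx\) 4.74 km s\({}^{-1}\), and is only valid within 90\({}^{\circ}\) of the LON.

```python
import numpy as np

KMS_PER_MASYR_KPC = 4.74   # 1 mas/yr at 1 kpc corresponds to ~4.74 km/s

def v_z_model(phi_gc_deg, R_gc=13.0, V_phi=236.0, theta_i_deg=-2.0,
              phi_lon_deg=170.0, prec=6.8, dtheta_i_masyr=0.030, V_zsun=1.0):
    """Evaluate Eq. (2) for hypothetical parameter values (illustration only)."""
    theta = np.radians(theta_i_deg)
    dphi = np.radians(phi_lon_deg - phi_gc_deg)          # phi_LON - phi_GC
    phi_dot_gc = V_phi / R_gc                             # circular angular speed, km/s/kpc
    tilt_term = (np.sign(theta) * R_gc * (phi_dot_gc - prec)
                 / np.sqrt((1.0 + 1.0 / np.tan(theta)**2) / np.cos(dphi) - 1.0))
    incl_term = dtheta_i_masyr * KMS_PER_MASYR_KPC * R_gc * np.sin(dphi) * np.cos(theta)
    return tilt_term + incl_term - V_zsun

print(v_z_model(phi_gc_deg=170.0))   # near the LON the tilt-induced term is largest
```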
The most significant uncertainty when calculating the precession rate is V\({}_{Z\odot}\) due to the 0.6-degree inclination near the Solar orbit, along with potential uncertainties from Solar peculiar motion, precession, and inclinational variation near the Sun. To compensate, an uncertainty of V\({}_{Z\odot}\) = 1 \(\pm\) 1 km s\({}^{-1}\) was added. Using the OC kinematics, LON positions of all radii were fitted, leading to a wide distribution around \(\phi\) = 170\({}^{\circ}\) (Figure 3-b). The LON positions in the outer Galactic disk range from \(\phi\) = 145\({}^{\circ}\) (R\({}_{GC}\) = 8 kpc) to 180\({}^{\circ}\) (R\({}_{GC}\) = 14 kpc), consistent with the LON positions from CC samples (Chen et al., 2019), whereas the inner regions show an opposite trend. Additionally, our study revealed that the potential systematic movement associated with the rotation of the LON around the line-of-solstice is negligible, with an error bar of \(\sim\)10 \(\mu\)as yr\({}^{-1}\).
In the inner disk, despite a large sub-sample size, the small absolute value of the tilt angle (especially at R\({}_{GC}\) = 5 to 7 kpc, where \(\theta_{i}\) is close to zero) results in a significant influence of the V\({}_{Z\odot}\) uncertainty on the outcome. A slight change in V\({}_{Z\odot}\) can cause a large change in \(\dot{\phi}_{LON}\) (Figure 3-f). Nevertheless, considering the uncertainty, a positive precession rate (5 km s\({}^{-1}\) kpc\({}^{-1}\) or greater) can be observed at R\({}_{GC}\) = 4 kpc, indicating the presence of precession characteristics in the inner disk, similar to the outer disk. Figure 3-d shows the inclinational variation across the Milky Way disk, where \(\dot{\theta}_{i}\) is higher in the inner disk and decreases with a slope of -8.9 \(\mu\)as yr\({}^{-1}\) kpc\({}^{-1}\) towards the outer disk. The \(\dot{\theta}_{i}\) in the region where the Solar system is located is \(59.1\pm 11.2_{sample}\pm 7.7_{V_{Z\odot}}\)\(\mu\)as yr\({}^{-1}\), consistent with hints of previous findings (Lindegren et al., 2018; Bobylev and Bajkova, 2019; Lindegren, 2020). This inclinational variation across the disk suggests that the rotation of the LON reflects not only the motion of the Solar system but also the interior motion of the entire Galaxy.
## 5 Discussion and Conclusion
Figure 4: The edge-on views of OCs in the inner (left upper panel) and outer (right upper panel) Galactic disk, along with their corresponding vertical velocity distributions (displayed in the lower panels), are presented. A linear regression (gray dashed line) is applied to demonstrate the smaller inclination angles of the inner disk for clear visualization. Even though the inner disk does not exhibit the same significant tilt as the outer disk, a noticeable negative V\({}_{Z}\) is still observed, consistent with the direction of the tilt-induced velocity deviation. In the right lower panel, the black dashed line and cyan dashed line represent the vertical velocity when the precession rate \(\dot{\phi}_{LON}\) = 0 and 6.8 km s\({}^{-1}\) kpc\({}^{-1}\), respectively. It can be observed that the \(\dot{\phi}_{LON}\) = 0 case does not reproduce the observed V\({}_{Z}\) distribution, consistent with the study conducted by Poggio et al. (2020) utilizing proper motions of giant stars.
The findings presented in Figure 3-e reveal that the \(d_{Z_{0}}\) values are not zero, suggesting that the median heights of the tilted Milky Way disk do not lie on the same plane. This is likely an indication that the orbital motion of the disk in the Milky Way follows a tilted elliptical plane instead of a circular orbit. Upon closer inspection, it was observed that \(d_{Z_{0}}\) steadily increases from R\({}_{GC}\) = 4 to 9 kpc, which could be attributed to the varying position between the two foci at different orbital radii (as depicted in the sketch map of Appendix B). However, relying solely on elliptical orbits, the observed trend would suggest that the eccentricity of the orbits in the inner disk is greater than 0.2 and could even be as high as 0.3, which is not significant enough to serve as evidence. High eccentricity may cause the lopsided warp, but it also gives rise to systematic radial motion near the solar orbit. Therefore, further research is necessary to gain a more profound understanding of this phenomenon.
This study presents, for the first time, observational evidence for precession and inclinational variation spanning both the inner and outer disks. The analysis of OC samples of different ages produced significant results, revealing that the precession rates of the Galactic disk are lower than previously reported in the literature. This deviation is attributable to the fact that the systematic vertical velocity, which affects the local standard of rest on the Solar orbit, has been taken into account. Additionally, a global inclinational variation spread across the Galactic disk may suggest that the Galactic disk is presently undergoing a shift in inclination. However, further observations and simulations are necessary to better examine its origin. It should be noted that the considerable rotation of the Solar orbit may impact high-precision astrometry, necessitating additional analysis to identify any potential effects of precession and inclinational variation on the coordinate system.
## 6 Acknowledgements
This work has made use of data from the European Space Agency (ESA) mission GAIA ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the GAIA Data Processing and Analysis Consortium (DPAC,[https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the GAIA Multilateral Agreement. This work is supported by "Young Data Scientists" project of the National Astronomical Data Center, CAS (NADC2023YDS-07); and Fundamental Research Funds of China West Normal University (CWNU, No.21E030).
## Appendix A Data Processing and Error Analysis
We employed the Gaia DR3 data to derive the cluster position and velocity components from the member stars of 5340 open clusters. To do so, we first calculated the weighted means of the member stars' parameters and then adjusted the parallax by adding 0.017 mas (parallax zero point, Lindegren et al., 2021; Fabricius et al., 2021). By adopting the Solar system's position, [R\({}_{GC_{0}}\), Z\({}_{\odot}\)] = [8.15 kpc, 5.5 pc], the Solar peculiar motion, [U\({}_{\odot}\), V\({}_{\odot}\), W\({}_{\odot}\)] = [10.6, 10.7, 7.6] km s\({}^{-1}\), and V\({}_{\phi 0}\) = 236 km s\({}^{-1}\) (Reid et al., 2019), we transformed these values into a coordinate system centered on the Galactic center (R\({}_{GC}\), \(\phi\), Z) and a three-dimensional velocity (V\({}_{R_{GC}}\), V\({}_{\phi}\), V\({}_{Z}\)). It is worth noting that 1269 of the 5340 clusters did not have radial velocities; for these we estimated the median value by selecting OCs within (\(l\pm\) 10\({}^{\circ}\), \(b\pm\) 5\({}^{\circ}\), \(\varpi\pm\) 0.05 mas) and set the error to the maximum of 50 km s\({}^{-1}\) and the velocity dispersion of the sample.
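For concreteness, a minimal sketch of the weighted-mean step described above, using hypothetical member-star values (the 0.017 mas parallax zero-point correction is the one quoted in the text):

```python
import numpy as np

def cluster_astrometry(parallax_mas, parallax_err_mas, pmra, pmra_err):
    """Inverse-variance weighted means of member-star astrometry for one cluster."""
    w_plx = 1.0 / np.asarray(parallax_err_mas)**2
    w_pm = 1.0 / np.asarray(pmra_err)**2
    plx = np.average(parallax_mas, weights=w_plx) + 0.017   # parallax zero-point correction, mas
    pm = np.average(pmra, weights=w_pm)
    return plx, pm

# hypothetical member-star values, for illustration only
plx, pm = cluster_astrometry([0.51, 0.49, 0.50], [0.02, 0.03, 0.02],
                             [-1.2, -1.1, -1.3], [0.05, 0.04, 0.06])
print(plx, pm)   # weighted, zero-point-corrected parallax and weighted proper motion
```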
It is worth mentioning that the uncertainty of the vertical velocity component W\({}_{\odot}\) had the most substantial impact on our results. Given the variation in W\({}_{\odot}\) values across different studies (e.g., 7.6, 8.6, and 6.9 km s\({}^{-1}\) from VLBA observations, Gaia data, and others, respectively; Reid et al., 2019; Gaia Collaboration et al., 2022; Semczuk et al., 2023), we included the effect of this uncertainty in our calculation (Section 4). Furthermore, we added \(d_{Z_{0}}\) to reduce the impact of the inclined elliptical orbit and of the scale-height uncertainty of the Solar system (Section 3). To fit the parameters in Figure 3, we selected an open cluster sample with \(\mathrm{err}_{V_{Z}}\)\(<\) 1 km s\({}^{-1}\) and \(\mathrm{err}_{V_{\phi}}\)\(<\) 2.5 km s\({}^{-1}\), which is comparable to the typical intrinsic velocity dispersion of OCs (around 1 km s\({}^{-1}\) or less, Mermilliod et al., 2009), without placing any distance-based constraints. However, we accounted for distance errors while estimating velocity errors, resulting in the exclusion of some star clusters with large distance errors in the velocity selections. Furthermore, we employed a bin size of 2 kpc while fitting the data, effectively minimizing the effects of distance errors (distance error \(<\) 500 pc for 93% of the clusters). Figure 5 depicts the histograms of the derived velocity errors.
## Appendix B Sketch Map of Elliptical Inclined Galactic Plane
Figure 6 depicts the sketch map of the scale height in the elliptical inclined galactic plane. The corresponding median scale height follows the trend of \(d_{Z_{0}}\) in R\({}_{GC}\) = 4 to 11 kpc, as seen in Figure 3-e.
|
2305.00569 | Covering the Crosspolytope with Crosspolytopes | Let $\gamma^d_m(K)$ be the smallest positive number $\lambda$ such that the
convex body $K$ can be covered by $m$ translates of $\lambda K$. Let $K^d$ be
the $d$-dimensional crosspolytope. It will be proved that $\gamma^d_m(K^d)=1$
for $1\le m< 2d$, $d\ge4$; $\gamma^d_m(K^d)=\frac{d-1}{d}$ for
$m=2d,2d+1,2d+2$, $d\ge4$; $\gamma^d_m(K^d)=\frac{d-1}{d}$ for $ m= 2d+3$,
$d=4,5$; $\gamma^d_m(K^d)=\frac{2d-3}{2d-1}$ for $ m= 2d+4$, $d=4$ and
$\gamma^d_m(K^d)\le\frac{2d-3}{2d-1}$ for $ m= 2d+4$, $d\ge5$. Moreover the
Hadwiger's covering conjecture is verified for the $d$-dimensional
crosspolytope. | Antal Joós | 2023-04-30T20:33:29Z | http://arxiv.org/abs/2305.00569v2 | # Covering the crosspolytope with crosspolytopes
###### Abstract.
Let \(\gamma_{m}^{d}(K)\) be the smallest positive number \(\lambda\) such that the convex body \(K\) can be covered by \(m\) translates of \(\lambda K\). Let \(K^{d}\) be the \(d\)-dimensional crosspolytope. It will be proved that \(\gamma_{m}^{d}(K^{d})=1\) for \(1\leq m<2d\), \(d\geq 4\); \(\gamma_{m}^{d}(K^{d})=\frac{d-1}{d}\) for \(m=2d,2d+1,2d+2\), \(d\geq 4\); \(\gamma_{m}^{d}(K^{d})=\frac{d-1}{d}\) for \(m=2d+3\), \(d=4,5\); \(\gamma_{m}^{d}(K^{d})=\frac{2d-3}{2d-1}\) for \(m=2d+4\), \(d=4\) and \(\gamma_{m}^{d}(K^{d})\leq\frac{2d-3}{2d-1}\) for \(m=2d+4\), \(d\geq 5\). Moreover the Hadwiger's covering conjecture is verified for the \(d\)-dimensional crosspolytope.
Key words and phrases:crosspolytope, Hadwiger's covering problem, covering 2010 Mathematics Subject Classification: 52C17, 52A20, 52C07, 52B11
## 1. Introduction
Let \(\mathbb{R}^{d}\) be the \(d\)-dimensional Euclidean space and let \(\mathcal{K}^{d}\) be the set of all convex bodies in \(\mathbb{R}^{d}\) with non-empty interior. Let \(p\) and \(q\) be points in \(\mathbb{R}^{d}\). Let \([p,q]\), \(|pq|\) and \(\overrightarrow{pq}\) denote, respectively, the line segment, the distance and the vector with initial point \(p\) and terminal point \(q\). If \(K\in\mathcal{K}^{d}\), then let \(c^{d}(K)\) denote the _covering number_ of \(K\), i.e., the smallest number of translates of the interior of \(K\) such that their union can cover \(K\). Levi [18] proved in 1955 that
\[c^{2}(K)=\left\{\begin{array}{ll}4&\mbox{if $K$ is a parallelogram,}\\ 3&\mbox{otherwise.}\end{array}\right.\]
In 1957 Hadwiger [12] considered this question in any dimensions and posed
**Conjecture 1** (Hadwiger's covering problem).: _For every \(K\in\mathcal{K}^{d}\) we have \(c^{d}(K)\leq 2^{d}\), where the equality holds if and only if \(K\) is a parallelepiped._
In the literature, the Boltyanski illumination problem is a similar problem (see, e.g., [8], [27], [9]). Lassak [16] proved in 1984 the Hadwiger conjecture for three-dimensional centrally symmetric bodies. Rogers and Zong [22] presented the upper bound \(c^{d}(K)\leq\binom{2d}{d}(d\log d+d\log\log d+5d)\) for general \(d\)-dimensional convex bodies and \(c^{d}(K)\leq 2^{d}(d\log d+d\log\log d+5d)\) for centrally symmetric \(d\)-dimensional convex bodies.
Using the idea of [1], Huang et al. [14] presented the upper bound \(c^{d}(K)\leq c_{1}4^{d}e^{-c_{2}\sqrt{d}}\) for some universal constants \(c_{1},c_{2}>0\). K. Bezdek [2] proved Conjecture 1 for convex polyhedra in \(\mathbb{R}^{3}\) having an affine symmetry and Dekster [11] verified it for convex bodies in \(\mathbb{R}^{3}\) symmetric about a plane.
For \(K\in\mathcal{K}^{d}\) and any positive integer \(m\), let \(\gamma_{m}^{d}(K)\) be the smallest positive number \(\lambda\) such that \(K\) can be covered by \(m\) translations of \(\lambda K\), i.e.,
\[\gamma_{m}^{d}(K)=\min\left\{\lambda\in\mathbb{R}^{+}:\exists(u_{1},\ldots,u_ {m})\in(\mathbb{R}^{d})^{m},\text{ s.t. }K\subseteq\bigcup_{i=1}^{m}(\lambda K+u_{i}) \right\}.\]
This is called the _covering functional_ of \(K\) with respect to \(m\)[28] or the \(m\)_-covering number_ of \(K\)[17][31]. A translate of \(\lambda K^{d}\) is called a _homothetic copy_ of \(K^{d}\). In the following we use the short notation \(\lambda K^{d}\) for a homothetic copy of \(K^{d}\) instead of \(\lambda K^{d}+\mathbf{v}\) where \(\mathbf{v}\) is a vector. Observe, \(\gamma_{m}^{d}(K)=1\), for all \(m\leq d\), and \(\gamma_{m}^{d}(K)\) is a non-increasing step sequence for all positive integers \(m\) and all convex bodies \(K\). Now \(c^{d}(K)\leq m\) for some \(m\in Z^{+}\) if and only if \(\gamma_{m}^{d}(K)<1\)[8]. Estimating covering functionals of convex bodies plays a crucial role in Chuanming Zong's quantitative program for attacking Conjecture 1 (see [31] for more details). Lassak [17] showed that for every two-dimensional convex domain K, \(\gamma_{4}^{2}(K)\leq\frac{\sqrt{2}}{2}.\) Zong [30] proved \(\gamma_{8}^{3}(C)\leq\frac{2}{3}\) for a bounded three-dimensional convex cone \(C\), and \(\gamma_{8}^{3}(B_{p})\leq\sqrt{\frac{2}{3}}\) for all the unit ball \(B_{p}\) of the three-dimensional \(l_{p}\) spaces. Wu and He [29] estimated the value of \(\gamma_{m}^{3}(P)\) where \(P\) is a convex polytope. Wu et al. [28] determined the value of \(\gamma_{m}^{3}(K)\) where \(K\) is the union of two compact convex sets having no interior points. (See [2], [3], [4], [5], [6], [7], [8], [10], [11], [12], [13], [15], [20], [21], [23], [24], [25], [26] for more information.)
Let \(K^{d}\) be the cross polytope in the \(d\)-dimensional Euclidean space with diameter \(2\), that is, \(K^{d}=\{(x_{1},\ldots,x_{d}):|x_{1}|+\ldots+|x_{d}|\leq 1\}\). In 2021 Lian and Zhang [19] proved
\[\gamma_{m}^{3}(K^{3})=\left\{\begin{array}{rl}1&\text{if}\quad m=1,\ldots,5,\\ 2/3&\text{if}\quad m=6,\ldots,9,\\ 3/5&\text{if}\quad m=10,\ldots,13,\\ 4/7&\text{if}\quad m=14,\ldots,17.\end{array}\right.\]
Let \(K\subset\mathbb{R}^{d}\) be a convex body, and denote by \(r\) and \(s\) points in \(K\) such that \(\overrightarrow{rs}\parallel\overrightarrow{pq}\) and \(|rs|\geq|r^{\prime}s^{\prime}|\) where \(\{r^{\prime},s^{\prime}\}\subset K\) and \(r^{\prime}s^{\prime}\parallel pq\). The \(K\)-length of \([p,q]\), or equivalently, the \(K\)-distance of \(p\) and \(q\) is \(2|pq|/|rs|\), and it is denoted by \(d_{K}(p,q)\). If \(K\) is the Euclidean \(d\)-ball, then \(d_{K}(p,q)\) is the Euclidean distance. Let \(||p||_{s}\) be the \(s\)-norm of \(p\), i.e. if \(p=(p_{1},\ldots,p_{d})\), then \(||p||_{s}=\sqrt[s]{\sum_{i=1}^{d}|p_{i}|^{s}}\) for \(s\geq 1\). The \(2\)-norm of \(p-q\) is the Euclidean distance of the points \(p\) and \(q\). Observe, the \(d_{K^{d}}(p,q)\) is the distance of \(p-q\) in \(1\)-norm, i.e. \(d_{K^{d}}(p,q)=||p-q||_{1}\).
**Remark 1**.: _Let \(p\) and \(q\) be different points in \(\mathbb{R}^{d}\). If \([p,q]\) lies in a homothetic copy \(\lambda K^{d}\), then \(\lambda\geq\frac{1}{2}d_{K^{d}}(p,q)=\frac{1}{2}||p-q||_{1}\)._
**Remark 2**.: _Let \(p\) be a point in \(\mathbb{R}^{d}\). If \(p\) lies in a homothetic copy \(\lambda K^{d}\) and \(c\) is the centre of this homothetic copy \(\lambda K^{d}\), then \(d_{K^{d}}(c,p)=||c-p||_{1}\leq\lambda\)._
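The following minimal snippet illustrates Remarks 1 and 2 numerically: the \(K^{d}\)-distance is just the 1-norm, so, for example, any two vertices of \(K^{4}\) are at \(K^{4}\)-distance 2, which is the fact used in the proof of Theorem 1 below.

```python
# Illustration only: d_{K^d}(p, q) = ||p - q||_1 for the crosspolytope K^d.
def d_cross(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

e1, e2, minus_e1 = (1, 0, 0, 0), (0, 1, 0, 0), (-1, 0, 0, 0)
# both pairs of vertices are at K^4-distance 2, so a copy containing two vertices needs lambda >= 1
print(d_cross(e1, e2), d_cross(e1, minus_e1))   # 2 2
```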
**Theorem 1**.: _If \(1\leq m<2d\), then \(\gamma_{m}^{d}(K^{d})=1\)._
Proof.: Of course, if \(\lambda=1\), then there is a homothetic copy \(\lambda K^{d}\) such that \(K^{d}\) is covered by \(\lambda K^{d}\). Thus \(\gamma_{m}^{d}(K^{d})\leq 1\). Assume \(\gamma_{m}^{d}(K^{d})=\mu<1\). Since the number of the vertices of \(K^{d}\) is \(2d\), then there is a homothetic copy \(\mu K^{d}\) which contains two vertices of \(K^{d}\). Let \(v_{1}\) and
\(v_{2}\) be these two vertices. Observe, the opposite facets of \(K^{d}\) are parallel and any edge of \(K^{d}\) connects two vertices of opposite facets. By Remark 1, \(\mu\geq\frac{1}{2}||v_{1}-v_{2}||_{1}=\frac{1}{2}d_{K^{d}}(v_{1},v_{2})=1\), a contradiction.
**Lemma 1**.: _Let \(F\) be a facet of \(K^{d}\). If a vertex \(v\) of \(F\) is covered by \(K^{d}_{1}\) a homothetic copy of \(K^{d}\) with ratio \(\lambda\) (\(0<\lambda<1\)), then \(F\cap K^{d}_{1}\) is contained in the homothetic image of \(F\) with ratio \(\lambda\) and centre \(v\)._
Proof.: Let \(\alpha\) be the hyperplane \(x_{1}+\ldots+x_{d}=1\). Without loss of generality it may be assumed that \(v=(0,\ldots,0,1)\) and \(F\) is the facet on the hyperplane \(\alpha\). Let \(\mathcal{A}\) be the affine transformation such that \(K^{d}_{1}=\mathcal{A}(K^{d})\). It may be assumed that \(\mathcal{A}(o)\) lies in \(K^{d}\). Let \(w(w_{1},\ldots,w_{d})=\mathcal{A}(v)\). Let us assume that \(o\not\in K^{d}_{1}\) (the opposite case is similar). Let \(F_{1}\) be the facet of \(K^{d}\) on the hyperplane \(x_{1}+\ldots+x_{d-1}-x_{d}=1\). Let \(p(p_{1},\ldots,p_{d})\) be a point of \(F\cap\mathcal{A}(F_{1})\). First we will see that \(p_{d}\geq 1-\lambda\). Observe, the equation of the hyperplane containing \(\mathcal{A}(F_{1})\) is
\[x_{1}-w_{1}+x_{2}-w_{2}+\ldots+x_{d-1}-w_{d-1}-x_{d}+w_{d}-\lambda=\lambda.\]
Since \(p\) lies on \(F\) and \(\mathcal{A}(F_{1})\), we have
\[p_{1}+\ldots+p_{d}=1 \tag{1}\]
and
\[p_{1}+p_{2}+\ldots+p_{d-1}-p_{d}=w_{1}+\ldots+w_{d-1}-w_{d}+2\lambda. \tag{2}\]
From (1)-(2) we have
\[p_{d}=\frac{1}{2}+\frac{1}{2}(-w_{1}-\ldots-w_{d-1}+w_{d})-\lambda \tag{3}\]
Observe, the point \(w\) lies in the translated image of \(K^{d}\) by the vector \([0,\ldots,0,2]^{T}\). Since \(v\) is covered by \(K^{d}_{1}\), the point \(w\) lies in the halfspace bounded by the hyperplane \(-x_{1}-\ldots-x_{d-1}+x_{d}=1\) and does not contain the origin. Thus
\[-w_{1}-\ldots-w_{d-1}+w_{d}\geq 1.\]
Substituting this into (3), we have
\[p_{d}\geq\frac{1}{2}+\frac{1}{2}-\lambda=1-\lambda\]
Observe \(K^{d}_{1}\) lies in the halfspace bounded by the hyperplane \(\mathcal{A}(F_{1})\) and containing the vertex \(v\). Moreover \(p\) is an arbitrary point of the intersection of this halfspace and \(F\). This means any point from the intersection has the property that the last coordinate of the point is not less than \(1-\lambda\), which completes the proof of the lemma.
**Theorem 2**.: _If \(m=2d\), then \(\gamma^{d}_{m}(K^{d})=\frac{d-1}{d}\)._
Proof.: Let \(\Gamma=\{\frac{d-1}{d}K^{d}+(\pm\frac{1}{d},0\ldots,0),\ldots,\frac{d-1}{d}K^ {d}+(0\ldots,0,\pm\frac{1}{d})\}\). Since the homothetic copies in \(\Gamma\) cover the surface of \(K^{d}\) and the origin lies in each element in \(\Gamma\), \(K^{d}\) is covered by \(\Gamma\). Thus \(\gamma^{d}_{m}(K^{d})\leq\frac{d-1}{d}\).
Assume \(\gamma^{d}_{m}(K^{d})=\mu<\frac{d-1}{d}\). Since \(\mu<\frac{d-1}{d}<1\), there is no homothetic copy containing two vertices of \(K^{d}\). Let \(F\) be a facet of \(K^{d}\) lying on the hyperplane \(x_{1}+\ldots+x_{d}=1\) and let \(c\) be the centre of the \((d-1)\)-simplex \(F\). Observe \(c(1/d,\ldots,1/d)\). By Lemma 1, \(c\) and a vertex of \(F\) cannot lie in the same homothetic copy \(\mu K^{d}\). Since \(\frac{d-1}{d}<1\), \(c\) and a vertex of \(K^{d}\) on the plane \(x_{1}+\ldots+x_{d}=-1\) cannot lie in the same homothetic copy \(\mu K^{d}\), a contradiction.
Observe, the proof of Theorem 2 comes from Remark 1. Indeed, \(||c-v||_{1}\geq 2\frac{d-1}{d}\), where \(v\) is a vertex of \(K^{d}\).
**Corollary 1**.: _We have \(c^{d}(K^{d})\leq 2d\) for \(d\geq 4\). Hadwiger's covering problem is solved for the \(d\)-dimensional crosspolytope for \(d\geq 4\)._
**Theorem 3**.: _If \(m=2d+1,2d+2\), then \(\gamma_{m}^{d}(K^{d})=\frac{d-1}{d}\) for \(d\geq 4\)._
Proof.: Since
\[\gamma_{2d+2}^{d}(K^{d})\leq\gamma_{2d+1}^{d}(K^{d})\leq\gamma_{2d}^{d}(K^{d}) =\frac{d-1}{d},\]
from \(\gamma_{2d+2}^{d}(K^{d})=\frac{d-1}{d}\) the statement of the theorem follows. Let it be assumed that \(\gamma_{2d+2}^{d}(K^{d})=\mu<\frac{d-1}{d}\). Let \(c_{1},\ldots,c_{2^{d}}\) be the centres of the facets of \(K^{d}\) and let \(C\) be the \(d\)-cube with vertices \(c_{1},\ldots,c_{2^{d}}\). By Lemma 1, \(2d\) homothetic copies of \(K^{d}\) can cover the \(2d\) vertices of \(K^{d}\) and the points \(c_{1},\ldots,c_{2^{d}}\) are uncovered by these homothetic copies. Let \(S\) be the set containing the vertices of \(K^{d}\) and the centres \((1/d,\ldots,1/d)\) and \((-1/d,\ldots,-1/d)\). By Remark 1, any two points from \(S\) cannot lie in a homothetic copy of \(K^{d}\) with ratio \(\mu\), a contradiction.
**Lemma 2**.: _Let \(c_{1},\ldots,c_{2^{d}}\) be the centres of the facets of \(K^{d}\). If \(\lambda<1\) and the facets containing \(c_{i}\) and \(c_{j}\) are parallel facets, then a homothetic copy \(\lambda K^{d}\) cannot contain \(c_{i}\) and \(c_{j}\)._
Proof.: Without loss of generality it may be assumed that the two hyperplanes are \(\alpha_{1}:x_{1}+\ldots+x_{d}=1\) and \(\alpha_{2}:x_{1}+\ldots+x_{d}=-1\). In this case the centres are \(c_{i}(-1/d,\ldots,-1/d)\) and \(c_{j}(1/d,\ldots,1/d)\). Now \(||c_{i}-c_{j}||_{1}=d_{K^{d}}(c_{i},c_{j})=2\). By Remark 1, a homothetic copy \(\lambda K^{d}\) cannot contain \(c_{i}\) and \(c_{j}\).
**Lemma 3**.: _Let \(c_{1},\ldots,c_{2^{d}}\) be the centres of the facets of \(K^{d}\). If \(\lambda<\frac{d-1}{d}\) and the facets containing \(c_{i}\) and \(c_{j}\) have only a vertex in common, then a homothetic copy \(\lambda K^{d}\) cannot contain \(c_{i}\) and \(c_{j}\)._
Proof.: Without loss of generality it may be assumed that \(c_{i}(1/d,-1/d,\ldots,-1/d)\) and \(c_{j}(1/d,\ldots,1/d)\). Now the common vertex of the facets of \(c_{i}\) and \(c_{j}\) is \((1,0,\ldots,0)\). Thus \(||c_{i}-c_{j}||_{1}=d_{K^{d}}(c_{i},c_{j})=2\frac{d-1}{d}\) and by Remark 1, a homothetic copy \(\lambda K^{d}\) cannot contain \(c_{i}\) and \(c_{j}\).
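As a quick sanity check of Lemmas 2 and 3 (an illustration only), the snippet below enumerates all facet centres of \(K^{d}\) for \(d=4\) and verifies that opposite facets give 1-norm distance 2, while facets sharing exactly one vertex give \(2\frac{d-1}{d}\):

```python
from itertools import product, combinations
from fractions import Fraction

d = 4
# the centre of a facet of K^d is (s_1/d, ..., s_d/d) with signs s_i = +/-1
centres = [tuple(Fraction(s, d) for s in signs) for signs in product((1, -1), repeat=d)]

def l1(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

for p, q in combinations(centres, 2):
    opposite = sum(a != b for a, b in zip(p, q))   # number of opposite signs
    if opposite == d:            # parallel (opposite) facets -> Lemma 2
        assert l1(p, q) == 2
    elif opposite == d - 1:      # facets sharing exactly one vertex -> Lemma 3
        assert l1(p, q) == Fraction(2*(d - 1), d)
print("facet-centre distances consistent with Lemmas 2 and 3")
```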
**Theorem 4**.: _If \(m=2d+3\), then \(\gamma_{m}^{d}(K^{d})=\frac{d-1}{d}\) for \(d=4,5\)._
Proof.: By Theorem 3, \(\gamma_{2d+3}^{d}(K^{d})\leq\frac{d-1}{d}\). Let it be assumed that \(\gamma_{2d+3}^{d}(K^{d})=\mu<\frac{d-1}{d}\). Let \(c_{1},\ldots,c_{2^{d}}\) be the centres of the facets of \(K^{d}\) and let \(C\) be the \(d\)-cube with vertices \(c_{1},\ldots,c_{2^{d}}\). By Lemma 1, the \(2d\) homothetic copies of \(K^{d}\) can cover the \(2d\) vertices of \(K^{d}\) and the points \(c_{1},\ldots,c_{2^{d}}\) are uncovered by these homothetic copies. It will be proved that the vertices of the \(d\)-cube \(C\) cannot be covered by \(3\) homothetic copies of \(K^{d}\) with ratio \(\mu\). Let us assume that the vertices of the \(d\)-cube \(C\) can be covered by \(3\) homothetic copies of \(K^{d}\) with ratio \(\mu\). Observe, \(c_{i}\) and \(c_{j}\) lie on two parallel facets of \(K^{d}\) if and only if \(c_{i}\) and \(c_{j}\) are the endpoints of a diagonal of \(C\). Thus if \(c_{i}\) and \(c_{j}\) are the endpoints of a diagonal of the \(d\)-cube \(C\), then by Lemma 2, \(c_{i}\) and \(c_{j}\) cannot be covered by a homothetic copy \(\mu K^{d}\). Observe, \(c_{i}\) and \(c_{j}\) lie on two facets and the two facets have only one vertex in common if and only if \(c_{i}\) and \(c_{j}\) are the endpoints of a diagonal of a facet of \(C\). Thus if \(c_{i}\) and \(c_{j}\) are the endpoints of a diagonal of a facet of \(C\), then by Lemma 3, \(c_{i}\) and \(c_{j}\) cannot be covered by a homothetic copy \(\mu K^{d}\). We distinguish \(2\) cases.
**Case 1.** We have \(d=4\).
By the Pigeonhole principle there is a homothetic copy - say \(H_{1}\) - that \(H_{1}\) contains 6 vertices of the 4-cube \(C\). Without loss of generality it may be assumed that \(c_{1}=(1/4,1/4,1/4,1/4)\) lies in \(H_{1}\). By Lemma 2, the vertex \((-1/4,-1/4,-1/4,-1/4)\) cannot lie in \(H_{1}\). By Lemma 3, the vertices \((1/4,-1/4,-1/4,-1/4)\), \((-1/4,1/4,-1/4,-1/4)\), \((-1/4,-1/4,1/4,-1/4)\) and \((-1/4,-1/4,-1/4,1/4)\) cannot lie in \(H_{1}\). Let \(c_{2}=(-1/4,1/4,1/4,1/4)\), \(c_{3}=(1/4,-1/4,1/4,1/4)\), \(c_{4}=(1/4,1/4,-1/4,1/4)\), \(c_{5}=(1/4,1/4,1/4,-1/4)\), \(c_{6}=(-1/4,-1/4,1/4,1/4)\), \(c_{7}=(-1/4,1/4,-1/4,1/4)\), \(c_{8}=(-1/4,1/4,1/4,-1/4)\), \(c_{9}=(1/4,-1/4,-1/4,1/4)\), \(c_{10}=(1/4,-1/4,1/4,-1/4)\), \(c_{11}=(1/4,1/4,-1/4,-1/4)\) and \(S^{4}=\{c_{2},c_{3},c_{4},c_{5}\}\). We distinguish 5 subcases.
**Subcase 1.1.** The homothetic copy \(H_{1}\) contains the vertices \(c_{2}\), \(c_{3}\), \(c_{4}\) and \(c_{5}\).
By Lemma 3, \(H_{1}\) cannot contain the vertices \(c_{6},\ldots,c_{11}\). Thus \(H_{1}\) cannot contain 6 vertices of \(C\), a contradiction.
**Subcase 1.2.** The homothetic copy \(H_{1}\) contains exactly three vertices from \(S^{4}\).
Without loss of generality it may be assumed that \(H_{1}\) contains \(c_{2}\), \(c_{3}\) and \(c_{4}\). By Lemma 3, \(H_{1}\) cannot contain the vertices \(c_{6},\ldots,c_{11}\). Thus \(H_{1}\) cannot contain 6 vertices of \(C\), a contradiction.
**Subcase 1.3.** The homothetic copy \(H_{1}\) contains exactly two vertices from \(S^{4}\).
Without loss of generality it may be assumed that \(H_{1}\) contains \(c_{2}\) and \(c_{3}\). By Lemma 3, \(H_{1}\) cannot contain the vertices \(c_{7},\ldots,c_{11}\). Thus \(H_{1}\) cannot contain 6 vertices of \(C\), a contradiction.
**Subcase 1.4.** The homothetic copy \(H_{1}\) contains exactly one vertex from \(S^{4}\).
Without loss of generality it may be assumed that \(H_{1}\) contains \(c_{2}\). By Lemma 3, \(H_{1}\) cannot contain the vertices \(c_{9},\ldots,c_{11}\). Thus \(H_{1}\) cannot contain 6 vertices of \(C\), a contradiction.
**Subcase 1.5.** The homothetic copy \(H_{1}\) does not contain any vertex from \(S^{4}\).
By Lemma 2, \(H_{1}\) cannot contain \(c_{6}\) and \(c_{11}\) in the same time. By Lemma 2, \(H_{1}\) cannot contain \(c_{7}\) and \(c_{10}\) in the same time. Thus \(H_{1}\) cannot contain 6 vertices of \(C\), a contradiction.
**Case 2.** We have \(d=5\).
By the Pigeonhole principle there is a homothetic copy - say \(H_{1}\) - that \(H_{1}\) contains 11 vertices of the 5-cube \(C\). Without loss of generality it may be assumed that \(c_{1}=(1/5,1/5,1/5,1/5,1/5)\) lies in \(H_{1}\). By Lemma 2, the vertex \((-1/5,-1/5,-1/5,-1/5,-1/5)\) cannot lie in \(H_{1}\). By Lemma 3, the vertices \((1/5,-1/5,-1/5,-1/5,-1/5)\), \((-1/5,1/5,-1/5,-1/5,-1/5)\), \((-1/5,-1/5,1/5,-1/5,-1/5)\), \((-1/5,-1/5,-1/5,1/5,-1/5)\) and \((-1/5,-1/5,-1/5,-1/5,1/5)\) cannot lie in \(H_{1}\). Let \(c_{2}=(-1/5,1/5,1/5,1/5,1/5)\), \(c_{3}=(1/5,-1/5,1/5,1/5,1/5)\), \(c_{4}=(1/5,1/5,-1/5,1/5,1/5)\), \(c_{5}=(1/5,1/5,1/5,-1/5,1/5)\), \(c_{6}=(1/5,1/5,1/5,1/5,-1/5)\), \(c_{7}=(-1/5,-1/5,1/5,1/5,1/5)\), \(c_{8}=(-1/5,1/5,-1/5,1/5,1/5)\), \(c_{9}=(-1/5,1/5,1/5,-1/5,1/5)\), \(c_{10}=(-1/5,1/5,1/5,1/5,-1/5)\), \(c_{11}=(1/5,-1/5,-1/5,1/5,1/5)\), \(c_{12}=(1/5,-1/5,1/5,-1/5,1/5)\), \(c_{13}=(1/5,-1/5,1/5,1/5,-1/5)\), \(c_{14}=(1/5,1/5,-1/5,-1/5,1/5)\), \(c_{15}=(1/5,1/5,-1/5,1/5,-1/5)\), \(c_{16}=(1/5,1/5,1/5,-1/5,-1/5)\), \(c_{17}=(-1/5,-1/5,-1/5,1/5,1/5)\), \(c_{18}=(-1/5,-1/5,1/5,-1/5,1/5)\), \(c_{19}=(-1/5,-1/5,1/5,1/5,-1/5)\), \(c_{20}=(-1/5,1/5,-1/5,-1/5,1/5)\), \(c_{21}=(-1/5,1/5,-1/5,1/5,-1/5)\), \(c_{22}=(-1/5,1/5,1/5,-1/5,-1/5)\), \(c_{23}=(1/5,-1/5,-1/5,-1/5,1/5)\), \(c_{24}=(1/5,-1/5,-1/5,1/5,-1/5)\), \(c_{25}=(1/5,-1/5,1/5,-1/5,-1/5)\), \(c_{26}=(1/5,1/5,-1/5,-1/5,-1/5)\) and \(S^{5}=\{c_{2},c_{3},c_{4},c_{5},c_{6}\}\). We distinguish 6 subcases.
**Subcase 2.1.** The homothetic copy \(H_{1}\) contains the vertices \(c_{2}\), \(c_{3}\), \(c_{4}\), \(c_{5}\) and \(c_{6}\).
By Lemma 3, \(c_{17},\ldots,c_{26}\) cannot lie in \(H_{1}\). Let it be assumed that \(H_{1}\) contains \(c_{7}\). (The
cases \(H_{1}\) contains \(c_{8},\ldots,c_{15}\), or \(c_{16}\) are similar.) By Lemma 3, \(H_{1}\) cannot contain \(c_{14}\), \(c_{15}\) or \(c_{16}\). By Lemma 3, \(c_{8}\) and \(c_{12}\) cannot lie in \(H_{1}\) in the same time. By Lemma 3, \(c_{9}\) and \(c_{13}\) cannot lie in \(H_{1}\) in the same time. By Lemma 3, \(c_{10}\) and \(c_{11}\) cannot lie in \(H_{1}\) in the same time. Thus \(H_{1}\) cannot contain \(11\) vertices of \(C\), a contradiction.
**Subcase 2.2.** The homothetic copy \(H_{1}\) contains exactly four vertices from \(S^{5}\).
Without loss of generality it may be assumed that \(H_{1}\) contains \(c_{2}\), \(c_{3}\), \(c_{4}\) and \(c_{5}\). By Lemma 3, \(H_{1}\) cannot contain the vertices \(c_{17},\ldots,c_{26}\).
Let us assume that \(H_{1}\) contains \(c_{7}\). (The cases that \(H_{1}\) contains \(c_{8}\), \(c_{9}\), \(c_{11}\), \(c_{12}\) or \(c_{14}\) are similar.) By Lemma 3, \(H_{1}\) cannot contain \(c_{14}\), \(c_{15}\) or \(c_{16}\). By Lemma 3, \(H_{1}\) cannot contain \(c_{12}\), \(c_{13}\) or \(c_{16}\). By Lemma 3, \(H_{1}\) cannot contain \(c_{10}\) and \(c_{11}\) in the same time. By Lemma 3, \(H_{1}\) cannot contain \(c_{9}\) and \(c_{11}\) in the same time. By Lemma 3, \(H_{1}\) cannot contain \(c_{10}\) and \(c_{14}\) in the same time. Thus \(H_{1}\) cannot contain \(11\) vertices of \(C\), a contradiction.
Let us assume that \(H_{1}\) contains \(c_{14}\) (and does not contain the vertices \(c_{7},\ldots,c_{12}\) or \(c_{13}\)). (The cases \(H_{1}\) contains \(c_{15}\) or \(c_{16}\) are similar.) By Lemma 3, \(H_{1}\) cannot contain \(c_{14}\), \(c_{15}\) or \(c_{16}\). By Lemma 3, \(H_{1}\) cannot contain \(c_{8}\) and \(c_{12}\) in the same time. By Lemma 3, \(H_{1}\) cannot contain \(c_{9}\) and \(c_{13}\) in the same time. By Lemma 3, \(H_{1}\) cannot contain \(c_{10}\) and \(c_{11}\) in the same time. Thus \(H_{1}\) cannot contain \(11\) vertices of \(C\), a contradiction.
Let us assume that \(H_{1}\) contains \(c_{9}\) (and does not contain the vertices \(c_{7},\ldots,c_{15}\) or \(c_{11}\)). (The cases \(H_{1}\) contains \(c_{10}\), \(c_{12}\), \(c_{13}\), \(c_{14}\) or \(c_{15}\) are similar.) By Lemma 3, \(H_{1}\) cannot contain \(c_{11}\), \(c_{13}\) or \(c_{15}\). Since \(H_{1}\) can contain the \(c_{1},\ldots,c_{4}\), \(c_{9}\), \(c_{10}\), \(c_{12}\), \(c_{14}\), \(c_{16}\), \(c_{17}\), \(H_{1}\) cannot contain \(11\) vertices of \(C\), a contradiction.
Let us assume that \(H_{1}\) contains \(c_{16}\) (and does not contain the vertices \(c_{7},\ldots,c_{14}\) or \(c_{15}\)). Since \(H_{1}\) can contain the \(c_{1},\ldots,c_{4}\), \(c_{16}\), \(c_{17}\), \(H_{1}\) cannot contain \(11\) vertices of \(C\), a contradiction.
Let us assume that \(H_{1}\) contains \(c_{17}\) (and does not contain the vertices \(c_{7},\ldots,c_{15}\) or \(c_{16}\)). Since \(H_{1}\) can contain the \(c_{1},\ldots,c_{4}\), \(c_{17}\), \(H_{1}\) cannot contain \(11\) vertices of \(C\), a contradiction.
**Subcase 2.4.** The homothetic copy \(H_{1}\) contains exactly two vertices from \(S^{5}\).
Without loss of generality it may be assumed that \(H_{1}\) contains \(c_{2}\) and \(c_{3}\). By Lemma 3, \(H_{1}\) cannot contain the vertices \(c_{20},\ldots,c_{26}\).
Let us assume that \(H_{1}\) contains \(c_{7}\). By Lemma 3, \(H_{1}\) cannot contain \(c_{14}\), \(c_{15}\) or \(c_{16}\). By Lemma 3, \(H_{1}\) cannot contain \(c_{8}\) and \(c_{12}\) in the same time. By Lemma 3, \(H_{1}\) cannot contain \(c_{9}\) and \(c_{13}\) in the same time. By Lemma 3, \(H_{1}\) cannot contain \(c_{10}\) and \(c_{11}\) in the same time. Thus \(H_{1}\) cannot contain \(11\) vertices of \(C\), a contradiction.
Let us assume that \(H_{1}\) contains \(c_{8}\) (and does not contain the vertex \(c_{7}\)). (The cases \(H_{1}\) contains \(c_{9}\), \(c_{10}\), \(c_{11}\), \(c_{12}\) or \(c_{13}\) are similar.) By Lemma 3, \(H_{1}\) cannot contain \(c_{9}\) and \(c_{11}\) in the same time. By Lemma 3, \(H_{1}\) cannot contain \(c_{10}\) and \(c_{14}\) in the same time. Thus \(H_{1}\) cannot contain \(11\) vertices of \(C\), a contradiction.
Let us assume that \(H_{1}\) contains \(c_{14}\) (and does not contain the vertices \(c_{7},\ldots,c_{12}\) or \(c_{13}\)). (The cases \(H_{1}\) contains \(c_{15}\) or \(c_{16}\) are similar.) Since \(H_{1}\) can contain \(c_{1}\), \(c_{2}\), \(c_{3}\), \(c_{14}\), \(c_{19}\), \(H_{1}\) cannot contain \(11\) vertices of \(C\), a contradiction.
Let us assume that \(H_{1}\) contains \(c_{17}\) (and does not contain the vertices \(c_{7},\ldots,c_{15}\) or \(c_{16}\)).
(The cases \(H_{1}\) contains \(c_{18}\) or \(c_{19}\) are similar.) Since \(H_{1}\) can contain the \(c_{1}\), \(c_{2}\), \(c_{3}\), \(c_{17}\), \(c_{18}\), \(c_{19}\), \(H_{1}\) cannot contain \(11\) vertices of \(C\), a contradiction.
**Subcase 2.5.** The homothetic copy \(H_{1}\) contains exactly one vertex from \(S^{5}\).
Without loss of generality it may be assumed that \(H_{1}\) contains \(c_{2}\). By Lemma 3, \(H_{1}\) cannot contain the vertices \(c_{23},\ldots,c_{26}\).
Let us assume that \(H_{1}\) contains \(c_{7}\). (The cases \(H_{1}\) contains \(c_{8}\), \(c_{9}\) or \(c_{10}\) are similar.) By Lemma 3, \(H_{1}\) cannot contain \(c_{14}\), \(c_{15}\) or \(c_{16}\). By Lemma 3, \(H_{1}\) cannot contain \(c_{8}\) and \(c_{12}\) in the same time. By Lemma 3, \(H_{1}\) cannot contain \(c_{9}\) and \(c_{13}\) in the same time. By Lemma 3, \(H_{1}\) cannot contain \(c_{10}\) and \(c_{11}\) in the same time. By Lemma 3, \(H_{1}\) cannot contain \(c_{17}\) and \(c_{22}\) in the same time. By Lemma 3, \(H_{1}\) cannot contain \(c_{18}\) and \(c_{21}\) in the same time. Thus \(H_{1}\) cannot contain \(11\) vertices of \(C\), a contradiction.
Let us assume that \(H_{1}\) contains \(c_{11}\) (and does not contain the vertices \(c_{7},\ldots,c_{9}\) or \(c_{10}\)). (The cases \(H_{1}\) contains \(c_{12}\), \(c_{13}\), \(c_{14}\), \(c_{15}\), \(c_{16}\) are similar.) By Lemma 3, \(H_{1}\) cannot contain \(c_{16}\). By Lemma 2, \(H_{1}\) cannot contain \(c_{22}\). By Lemma 2, \(H_{1}\) cannot contain \(c_{12}\) and \(c_{21}\) in the same time. By Lemma 2, \(H_{1}\) cannot contain \(c_{13}\) and \(c_{20}\) in the same time. Thus \(H_{1}\) cannot contain \(11\) vertices of \(C\), a contradiction.
Let us assume that \(H_{1}\) contains \(c_{17}\) (and does not contain the vertices \(c_{7},\ldots,c_{15}\) or \(c_{16}\)). (The cases \(H_{1}\) contains \(c_{18},\ldots,c_{22}\) are similar.) Since \(H_{1}\) can contain the \(c_{1}\), \(c_{2}\), \(c_{17},\ldots,c_{22}\), \(H_{1}\) cannot contain \(11\) vertices of \(C\), a contradiction.
**Subcase 2.6.** The homothetic copy \(H_{1}\) does not contain any vertex from \(S^{5}\).
Let us assume that \(H_{1}\) contains \(c_{7}\). (The cases \(H_{1}\) contains \(c_{8},\ldots,c_{16}\) are similar.) By Lemma 3, \(H_{1}\) cannot contain \(c_{14}\), \(c_{15}\) or \(c_{16}\). By Lemma 2, \(H_{1}\) cannot contain \(c_{26}\). By Lemma 3, \(H_{1}\) cannot contain \(c_{8}\) and \(c_{12}\) in the same time. By Lemma 3, \(H_{1}\) cannot contain \(c_{9}\) and \(c_{13}\) in the same time. By Lemma 3, \(H_{1}\) cannot contain \(c_{10}\) and \(c_{11}\) in the same time. By Lemma 3, \(H_{1}\) cannot contain \(c_{17}\) and \(c_{25}\) in the same time. By Lemma 3, \(H_{1}\) cannot contain \(c_{18}\) and \(c_{24}\) in the same time. By Lemma 3, \(H_{1}\) cannot contain \(c_{19}\) and \(c_{20}\) in the same time. By Lemma 3, \(H_{1}\) cannot contain \(c_{21}\) and \(c_{23}\) in the same time. Thus \(H_{1}\) cannot contain \(11\) vertices of \(C\), a contradiction.
Let us assume that \(H_{1}\) contains \(c_{17}\) (and does not contain the vertices \(c_{7},\ldots,c_{15}\) or \(c_{16}\)). (The cases \(H_{1}\) contains \(c_{18},\ldots,c_{26}\) are similar.) By Lemma 3, \(H_{1}\) cannot contain \(c_{22}\), \(c_{25}\) or \(c_{26}\). Since \(H_{1}\) can contain the \(c_{1}\), \(c_{2}\), \(c_{17}\), \(c_{18}\), \(c_{19}\), \(c_{20}\), \(c_{21}\), \(c_{23}\), \(c_{24}\), \(H_{1}\) cannot contain \(11\) vertices of \(C\), a contradiction. \(\Box\)
Since \(\gamma_{2d+3}^{d}(K^{d})\leq\gamma_{2d+2}^{d}(K^{d})\) and \(\gamma_{2d+2}^{d}(K^{d})=\frac{d-1}{d}\) for \(d\geq 6\), \(\gamma_{2d+3}^{d}(K^{d})\leq\frac{d-1}{d}\) for \(d\geq 6\).
**Conjecture 2**.: _If \(m=2d+3\), then \(\gamma_{m}^{d}(K^{d})=\frac{d-1}{d}\) for \(d\geq 6\)._
**Theorem 5**.: _If \(m=2d+4\), then \(\gamma_{m}^{d}(K^{d})=\frac{2d-3}{2d-1}\) for \(d=4\) and \(\gamma_{m}^{d}(K^{d})\leq\frac{2d-3}{2d-1}\) for \(d\geq 5\)._
Proof.: First it will be proved that \(\gamma_{m}^{d}(K^{d})\leq\frac{2d-3}{2d-1}\) for \(d\geq 4\). Let \(\lambda=\frac{2d-3}{2d-1}\) and consider the following homothetic copies of \(K^{d}\). \(K_{1}^{d}=\lambda K^{d}+\big{[}\frac{2}{2d-1},0,0,\ldots,0\big{]}^{T}\), \(K_{2}^{d}=\lambda K^{d}+\big{[}0,\frac{2}{2d-1},0,\ldots,0\big{]}^{T}\), \(\ldots\), \(K_{d}^{d}=\lambda K^{d}+\big{[}0,\ldots,0,\frac{2}{2d-1}\big{]}^{T}\), \(K_{d+1}^{d}=\lambda K^{d}+\big{[}-\frac{2}{2d-1},0,0,\ldots,0\big{]}^{T}\), \(\ldots\), \(K_{2d}^{d}=\lambda K^{d}+\big{[}0,\ldots,0,-\frac{2}{2d-1}\big{]}^{T}\), \(K_{2d+1}^{d}=\lambda K^{d}+\big{[}\frac{1.5}{2d-1},\frac{1.5}{2d-1},0,0,\ldots,0\big{]}^{T}\), \(K_{2d+2}^{d}=\lambda K^{d}+\big{[}-\frac{1.5}{2d-1},\frac{1.5}{2d-1},0,0,\ldots,0\big{]}^{T}\), \(K_{2d+3}^{d}=\lambda K^{d}+\big{[}-\frac{1.5}{2d-1},-\frac{1.5}{2d-1},0,0,\ldots,0\big{]}^{T}\), \(K_{2d+4}^{d}=\lambda K^{d}+\big{[}\frac{1.5}{2d-1},-\frac{1.5}{2d-1},0,0,\ldots,0\big{]}^{T}\). Now it will be proved that the surface of \(K^{d}\) is covered by the homothetic copies \(K_{1}^{d},\ldots,K_{2d+4}^{d}\). Consider a facet \(F\) of \(K^{d}\). Without loss of generality it can be assumed that \(F\) lies on the hyperplane \(x_{1}+\ldots+x_{d}=1\). The homothetic image of \(F\) with ratio \(\lambda\) and centre \((1,0,0\ldots,0)\) (\((0,1,0\ldots,0),\ldots,(0,\ldots,0,1)\)
resp.) is covered by \(K_{1}^{d}\) (\(K_{2}^{d},\ldots,K_{d}^{d}\), resp.). Let \(n_{1}=\left(\frac{1}{2d-1},\frac{2}{2d-1},\frac{2}{2d-1},\ldots,\frac{2}{2d-1}\right)\), \(n_{2}=\left(\frac{2}{2d-1},\frac{1}{2d-1},\frac{2}{2d-1},\ldots,\frac{2}{2d-1}\right)\),..., \(n_{d}=\left(\frac{2}{2d-1},\ldots,\frac{2}{2d-1},\frac{1}{2d-1}\right)\). The convex hull of the points \(n_{1},\ldots,n_{d}\) is uncovered by \(K_{1}^{d},\ldots,K_{d}^{d}\). Since
\[\left|\left|\left(\frac{1.5}{2d-1},\frac{1.5}{2d-1},0,0\ldots,0\right)-n_{i} \right|\right|_{1}\]
\[\leq\max\left(2\frac{0.5}{2d-1}+(d-2)\frac{2}{2d-1},2\frac{0.5}{2d-1}+(d-3) \frac{2}{2d-1}+\frac{1}{2d-1}\right)=\frac{2d-3}{2d-1}=\lambda\]
for \(i=1,\ldots,d\), the points \(n_{1},\ldots,n_{d}\) are covered by \(K_{2d+1}^{d}\). Since \(K_{2d+1}^{d}\) and the convex hull of the points \(n_{1},\ldots,n_{d}\) are convex bodies, \(K_{2d+1}^{d}\) covers the convex hull of the points \(n_{1},\ldots,n_{d}\). Thus the facet \(F\) is covered by \(K_{1}^{d},\ldots,K_{d}^{d}\), \(K_{2d+1}^{d}\). Similarly any other facet of \(K^{d}\) is covered by \(K_{1}^{d},\ldots,K_{2d+4}^{d}\). Since the origin lies in each homothetic copy \(K_{1}^{d},\ldots,K_{2d+4}^{d}\), \(K^{d}\) is covered by \(K_{1}^{d},\ldots,K_{2d+4}^{d}\) and \(\gamma_{2d+4}^{d}(K^{d})\leq\frac{2d-3}{2d-1}\) for \(d\geq 4\).
Now consider the case \(d=4\). Let it be assumed that \(\gamma_{12}^{d}(K^{d})=\mu<\frac{5}{7}\). Let \(n_{1,1}=(1/7,2/7,2/7,2/7)\), \(n_{1,2}=(2/7,1/7,2/7,2/7)\), \(n_{1,3}=(2/7,2/7,1/7,2/7)\), \(n_{1,4}=(2/7,\)\(2/7,2/7,1/7)\), \(n_{2,1}=(-1/7,2/7,2/7,2/7)\), \(\ldots\), \(n_{10,3}=(2/7,-2/7,1/7,-2/7)\), \(\ldots\), \(n_{12,3}=(-2/7,-2/7,-1/7,2/7)\), \(\ldots\), \(n_{14,1}=(-1/7,2/7,-2/7,-2/7)\)\(\ldots\), \(n_{16,4}=(-2/7,-2/7,\)\(-2/7,-1/7)\). By Lemma 1, the \(8\) homothetic copies of \(K^{4}\) with ratio \(\mu\) can cover the \(8\) vertices of \(K^{4}\) and the points \(n_{1,1},\ldots,n_{16,4}\) are uncovered by these homothetic copies. It will be proved that four homothetic copies of \(K^{4}\) with ratio \(\mu\) cannot cover the points \(n_{1,1},\ldots,n_{16,4}\). Since \(||n_{1,1}-n_{10,3}||_{1}=||n_{1,1}-n_{12,3}||_{1}=||n_{1,1}-n_{14,1}||_{1}=||n _{10,3}-n_{12,3}||_{1}=||n_{10,3}-n_{14,1}||_{1}=||n_{12,3}-n_{14,1}||_{1}= \frac{10}{7}=2\lambda\), the points \(n_{1,1}\), \(n_{10,3}\), \(n_{12,3}\) and \(n_{14,1}\) cannot be covered by four homothetic copies of \(K^{4}\) with ratio \(\mu\), a contradiction.
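The two numerical facts used in the proof above for \(d=4\) (the covering estimate for \(K_{2d+1}^{d}\) and the pairwise 1-norm distances \(10/7=2\lambda\) of the four points in the lower bound) can be checked mechanically; a small verification sketch:

```python
from itertools import combinations
from fractions import Fraction

def l1(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

d = 4
lam = Fraction(2*d - 3, 2*d - 1)            # 5/7
# the points n_1, ..., n_d on the facet x_1 + ... + x_d = 1 left uncovered by K_1, ..., K_d
n = []
for i in range(d):
    pt = [Fraction(2, 2*d - 1)] * d
    pt[i] = Fraction(1, 2*d - 1)
    n.append(tuple(pt))
centre = (Fraction(3, 2*(2*d - 1)), Fraction(3, 2*(2*d - 1))) + (Fraction(0),) * (d - 2)
assert all(l1(centre, p) <= lam for p in n)   # K_{2d+1}^d covers n_1, ..., n_d

# lower bound for d = 4: four points pairwise at L1 distance 10/7 = 2*lambda
pts = [(Fraction(1, 7), Fraction(2, 7), Fraction(2, 7), Fraction(2, 7)),     # n_{1,1}
       (Fraction(2, 7), Fraction(-2, 7), Fraction(1, 7), Fraction(-2, 7)),   # n_{10,3}
       (Fraction(-2, 7), Fraction(-2, 7), Fraction(-1, 7), Fraction(2, 7)),  # n_{12,3}
       (Fraction(-1, 7), Fraction(2, 7), Fraction(-2, 7), Fraction(-2, 7))]  # n_{14,1}
assert all(l1(p, q) == 2*lam for p, q in combinations(pts, 2))
print("checks passed")
```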
**Conjecture 3**.: _If \(m=2d+4\), then \(\gamma_{m}^{d}(K^{d})=\frac{2d-3}{2d-1}\) for \(d\geq 5\)._
|
2305.19986 | Tripotential MOND theories | I present a new class of nonrelativistic, modified-gravity MOND theories. The
three gravitational degrees of freedom of these ``TRIMOND'' theories are the
MOND potential and two auxiliary potentials, one of which emerges as the
Newtonian potential. Their Lagrangians involve a function of three acceleration
variables -- the gradients of the potentials. So, the transition from the
Newtonian to the MOND regime is rather richer than in the aquadratic-Lagrangian
theory (AQUAL) and the quasilinear MOND theory (QUMOND), which are special
cases of TRIMOND, each defined by a Lagrangian function of a single variable.
In particular, unlike AQUAL and QUMOND whose deep-MOND limit (DML) is fully
dictated by the required scale invariance, here, the scale-invariant DML still
requires specifying a function of two variables. For one-dimensional (e.g.,
spherical) mass distributions, in all TRIMOND theories the MOND acceleration is
a (theory specific, but system independent) function of the Newtonian
acceleration; their variety appears in nonsymmetric situations. Also, they all
make the salient, primary MOND predictions. For example, they predict the same
DML virial relation as AQUAL and QUMOND, and thus the same DML $M-\sigma$
relation, and the same DML two-body force. Yet they can differ materially on
secondary predictions. Such TRIMOND theories may be the nonrelativistic limits
of scalar-bimetric relativistic formulations of MOND, such as BIMOND with an
added scalar. | Mordehai Milgrom | 2023-05-31T16:09:34Z | http://arxiv.org/abs/2305.19986v3 | # Tripotential MOND theories
###### Abstract
I present a new class of nonrelativistic, modified-gravity MOND theories. The three gravitational degrees of freedom of these "TRIMOND" theories are the MOND potential and two auxiliary potentials, one of which emerges as the Newtonian potential. Their Lagrangians involve a function of three acceleration variables - the gradients of the potentials. So, the transition from the Newtonian to the MOND regime is rather richer than in the aquadratic-Lagrangian theory (AQUAL) and the quasilinear MOND theory (QUMOND), which are special cases of TRIMOND, each defined by a Lagrangian function of a single variable. In particular, unlike AQUAL and QUMOND whose deep-MOND limit (DML) is fully dictated by the required scale invariance, here, the scale-invariant DML still requires specifying a function of two variables. For one-dimensional (e.g., spherical) mass distributions, in all TRIMOND theories the MOND acceleration is a (theory specific, but system independent) function of the Newtonian acceleration; their variety appears in nonsymmetric situations. Also, they all make the salient, primary MOND predictions. For example, they predict the same DML virial relation as AQUAL and QUMOND, and thus the same DML \(M-\sigma\) relation, and the same DML two-body force. Yet they can differ materially on secondary predictions. Such TRIMOND theories may be the nonrelativistic limits of scalar-bimetric relativistic formulations of MOND, such as BIMOND with an added scalar.
## I Introduction
The "quadratic-Lagrangian" (AQUAL) theory [1] was the first full-fledged, nonrelativistic theory that embodies the basic tenets of MOND [2]. (For reviews of MOND see Refs. [3; 4; 5; 6; 7; 8].) AQUAL modifies the Poisson equation for the gravitational potential of a given distribution of masses, without modifying the equation of motion of the masses; it is thus a "modified-gravity" (MG) theory. Some twenty five years later another full-fledged nonrelativistic, MG Lagrangian theory for MOND has been proposed - the quasilinear MOND theory (QUMOND) [9]. These two theories - disparate in their amenability to solution, but yielding very similar prediction - have become the workhorses for studying various aspects of MOND, including many detailed studies that involve numerical solutions of the field equations of these theories (see reviews [3; 4; 5; 6; 7; 8]).
AQUAL and QUMOND satisfy the basic tenets of MOND - the involvement of one new, acceleration constant, \(a_{0}\); a Newtonian limit at high accelerations (\(a\gg a_{0}\), or \(a_{0}\to 0\)); and scale invariance in the low-acceleration, deep-MOND limit (DML), \(a_{0}\to\infty\). They thus share all the primary predictions of MOND - those that follow from only the basic tenets [10]. Examples of such predictions are, the asymptotic flatness of rotation curves \(V(r)\to V_{\infty}\); the relation between the total mass, \(M\), and the asymptotic speed: \(MGa_{0}=V_{\infty}^{4}\), (underlying the "baryonic Tully-Fisher relation"); a good approximation to the full rotation curves from the mass distribution; enhanced stability of disc galaxies; a relation between the total mass of a system in the DML and its mass-average, root-mean-squared velocity dispersion, \(\sigma\): \(MGa_{0}=\xi\sigma^{4}\), with \(\xi\sim 1\); and the qualitative prediction of an "external-field effect" (EFE).
Perhaps as a result of these two theories remaining the main tools for deriving quantitative results in MOND, the notion may have taken root, that they represent MOND in its entirety (in the nonrelativistic regime).
It has, however, been stressed repeatedly that there may well be other theories that satisfy the basic MOND tenets, and thus share its primary predictions, but that may differ from AQUAL and QUMOND, and among themselves, in making other, secondary predictions. Among such second-tier predictions one may count some fine details of the rotation curves (e.g., Refs. [11; 12; 13; 14; 15]); the exact value of \(\xi\) in the \(M\ -\sigma\) relation, and its dependence on dimensionless attributes of the system; the exact dependence of the effective two-body force on the masses; the exact nature and strength of the EFE - such as the exact effects of the galactic field on the dynamics of the solar system or of wide binaries in the galaxy; etc.
One possible avenue for looking for such other MOND formulations is that of so-called "modified-inertia" theories, which as described, e.g., in the recent Ref. [16], can make very different secondary predictions than AQUAL and QUMOND.
But secondary predictions can differ also among MG theories, as was demonstrated recently in Ref. [17].
AQUAL and QUMOND are quite restricted in their scope in the following sense: They each involves one "interpolating function" of a single acceleration variable that is introduced, by hand, in their Lagrangian. As a result, all the (secondary) predictions of these theories involve, and are dictated, by this one function. For example, this function dictates the prediction of the "mass-discrepancy-acceleration relation" (MDAR also known as the RAR) and can be extracted from the observed relation. It then largely determines all the other predictions of these theories. This is much too restrictive, and is not a situation that characterizes other, well known modifications of Newtonian dynamics.
For example, in quantum theory we do not introduce an interpolation function at the basic level. We do encounter different "interpolating functions" between the classical and the quantum regimes for different phenomena: The black-body function, the expression for the specific heat of solids, and the barrier penetration probability are only a few examples. The same is true in relativity vs Newtonian dynamics.
Likewise, one expects that in a more fundamental MOND theory, interpolating functions will not be put in by hand in the fundamental equations. They will be derivable from the theory, and will also differ among the different applications of the theory.
In distancing ourselves at least one step from the restricted AQUAL/QUMOND, we should realize that there are MOND theories whose Newtonian-DML interpolation scheme is richer, even if it is still implanted at the basic level of the theory.
Here I present a new class of nonrelativistic, MG formulations of MOND that involve three gravitational potentials, one that dictates the acceleration of masses - and is thus the "observed" MOND potential - with two auxiliary ones. The interpolating function depends on the gradients of the last two and is a function of three scalar, acceleration variables, offering a rather richer scope than AQUAL/QUMOND, which are special cases in this new class.
In Sec. II, I describe the TRIMOND Lagrangian and the resulting field equations for the three potentials, and consider their high- and low-acceleration limits. In Sec. III, I describe some of the properties of the solutions and various predictions of the TRIMOND theories. Section IV is a brief discussion of potential relativistic extensions.
## II Tripotential formulations
The tripotential theories involve three potentials as the gravitational degrees of freedom, with one of them, the MOND potential \(\phi\), coupling to matter directly. The other two are called \(\varphi\) and \(\psi\). The Lagrangian density of these theories is of the form
\[{\cal L}={\cal L}_{G}+\rho(\frac{1}{2}{\bf v}^{2}-\phi), \tag{1}\]
where its gravitational part is
\[{\cal L}_{G}=-\frac{1}{8\pi G}[2\vec{\nabla}\phi\cdot\vec{\nabla}\psi-a_{0}^{ 2}{\cal F}(x,y,z)], \tag{2}\]
with the scalar, acceleration variables
\[x\equiv(\vec{\nabla}\psi)^{2}/a_{0}^{2},\quad y\equiv(\vec{\nabla}\varphi)^{2 }/a_{0}^{2},\quad z=2\vec{\nabla}\psi\cdot\vec{\nabla}\varphi/a_{0}^{2}, \tag{3}\]
and \({\cal F}\) is a dimensionless function satisfying the basic tenets of MOND (see Secs. II.2 and II.3).
Variation over the particle degrees of freedom gives
\[\ddot{\bf r}_{i}=-\vec{\nabla}\phi({\bf r}_{i}); \tag{4}\]
so \(\phi\) is the (MOND) gravitational potential, which dictates particle accelerations.
Variation over \(\phi\) gives
\[\Delta\psi=4\pi G\rho; \tag{5}\]
so, \(\psi\) equals the Newtonian potential sourced by \(\rho\). (As usual, we seek solutions for which the three gradients vanish at infinity for isolated systems.)
Varying over \(\varphi\) gives
\[\vec{\nabla}\cdot({\cal F}_{y}\vec{\nabla}\varphi)+\vec{\nabla}\cdot({\cal F} _{z}\vec{\nabla}\psi)=0. \tag{6}\]
Varying over \(\psi\) gives
\[\Delta\phi=\vec{\nabla}\cdot({\cal F}_{x}\vec{\nabla}\psi)+\vec{\nabla}\cdot({\cal F}_{z}\vec{\nabla}\varphi)\equiv 4\pi G\hat{\rho}. \tag{7}\]
After \(\psi\) is solved for from Eq. (5), it is substituted in Eq. (6), which becomes a nonlinear, second order equation in \(\varphi\), with coefficients that depend on position through \(\vec{\nabla}\psi({\bf r})\). The choice of \({\cal F}\) should be such that Eq. (6) is elliptic, and then it has a unique solution for \(\varphi\) with our boundary conditions. Then, \(\varphi\) and \(\psi\) are substituted in the
right-hand side of Eq. (7) - a Poisson equation for \(\phi\) - to get the effective density, \(\hat{\rho}\), that sources it. The reason that this successive solution scheme is possible is that \(\phi\) and \(\varphi\) are not coupled directly.
As in AQUAL, the nonlinearity of Eq. (6) involves functions of only up to the first derivatives, the second derivatives appearing in a linear fashion with coefficients that are functions of position and the first derivatives1. Thus, after the substitution of \(\vec{\nabla}\psi\), equation (6) is of the form
Footnote 1: Such equations are called “quasilinear” in the mathematical literature, to be distinguished from what we call “quasilinear” in describing QUMOND.
\[a^{ij}({\bf r},\vec{\nabla}\varphi)\varphi_{,i,j}+b({\bf r},\vec{\nabla}\varphi )=0. \tag{8}\]
Then, the ellipticity condition is that the matrix \(a^{ij}\) has a definite sign [19].
The solution scheme is rather simple in the case where \({\cal F}_{z}=\epsilon\) is a constant. Eqs. (6)(7) can then be written as
\[\vec{\nabla}\cdot({\cal F}_{y}\vec{\nabla}\varphi)=-4\pi G\epsilon\rho. \tag{9}\]
\[\Delta\phi=\vec{\nabla}\cdot({\cal F}_{x}\vec{\nabla}\psi)+\epsilon\Delta\varphi. \tag{10}\]
Equation (5) is solved for \(\psi\), which is then substituted in Eq. (9), to get an AQUAL-type equation of the form \(\vec{\nabla}\cdot[\tilde{\mu}(|\vec{\nabla}\varphi|,{\bf r})\vec{\nabla}\varphi ]=-4\pi G\epsilon\rho\) (with the added dependence of \(\tilde{\mu}\) on position, which does not complicate the solution). Substituting the solutions for \(\psi\) and \(\varphi\) in the right-hand side of Eq. (10) gives the Poisson equation for the MOND potential.
In this case, the ellipticity condition for Eq. (9) can be shown to be that \(y{\cal F}_{y}\) is a monotonic function of \(y\).
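A minimal numerical sketch of this successive solution scheme in spherical symmetry, for the special case \({\cal F}_{x}=0\), \({\cal F}_{z}=\epsilon=1\) (which, per the next subsection, is the AQUAL-like case). The stripped radial forms assumed here are \({\cal F}_{y}\,g_{\varphi}=-\epsilon\,g_{{}_{N}}\) from Eq. (9) and \(g=\epsilon\,g_{\varphi}\) from Eq. (10); the interpolating function \({\cal F}_{y}(y)=-\sqrt{y}/(1+\sqrt{y})\) is an assumed toy choice with the required limits (\({\cal F}_{y}\to-1=-\epsilon^{2}\) for \(y\gg 1\), \({\cal F}_{y}\to-\sqrt{y}\) for \(y\ll 1\)), not one advocated in the text.

```python
import numpy as np
from scipy.optimize import brentq

a0 = 1.0
eps = 1.0

def F_y(y):
    # assumed toy interpolating function with the required high- and low-acceleration limits
    return -np.sqrt(y) / (1.0 + np.sqrt(y))

def mond_acceleration(gN):
    # stripped (radial) form of Eq. (9):  F_y(y) * g_phi + eps * gN = 0,  y = g_phi^2 / a0^2
    f = lambda gphi: F_y((gphi / a0)**2) * gphi + eps * gN
    gphi = brentq(f, 1e-12, 1e6)
    # stripped form of Eq. (10) with F_x = 0:  g = eps * g_phi
    return eps * gphi

for gN in [1e-3, 1e-2, 1e-1, 1.0, 10.0]:
    g = mond_acceleration(gN)
    print(f"gN={gN:8.3g}  g={g:8.3g}  nu=g/gN={g/gN:6.3f}")
```

As expected, the sketch recovers \(g\to g_{{}_{N}}\) at high accelerations and \(g\to\sqrt{g_{{}_{N}}a_{0}}\) in the DML.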
### AQUAL and QUMOND as special cases
AQUAL is a special case of the tripotential class, gotten by putting \({\cal F}_{x}=0\), \({\cal F}_{z}=\epsilon\) a constant, so \({\cal F}=\epsilon z+\tilde{\cal F}(y)\), with \(\tilde{\cal F}(y)\) having the appropriate limits. With these choices we have \(\phi=\epsilon\varphi\), while \(\varphi\) satisfies an AQUAL equation
\[\vec{\nabla}\cdot(\tilde{\cal F}^{\prime}\vec{\nabla}\varphi)=-4\pi G\epsilon\rho. \tag{11}\]
In the high-acceleration limit \(\tilde{\cal F}^{\prime}\rightarrow-\epsilon^{2}\), and for small values of the argument \(\tilde{\cal F}\rightarrow-(2/3)\epsilon^{3}y^{3/2}\).
QUMOND is a special case gotten for the choice \({\cal F}_{y}={\cal F}_{z}=0\), and \({\cal F}(x)\) having the appropriate low- and high-acceleration limits. For these, Eq. (6) is satisfied trivially, and Eq. (7) becomes
\[\Delta\phi=\vec{\nabla}\cdot({\cal F}_{x}\vec{\nabla}\psi)\equiv 4\pi G\hat{\rho}, \tag{12}\]
which is the QUMOND field equation.2
Footnote 2: Remember that the ellipticity condition for AQUAL is that \(X\mu(X)\) be monotonic, and that the QUMOND equations, which require solving only the Poisson equation, are elliptic.
### Newtonian limit
To recover the Newtonian limit when \(a_{0}\to 0\) - one of the basic MOND axioms - we require that in this limit \({\cal F}_{z}\rightarrow\epsilon\), \({\cal F}_{y}\rightarrow\eta\), and \({\cal F}_{x}\rightarrow\beta\), all constants, with \(\beta-\epsilon^{2}/\eta=1\). This can be seen from the field equations, or by noting that with these choices the Lagrangian can be written as
\[{\cal L}=-\frac{1}{8\pi G}[(\vec{\nabla}\phi)^{2}-(\vec{\nabla}\theta)^{2}-( \vec{\nabla}\zeta)^{2}]+\rho(\frac{1}{2}{\bf v}^{2}-\phi), \tag{13}\]
where, \(\theta=\phi-\psi\), and \(\zeta=\epsilon\eta^{-1/2}\psi+\eta^{1/2}\varphi\), which is the Poisson Lagrangian for \(\phi\) (and implying the irrelevant \(\psi=\phi\), and \(\zeta=0\)).
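A quick symbolic check of this reduction (per gradient component, using the stated condition \(\beta-\epsilon^{2}/\eta=1\)); the snippet is only a consistency check, not part of the derivation:

```python
import sympy as sp

# per-component stand-ins for grad(phi), grad(psi), grad(varphi)
u, v, w, eps, eta = sp.symbols('u v w epsilon eta')
beta = 1 + eps**2 / eta                                 # the Newtonian-limit condition
lhs = 2*u*v - (beta*v**2 + eta*w**2 + 2*eps*v*w)        # 2 grad(phi).grad(psi) - a0^2 F, F = beta*x + eta*y + eps*z
theta = u - v                                           # grad(theta) = grad(phi - psi)
zeta = eps*v/sp.sqrt(eta) + sp.sqrt(eta)*w              # grad(zeta)
rhs = u**2 - theta**2 - zeta**2                         # the quadratic form of Eq. (13)
print(sp.simplify(lhs - rhs))                           # 0
```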
### The deep-MOND limit
Another basic MOND tenet is that MOND gravity becomes space-time scale invariant in the limit \(a_{0}\to\infty\) (while \(G\to 0\), with \({\cal A}_{0}=Ga_{0}\) fixed) [18].
In general, scale invariance of a theory means that we can assign to each degree of freedom, and each independent variable - such as time and positions - \({\cal U}_{i}\), a scaling dimension \(\alpha_{i}\), such that the theory is invariant under \({\cal U}_{i}\to\lambda^{\alpha_{i}}{\cal U}_{i}\)3.
Footnote 3: The scaling in not necessarily related to the \([\ell][t][m]\) dimensions.
For MOND, we specifically dictate that the DML is space-time scale invariant, namely, that the scaling dimensions of length and time are the same (and can be taken as one). It is also dictated that masses do not scale (i.e., have scaling dimension zero)4. So, in our context, a scaling transformation is \((t,{\bf r},m)\to(\lambda t,\lambda{\bf r},m)\), and to ensure scale invariance, all terms in the DML of the Lagrangian have to transform in the same way under such scaling.
Footnote 4: When one speaks of scale invariance in the context of quantum theories, one requires, in contradistinction to MOND, that masses have scaling dimension \(-1\). This has to do with dimensions of the Planck constant.
The last term of Eq. (1) then dictates that the scaling dimension of \(\phi\) is zero, since velocities do not scale. Since masses do not scale, the scaling dimension of the baryon density, \(\rho\), has to be \(-3\). This then dictates that the scaling dimensions of all the terms in the DML of \({\cal L}\) have to be \(-3\). The first term in \({\cal L}_{{}_{C}}\) cannot be cancelled by terms in \({\cal F}\); so, its having scaling dimension \(-3\) dictates that \(\psi\) has scaling dimension \(-1\) (which should, of course, agree with \(\psi\) being the Newtonian potential, scaling as \(m/r\)).
Now, \(\varphi\) is the only remaining degree of freedom, whose scaling dimension is not dictated by the MOND axioms and the explicit terms in the TRIMOND Lagrangian. To ensure MOND scale invariance of TRIMOND, the DML of \({\cal F}\), call it \({\cal F}_{{}_{D}}\), must also have scaling dimension \(-3\) for some assignment of scaling dimension, \(\alpha\), to \(\varphi\). This \(\alpha\) depends on the theory; examples of how it is determined are given in Sec. III.1.1.
Given this \(\alpha\) and with the above scaling dimensions, the arguments of \({\cal F}\) scale as \(x,y,z\to\lambda^{-4}x,\lambda^{2(\alpha-1)}y,\lambda^{\alpha-3}z\). Thus, to ensure MOND scale invariance, \({\cal F}_{{}_{D}}\) should be such that there is a choice of \(\alpha\) for which
\[{\cal F}_{{}_{D}}(\lambda^{-4}x,\lambda^{2\alpha-2}y,\lambda^{\alpha-3}z)= \lambda^{-3}{\cal F}_{{}_{D}}(x,y,z). \tag{14}\]
By taking, for example, \(\lambda=x^{1/4}\), we see that \({\cal F}_{{}_{D}}\) can be written as
\[{\cal F}_{{}_{D}}(x,y,z)=x^{3/4}{\cal F}_{{}_{D}}[1,yx^{(\alpha-1)/2},zx^{( \alpha-3)/4}]\equiv x^{3/4}\bar{\cal F}_{{}_{D}}[yx^{(\alpha-1)/2},zx^{(\alpha -3)/4}]. \tag{15}\]
Such functions are the most general that ensure scale invariance.
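As a quick sanity check, the form of Eq. (15) can be verified numerically to satisfy the scale-invariance condition (14) for an arbitrary \(\bar{\cal F}_{{}_{D}}\) and an arbitrary scaling dimension \(\alpha\). The short sketch below is an illustration added here (the test function and numerical values are arbitrary choices, not part of the original derivation):

```python
# Numerical sanity check that the DML form of Eq. (15) obeys the
# scale-invariance condition of Eq. (14); Fbar and all test values
# below are arbitrary choices made only for this illustration.
import numpy as np

def Fbar(u, w):
    # any smooth function of the two scale-invariant combinations works
    return np.sin(u) + u * w + w**2

def F_D(x, y, z, alpha):
    # Eq. (15): F_D = x^(3/4) * Fbar(y*x^((alpha-1)/2), z*x^((alpha-3)/4))
    return x**0.75 * Fbar(y * x**((alpha - 1) / 2), z * x**((alpha - 3) / 4))

x, y, z = 1.3, 0.7, 2.1        # arbitrary positive test values
alpha, lam = -1.3, 1.7         # arbitrary scaling dimension and scale factor

lhs = F_D(lam**-4 * x, lam**(2 * alpha - 2) * y, lam**(alpha - 3) * z, alpha)
rhs = lam**-3 * F_D(x, y, z, alpha)
print(np.isclose(lhs, rhs))    # expected: True
```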
So, unlike in AQUAL or QUMOND, where there is no freedom to dictate an interpolating function in the deep-MOND limit, here the DML is described by a function of two variables.
## III Some consequences and predictions
### One-dimensional configurations
As in AQUAL and QUMOND, for systems of one-dimensional symmetry (planar, cylindrical, or spherical) the MOND acceleration \(g\) is a system-independent (but theory specific) function of the Newtonian acceleration, \(g_{{}_{N}}\):
\[g=\nu(g_{{}_{N}}/a_{0})g_{{}_{N}}. \tag{16}\]
To see this, note that in this one-dimensional case we can strip the divergences in the field Eqs. (6) and (7) (e.g., by applying Gauss's theorem) and replace the vectorial relations by scalar ones, with the absolute values of the respective accelerations replacing the potential gradients: \(\vec{\nabla}\psi=g_{{}_{N}}{\bf e},\ \vec{\nabla}\varphi=g_{{}_{\varphi}}{\bf e},\ \vec{\nabla}\phi=g{\bf e}\) (where \({\bf e}\) is the radial unit vector). Equation (6) then reads (putting \(a_{0}=1\) for the nonce)
\[{\cal F}_{y}(g_{{}_{N}}^{2},g_{{}_{\varphi}}^{2},2g_{{}_{N}}g_{{}_{\varphi}})g_{{}_{\varphi}}+{\cal F}_{z}(g_{{}_{N}}^{2},g_{{}_{\varphi}}^{2},2g_{{}_{N}}g_{{}_{\varphi}})g_{{}_{N}}=0, \tag{17}\]
from which \(g_{{}_{\varphi}}\) can be solved for as a function of \(g_{{}_{N}}\). \({\cal F}\) has to be such that the solution for \(g_{{}_{\varphi}}\) exists and is unique (the ellipticity of Eq. (6) should ensure this). Equation (7) gives, in the same vein,
\[g={\cal F}_{x}(g_{{}_{N}}^{2},g_{{}_{\varphi}}^{2},2g_{{}_{N}}g_{{}_{\varphi}}) g_{{}_{N}}+{\cal F}_{z}(g_{{}_{N}}^{2},g_{{}_{\varphi}}^{2},2g_{{}_{N}}g_{{}_{ \varphi}})g_{{}_{\varphi}}. \tag{18}\]
Substituting the \(g_{{}_{\varphi}}\) values gotten from Eq. (17), we then get an equation of the form (16).
The function \(\nu\) is an example of an emerging interpolating function. For the general TRIMOND theories it is relevant only for one-dimensional configurations. It can be used as some tool for comparison with AQUAL and QUMOND, where it is the only such function that appears and that is relevant to all phenomena and configurations.
In terms of
\[{\cal F}^{*}(g_{{ N}},g_{{\varphi}})\equiv{\cal F}(g_{{ N}}^{2},g_{{ \varphi}}^{2},2g_{{ N}}g_{{ \varphi}}), \tag{19}\]
Eqs. (17) and (18) can be written as
\[\frac{\partial{\cal F}^{*}}{\partial g_{{\varphi}}}=0, \tag{20}\]
\[g=\frac{1}{2}\frac{\partial{\cal F}^{*}}{\partial g_{{ N}}}. \tag{21}\]
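To make the emergence of \(\nu\) concrete, the following short numerical sketch solves Eqs. (20) and (21) on a grid of Newtonian accelerations for the AQUAL special case discussed above (\({\cal F}=\epsilon z+\tilde{\cal F}(y)\)). The particular toy choice \(\tilde{\cal F}^{\prime}(y)=-\epsilon^{3}\sqrt{y}/(1+\epsilon\sqrt{y})\), which has the required Newtonian and deep-MOND limits (in \(a_{0}=1\) units), and all the numbers are illustrative assumptions added here, not taken from the text:

```python
# Numerical sketch of the emergent interpolating function nu of Eq. (16):
# for each g_N, solve dF*/dg_phi = 0 (Eq. (20)) for g_phi, then compute
# g = (1/2) dF*/dg_N (Eq. (21)) and nu = g/g_N.  The AQUAL-type F and the
# particular Ftilde' used here are illustrative assumptions (a_0 = 1 units).
from scipy.optimize import brentq

eps = 1.0

def dFstar_dgphi(gphi, gN):
    # F* = 2*eps*gN*gphi + Ftilde(gphi^2), with
    # Ftilde'(y) = -eps^3*sqrt(y)/(1 + eps*sqrt(y))
    return 2.0 * eps * gN + 2.0 * gphi * (-eps**3 * gphi / (1.0 + eps * gphi))

for gN in [1e-4, 1e-2, 1.0, 1e2]:
    gphi = brentq(dFstar_dgphi, 1e-12, 1e9, args=(gN,))
    g = eps * gphi                      # (1/2) dF*/dg_N for this F
    print(f"g_N = {gN:8.1e}   nu = {g / gN:10.3f}")
# deep-MOND regime: nu ~ g_N^(-1/2); Newtonian regime: nu -> 1
```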
For nonsymmetric systems - e.g., in predicting the rotation curves of disc galaxies - TRIMOND theories do differ from each other. In fact, AQUAL and QUMOND themselves differ in such predictions (e.g., Ref. [15]). But the fact that general TRIMOND theories revolve around Lagrangian functions of three variables can increase the variety in predictions. To bring out better the possible variety in behaviors for nonsymmetric systems, we can work with the acceleration variable \(w\equiv xy-z^{2}/4\) instead of with \(z\) itself. Since \(w=0\) for one-dimensional configurations, the one-dimensional "interpolating function", \(\nu\), is oblivious to the dependence of \({\cal F}\) on \(w\). See more on this in Sec. III.1.1.
#### III.1.1 The DML of one-dimensional configurations
For DML one-dimensional configurations, we can further write, using the DML form of Eq. (15),
\[{\cal F}^{*}(g_{{}_{N}},g_{{}_{\varphi}})=g_{{}_{N}}^{3/2}\bar{\cal F}_{{}_{D}}[g_{{}_{N}}^{\alpha-1}g_{{}_{\varphi}}^{2},2g_{{}_{N}}^{(\alpha-1)/2}g_{{}_{\varphi}}]\equiv g_{{}_{N}}^{3/2}f(q), \tag{22}\]
where \(q\equiv g_{{}_{\varphi}}g_{{}_{N}}^{(\alpha-1)/2}\). Equations (20) and (21) then become, respectively
\[f^{\prime}(q)=0,\ \ \ \ \ g=\frac{3}{4}g_{{}_{N}}^{1/2}f(q). \tag{23}\]
The first of Eqs. (23) tells us that its solution \(q=q_{{ 0}}\) is a universal constant of the theory. The ellipticity condition of the theory should imply, in particular, that this equation has a unique solution. Then,
\[g_{{\varphi}}=q_{{ 0}}a_{{ 0}}(g_{{ N}}/a_{{ 0}})^{(1-\alpha)/2},\ \ \ \ \ \ g=\frac{3}{4}f(q_{{ 0}})\sqrt{g_{{ N}}a_{{ 0}}}, \tag{24}\]
for all one-dimensional, DML configurations (where I reinstated \(a_{0}\)).
We see that the DML (and, in fact, the general) dependence of \(g_{{\varphi}}\) on \(g_{{ N}}\) in the one-dimensional case, is theory dependent; but, scale invariance dictates that \(g\propto\sqrt{g_{{ N}}a_{{ 0}}}\) in any MOND theory.
It is customary to take the normalization of \(a_{0}\) so that in the DML of one-dimensional systems we have \(g=\sqrt{g_{{}_{N}}a_{0}}\); so, for example, the predicted mass-asymptotic-speed relation (which underlies the baryonic Tully-Fisher relation) takes the exact form \(MGa_{0}=V_{\infty}^{4}\).
Given the interaction Lagrangian function \({\cal F}\), satisfying the requirements of the Newtonian and the deep-MOND limits, this requirement determines the normalization of the DML behavior, by adjusting the normalization of \(f(q)\) so that \(f(q_{{ 0}})=4/3\).
For the sake of demonstration, consider a TRIMOND theory whose \({\cal F}\) has the DML form
\[{\cal F}(x,y,z)\ \ \ \ \ \stackrel{a_{0}\to\infty}{\longrightarrow}\ \ \ \ {\cal F}_{{}_{D}}=\epsilon x^{\beta}z^{v}+\tilde{\cal F}(y)+\xi(4xy-z^{2})^{\gamma/2}Q[yx^{(\alpha-1)/2},zx^{(\alpha-3)/4}], \tag{25}\]
where \(\alpha\) is the scaling dimension of \(\varphi\). The scaling dimension of the first term having to be \(-3\) implies that we have to take \(\alpha=-3(1-v-4\beta/3)/v\). The scaling dimension of \(y\) is \(-2(1-\alpha)\); so we have to have \(\tilde{\cal F}(y)=-sy^{3/[2(1-\alpha)]}\).
Also, we need to take \(\gamma=3/(3-\alpha)\), so the last term has a scaling dimension \(-3\), and can be written in the standard DML form, Eq. (15): \(\xi x^{3/4}\{4yx^{(\alpha-1)/2}-[zx^{(\alpha-3)/4}]^{2}\}^{\gamma/2}Q[yx^{( \alpha-1)/2},zx^{(\alpha-3)/4}]\), making \({\cal F}_{{ D}}\) scale invariant.
For one-dimensional configurations, this last term vanishes. Proceeding then as discussed above, one finds that
\[f(q)=2^{v}\epsilon q^{v}-sq^{u}, \tag{26}\]
where \(u=3v/(3-2v-4\beta)\). Take \(v\) and \(\beta\) such that \(u>0\); thus \(u>v\). For \(v\leq 1\), the first of Eqs. (23) has a unique solution5
Footnote 5: Note that for \(v>1\), we also have the unwanted solution \(q=0\), which gives \(g=g_{\varphi}=0\); so such a theory does not make sense. We thus have to take \(v\leq 1\).
\[q_{\circ}=\left(\frac{\epsilon v2^{v}}{su}\right)^{1/(u-v)}. \tag{27}\]
The condition \(f(q_{\circ})=4/3\) then fixes \(s\) in terms of \(\epsilon\).
For the special case of AQUAL, \(\mathcal{F}\) itself has the form (25), with \(v=1\), \(\xi=0\), and \(\beta=0\). Thus, \(f(q)=2\epsilon q-sq^{3}\), \(q_{\circ}=(2\epsilon/3s)^{{}^{1/2}}\), and \(s=2\epsilon^{3}/3\).
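These AQUAL numbers are easy to verify symbolically. The short check below (an illustration added here) substitutes \(v=1\), \(\beta=0\), \(s=2\epsilon^{3}/3\), and \(q_{\circ}=(2\epsilon/3s)^{1/2}=1/\epsilon\) into Eq. (26) and confirms \(f^{\prime}(q_{\circ})=0\) and \(f(q_{\circ})=4/3\):

```python
# Symbolic verification (illustrative) of the AQUAL case of Eq. (26):
# with v = 1, beta = 0 one has u = 3, and the normalization f(q0) = 4/3
# is satisfied by s = 2*eps^3/3, q0 = (2*eps/(3*s))^(1/2) = 1/eps.
import sympy as sp

q, eps = sp.symbols('q epsilon', positive=True)
v, beta = 1, 0
u = sp.Rational(3 * v, 3 - 2 * v - 4 * beta)     # u = 3
s = 2 * eps**3 / 3                               # claimed normalization
q0 = sp.sqrt(2 * eps / (3 * s))                  # claimed extremum (= 1/eps)

f = 2**v * eps * q**v - s * q**u                 # Eq. (26)
print(sp.simplify(sp.diff(f, q).subs(q, q0)))    # expected: 0
print(sp.simplify(f.subs(q, q0)))                # expected: 4/3
```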
For any isolated system of bounded mass distribution, the asymptotic region, far outside the mass, is described by a spherical, DML solution; so the above applies to it. In particular, we have asymptotically \(g\to(MGa_{0})^{1/2}/r\).
We see that we can have different theories, with different interpolating functions, all being scale invariant in the DML, and all having the same DML MOND accelerations (though not the same \(\varphi\)) in the one-dimensional case, but not for other configurations. For example, the predicted phenomenology of one-dimensional configurations, or of the asymptotic dynamics, does not inform us on the last term in Eq. (25). And we know from our experience with numerical solutions of AQUAL that, in generic, asymmetric configurations, \(\vec{\nabla}\varphi\) and \(\vec{\nabla}\psi\) are not aligned. So, contributions to the Lagrangian in which the last term in Eq. (25) appears do not vanish, and can be important in determining the MOND accelerations. It is difficult to make more concrete, quantitative statements without numerically solving various TRIMOND theories for various configurations.
### Deep-MOND virial relation and consequences
The tripotential theories, with the MOND limit defined by Eq. (14), fall under the general class of modified-gravity (MG) MOND theories dealt with in Ref. [20]. It was shown there that such theories generally satisfy a very useful virial relation: For a deep-MOND, self-gravitating, isolated system of pointlike masses, \(m_{p}\), at positions \({\bf r}_{p}\), subject to (gravitational) forces \({\bf F}_{p}\), the following holds:
\[\sum_{p}{\bf r}_{p}\cdot{\bf F}_{p}=-(2/3)(Ga_{0})^{1/2}[(\sum_{p}m_{p})^{3/2} -\sum_{p}m_{p}^{3/2}]. \tag{28}\]
This then leads to the deep-MOND two-body force for arbitrary masses,
\[F(m_{1},m_{2},\ell)=\frac{2}{3}\frac{(a_{0}G)^{1/2}}{\ell}[(m_{1}+m_{2})^{3/2 }-m_{1}^{3/2}-m_{2}^{3/2}] \tag{29}\]
(\(\ell\) is the distance between the masses), and to a general mass-velocity-dispersion relation
\[\sigma^{2}=\frac{2}{3}(MGa_{0})^{1/2}[1-\sum_{p}(m_{p}/M)^{3/2}], \tag{30}\]
where \(\sigma^{2}=M^{-1}\sum_{p}m_{p}{\bf v}_{p}^{2}\) is the mass-weighted, three-dimensional, mean-squared velocity dispersion, and \(M=\sum_{p}m_{p}\) is the total mass.
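As a simple worked illustration of Eqs. (29) and (30) (added here, not part of the original text), consider two equal point masses \(m_{1}=m_{2}=m\): the two-body force becomes \(F=(2/3)(2\sqrt{2}-2)\,m^{3/2}(a_{0}G)^{1/2}/\ell\), and the dispersion relation gives \(\sigma^{2}=(2/3)(\sqrt{2}-1)(mGa_{0})^{1/2}\). In units with \(G=a_{0}=m=\ell=1\):

```python
# Worked numerical example (illustrative) of the deep-MOND two-body force,
# Eq. (29), and velocity-dispersion relation, Eq. (30), for two equal point
# masses, in units with G = a0 = 1.
import numpy as np

m1 = m2 = 1.0
ell = 1.0
M = m1 + m2

F = (2.0 / 3.0) / ell * ((m1 + m2)**1.5 - m1**1.5 - m2**1.5)                # Eq. (29)
sigma2 = (2.0 / 3.0) * np.sqrt(M) * (1.0 - (m1 / M)**1.5 - (m2 / M)**1.5)   # Eq. (30)

print(F)        # (2/3)*(2*sqrt(2) - 2) ~ 0.552
print(sigma2)   # (2/3)*(sqrt(2) - 1)  ~ 0.276
```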
This also implies that all these theories predict the same value of the \(Q\) parameter, defined in Ref. [12], which was proposed as a possible discriminator between MOND theories. This parameter thus cannot discriminate between the different TRIMOND theories.
The reason that the full detail of the DML of the tripotential theories does not enter the virial relation and its corollaries is that what effectively enters the derivation of this relation is only the asymptotic behavior of the MOND field of a single mass, and this is oblivious to the details of the theory.
## IV Discussion
The large variety afforded by the freedom to choose the three-variable \(\mathcal{F}\) can potentially lead to a considerable variety in secondary predictions of TRIMOND theories. However, to demonstrate this in some quantitative detail would require numerical solutions, which I cannot provide here.
AQUAL and QUMOND were shown to be the nonrelativistic limits of certain versions of the bimetric, relativistic formulation of MOND (BIMOND) [21; 22]. It seems that generalizations of BIMOND with an additional, scalar degree of freedom could give TRIMOND theories as their Newtonian limits. BIMOND theories use in their Lagrangians scalars that are quadratic in the tensors
\[C^{\lambda}_{\mu\nu}=\Gamma^{\lambda}_{\mu\nu}-\hat{\Gamma}^{\lambda}_{\mu\nu}, \tag{31}\]
where \(\Gamma^{\lambda}_{\mu\nu}\) and \(\hat{\Gamma}^{\lambda}_{\mu\nu}\) are the connections of the two metrics that are the gravitational degrees of freedom in BIMOND. Such quadratic scalars reduce in the nonrelativistic limit to \((\vec{\nabla}\chi)^{2}\), for some potential \(\chi\).
With an additional, scalar degree of freedom, \(\varphi\), we can construct scalar variables such as \(\varphi_{,\mu}\varphi^{,\mu}\) and the mixed \(\varphi_{,\mu}\bar{C}^{\mu}\), \(\varphi_{,\mu}C^{\mu}\), where \(C^{\lambda}=g^{\mu\nu}C^{\lambda}_{\mu\nu}\) and \(\bar{C}^{\mu}=g^{\mu\nu}C^{\lambda}_{\nu\lambda}\) are the two traces of \(C^{\lambda}_{\mu\nu}\). Such variables are expected to reduce in the nonrelativistic limit to some \((\vec{\nabla}\varphi)^{2}\) and \(\vec{\nabla}\varphi\cdot\vec{\nabla}\chi\). So it may be possible to construct such scalar-bimetric theories that reduce to some TRIMOND theories.
|
2308.16491 | In-class Data Analysis Replications: Teaching Students while Testing
Science | Science is facing a reproducibility crisis. Previous work has proposed
incorporating data analysis replications into classrooms as a potential
solution. However, despite the potential benefits, it is unclear whether this
approach is feasible, and if so, what the involved stakeholders-students,
educators, and scientists-should expect from it. Can students perform a data
analysis replication over the course of a class? What are the costs and
benefits for educators? And how can this solution help benchmark and improve
the state of science?
In the present study, we incorporated data analysis replications in the
project component of the Applied Data Analysis course (CS-401) taught at EPFL
(N=354 students). Here we report pre-registered findings based on surveys
administered throughout the course. First, we demonstrate that students can
replicate previously published scientific papers, most of them qualitatively
and some exactly. We find discrepancies between what students expect of data
analysis replications and what they experience by doing them along with changes
in expectations about reproducibility, which together serve as evidence of
attitude shifts to foster students' critical thinking. Second, we provide
information for educators about how much overhead is needed to incorporate
replications into the classroom and identify concerns that replications bring
as compared to more traditional assignments. Third, we identify tangible
benefits of the in-class data analysis replications for scientific communities,
such as a collection of replication reports and insights about replication
barriers in scientific work that should be avoided going forward.
Overall, we demonstrate that incorporating replication tasks into a large
data science class can increase the reproducibility of scientific work as a
by-product of data science instruction, thus benefiting both science and
students. | Kristina Gligoric, Tiziano Piccardi, Jake Hofman, Robert West | 2023-08-31T06:53:22Z | http://arxiv.org/abs/2308.16491v2 | # In-class Data Analysis Replications:
###### Abstract
Science is facing a reproducibility crisis. Overcoming it would require concerted efforts to replicate prior studies, but the incentives for researchers are currently weak, as replicating prior studies requires considerable time and effort without providing the same level of recognition as _de novo_ research. Previous work has proposed incorporating data analysis replications into classrooms as a potential solution. However, despite the potential benefits, it is unclear whether this approach is feasible, and if so, what the involved stakeholders--students, educators, and scientists--should expect from it. Can students perform a data analysis replication over the course of a class, and if so, what are the benefits for them? What are the costs and benefits for educators? And how can this solution help benchmark and improve the state of science?
In the present study, we incorporated data analysis replications in the project component of the _Applied Data Analysis_ course (CS-401) taught at EPFL (\(N=354\) students). Here we report pre-registered findings based on surveys administered throughout the course. First, we demonstrate that students can replicate previously published scientific papers, most of them qualitatively and some exactly. A small fraction of replications failed, and in cases where there were known issues with papers, students correctly identified them. We find discrepancies between what students expect of data analysis replications and what they experience by doing them along with changes in expectations about reproducibility, which together serve as evidence of attitude shifts to foster students' critical thinking. Second, we provide information for educators about how much overhead is needed to incorporate replications into the classroom and identify concerns that replications bring as compared to more traditional assignments. Third, we identify tangible benefits of the in-class data analysis replications for scientific communities, such as a collection of replication reports and insights about replication barriers in scientific work that should be avoided going forward, including insufficiently described methods and datasets.
Overall, we demonstrate that incorporating replication tasks into a large data science class can increase the reproducibility of scientific work as a by-product of data science instruction, thus benefiting both science and students.
## 1 Introduction
The low reproducibility rates of scientific publications have raised concerns across a number of fields [1, 3, 36]. Although scientific publishing plays a key role in advancing science, the publication process has multiple weaknesses that may influence the validity of conclusions. The focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature [49], in fields ranging from basic [5] and biomedical [16], to management and organizational sciences [6]. This inclination may lead to bad research practices [7], such as \(p\)-hacking [18, 28], or developing post-hoc hypotheses to fit known results [21].
Recently, Patil et al. [37] introduced a framework to consider the key components of a scientific study pipeline that tend to vary across studies and disciplines: the intent of a study (including research
question, experimental design, and analysis plan) and what was actually performed when conducting the study (when data is collected, analyses are conducted, estimates are made, and conclusions are asserted). Replication challenges exist throughout the entire pipeline, all the way to data analysis, given previously collected and publicly available data. Data analysis replication, in particular, entails different analysts using their independently written data analysis code to reproduce the original estimates and claims, using the same data and the same analysis plan [19]. Significant variation in the results of data analysis replication has been proven difficult to avoid, even by experts with honest intentions [46].
Researchers are increasingly encouraged to share code and materials [35] for other researchers to perform direct data analysis replication, as a way to improve the credibility of the corresponding research findings. However, replicating the data analysis reported in the publications of others requires considerable time and effort, without providing a particularly rewarding outcome (i.e., a publication, because of a presumed lack of originality [20] and novelty [36]). Researchers are thus usually not incentivized to perform data analysis replications. Ultimately, published replications are rare across fields [22, 24, 29, 40, 42] and the incentives are not yet in place to address this issue.
**Potential solution: data analysis replication as part of data science instruction.** A recent body of work [19, 43] has proposed one step towards a solution: educating undergraduate and graduate students to perform data analysis replications. Universities are well positioned to introduce replications as class assignments in methods training in order to establish a culture of replication [4, 30], reproducibility, and critical thinking [13, 20, 47, 48]. In-class replications have previously been proposed for college-level education [31] and for psychology education [15, 17], to understand correlates of replicability [8, 14]. Furthermore, data analysis replication efforts have previously been used for comprehensive meta-analyses [50, 51], based on multiple studies rather than on a single replication attempt [8, 39, 45].
However, despite the postulated benefits of this solution, it is unclear whether this approach is feasible, and if so, what the involved stakeholders--students, educators, and scientists--should expect from it. First, in terms of _students_, it is unclear if they are able to perform data analysis replications over the course of a class, and if so, what are the benefits for them. In particular, are computer science university students capable of replicating published analyses and correctly identifying issues when issues exist? What type of effort does this require on the students' end? What do students learn from the process, i.e., how do their beliefs differ before vs. after engaging in data analysis replication exercises? What outcomes do students expect before the activity, and how do actual outcomes differ from prior expectations?
Second, in terms of _educators_, there are open questions regarding required investments vs. potential benefits. For instance, what is required on the educator's end to run successful data analysis replications? How can data analysis replications be incorporated into existing large university classes? What should educators expect their students to learn and take away from data analysis replications? How much of the educator's time and effort is in-class replication expected to take, and what challenges might the educator face [48]?
Lastly, in terms of _scientists_, it remains to be determined how this solution can help benchmark and improve the state of science. Do the studies that students work on replicate, either exactly or qualitatively? If not, what are the main sources of error or confusion that students identify? How can these replication barriers in scientific work be avoided going forward?
**Our approach.** To better understand the feasibility, challenges, and benefits of the in-class data analysis replication approach, we incorporated data analysis replications in the project component of the _Applied Data Analysis_ course (CS-401) taught at EPFL, the Swiss Federal Institute of Technology in Lausanne. As part of the class, students performed data analysis replications individually. Based on a set of surveys conducted over the course of the semester, our main goal was to understand students' expectations about the difficulty of the exercise before performing the replication vs. their impressions of how hard the task actually was, once completed.
Through pre-registered analyses of survey responses, we pose the following specific research questions about the impact that data analysis replications tasks have on the students. First, our guiding research question is
**RQ1.** How large are the discrepancies between students' expectations and the reality of data analysis replication?
Additionally, we explore the following questions:
**RQ2.** Do the discrepancies (if any) persist as students solve replication tasks?
**RQ3.** Do students acquire intuitions about the scientific process? Can they anticipate in what ways papers are hard to replicate?
**RQ4.** Are the effects stronger for the same type of data analysis as performed in the replication exercise, or is there an attitude shift for expectations regarding different data analysis types?
Any discrepancies between expectations and reality (**RQ1-2**) and any changes in expectations about reproducibility in general (**RQ3-4**) serve as evidence of shifts in students' attitude. Identifying such indicators of behavioral changes is essential to understanding ways how students benefit from performing data analysis replications.
**Overview of study design.** In order to work on homework assignments and class projects in the _Applied Data Analysis_ course, students formed groups of four students (some groups exceptionally comprised three students) working together throughout the semester. The replication activity was performed as part of the graded class project. The study design is outlined in Fig. 1. The study started with a bidding process where students expressed preferences for papers (Step 1). Afterwards, each group focused on one scientific paper, assigned to them by the class instructors. After reading the assigned paper (Step 2), pre-surveys recorded the individual students' expectations about the time required, the difficulty of replicating findings from data science papers, and about the perceived reproducibility of papers in the field.
Then, students performed the replications (Steps 3 and 4). Within the group, replications were performed and reported individually by each student. We specified two figures or tables to replicate, a basic one (replicated in Step 3) and an advanced one (replicated in Step 4). Students then individually recorded their results and working times in post-surveys, which we compared with students' expectations from before they started as expressed in the pre-surveys. Lastly, students proposed and conducted creative extension projects, which students built on top of the replicated analyses (Step 5) and presented at the end of the class.
Additionally, in the pre- and post-surveys, students were also asked about two control papers which they did not replicate, to be able to measure general attitude shifts beyond the specific papers being replicated. One control paper reported a similar type of analysis as the assigned paper (either counting items and hypothesis testing, or regression modeling), and one reported a different type of analysis, to be able to explore the impact of the nature of the data analysis on attitude shifts, if any.
**Contributions.** Concretely, we make the following contributions addressing the feasibility of in-class data analysis replication and identifying its impact on students, educators, and scientists. Our pre-registered1 and exploratory findings (see next points) are based on the work and responses of 354 consenting students who produced data analysis replications of 10 peer-reviewed publications.2
Footnote 1: [https://osf.io/usam4k/?view_only=a92a3fb483bd4a5e8b3425be6261ebcf](https://osf.io/usam4k/?view_only=a92a3fb483bd4a5e8b3425be6261ebcf)
Footnote 2: [https://dlab.epfl.ch/teaching/fall2020/cs401/reports/](https://dlab.epfl.ch/teaching/fall2020/cs401/reports/)
_Students._ First, we show that students are indeed capable of replicating published studies. In total, 98% of students reported having replicated exactly or qualitatively the basic figure, and 87% the advanced figure. A small fraction of replications failed, and in cases where there were known issues with papers, students correctly identified them. We found that it takes students on average about 10.5 hours to replicate a main result (cf. Sec. 3.1). Overall, we established the feasibility of a large class performing a data analysis replication exercise in a real-world educational scenario.
Figure 1: **Study design summary.** The timeline is visualized from the students’ perspective. The semester progresses from the left to the right. The surveys were administered upon submission of the respective assignment step.
Furthermore, discrepancies between expectations and reality, and changes in expectations about reproducibility arose among students. On average, students underestimated the time they would take to reproduce, overestimated how long data wrangling would take, and underestimated how long it would take to iteratively analyze and interpret results. However, students had fairly calibrated expectations of replication rates (Sec. 3.1). Moreover, we found that students generalized their attitude shifts from the replicated papers to other papers that they did not attempt to replicate, where following the initial replications, students' expected time to perform the replication increased by about two hours. The identified attitude shifts have the potential to foster students' critical thinking skills and promote students' appreciation for the challenges involved in the scientific process. Further exploratory analyses of open-text responses (cf. Sec. 3.2) let us identify how the students perceived this activity and understand the specific challenges that the students faced, including resource, expertise, and time constraints.
_Educators._ On the educator side, we provide realistic information about how much overhead is needed in teacher-to-student ratios for overseeing replications, how much effort is required to select feasible papers for replications, and some concerns that replications bring over more traditional assignments. We offer further "lessons learned" that can be useful to other educators, putting particular emphasis on reflections regarding cost-benefit tradeoffs.
The insights about the discrepancies between expectations and true outcomes, as well as the associated attitude shifts, will be informative for future efforts aiming to incorporate data analysis replications into existing educational practices. For example, since the replication activity took students longer than expected, instructors should carefully plan the course timeline and clearly communicate the expected workload to students, to avoid stress and frustration (cf. Sec. 4.2).
_Science._ Lastly, we identified tangible ways in which scientific communities, directly and indirectly, benefit from this and similar efforts. The direct benefit is reflected in the produced artifacts (i.e., 354 online repositories, each with a replication report) and the identified main sources of error or confusion among students (Sec. 3.2) when replicating published data analyses. We found that papers often have poor descriptions of the methods and data, leading to hard-to-replicate results. The indirect long-term benefit can arise as a shift of norms and incentives if this auditing paradigm becomes more prevalent.
**Implications.** Overall, we demonstrated that incorporating replication tasks into a large data science class has the potential to increase the reproducibility of scientific work as a by-product of data science instruction, thus benefiting both science and students.
## 2 Methods
### Study design
In preparation for the study, we identified ten data science publications suitable for the course, in terms of the difficulty of data analysis tasks required, a variety of covered topics, data availability, and, to the best of our knowledge, lack of easily accessible publicly available code necessary to reproduce the main findings. The publications were split into five tracks, with two publications each:
1. Natural language processing and machine learning [33, 34]
2. Computational social science [12, 41]
3. Networks [11, 25]
4. Social media and Web [26, 38]
5. Health [2, 10]
We identified two key figures or tables from each of the publications that are important for the overall message of the publication. Teaching assistants (Master students who took the course the previous year) aimed to replicate (exactly or qualitatively) the selected figures before the class started. We developed pre- and post-surveys by conducting a pilot with student assistants.
The data analysis replication activity was composed of five steps. We asked students to fill out repeated short surveys, each part of a project milestone deadline. Each student was assigned one paper to replicate (around 36 students per replicated paper). In each paper, we selected a primary and a secondary figure or table. The primary figure or table requires basic skills taught in the lectures and exercises before the replication task was performed (limited to counting, hypothesis testing, visualizing, and fitting regressions). The secondary figure or table requires potentially more advanced data analysis such as non-standard resampling and error estimation techniques, examination of feature importance, and
network analysis. Note that henceforth we refer to the basic figure/table and the advanced figure/table as simply _basic_ and _advanced figures_ (although the result might be presented in a table).
Additionally, the ten assigned papers were divided into two conditions based on the paper type (referred to as type A and type B). Paper type refers to the type of analysis necessary to perform the replication. For basic figure in type A papers, to reproduce a result, students were required to count items, perform hypothesis testing, and make a visualization and interpretation of the result. For type B papers, to reproduce a result, students had to fit a regression model and make a visualization and interpretation of the result. In addition to the main assigned paper (referred to as "Paper 1"), each student was assigned two control papers (referred to as Papers 2 and 3) that the student does not replicate. One control paper is of the same type as the replicated paper, and one of the other type. The assignment scheme is shown in Table 1.
The study consisted of five steps, outlined in Fig. 1:
**Step 1: Reading abstracts of pre-selected papers and expressing a preference.** Students were instructed to read abstracts of all the ten pre-selected papers to get an idea of what the papers are about. Students then ranked the ten papers by their preference of working on them for the project. After this, students were assigned a main paper ("Paper 1"). We assigned the same number of students per paper. We calculated the average rank of preference for each paper across the team members, and assigned papers to groups in a balanced way, to minimize the total average rank since smaller rank implies higher preference. We also assigned to each publication two assistants who were in charge of mentoring teams working on the respective data analysis replication.
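The text does not spell out the exact assignment algorithm. One simple way to perform such a balanced, preference-minimizing assignment is to replicate each paper into as many "slots" as its capacity and solve the resulting linear assignment problem; the sketch below is a hypothetical illustration of this idea (the sizes, ranks, and use of `linear_sum_assignment` are illustrative assumptions, not a description of the actual implementation):

```python
# Illustrative sketch of a balanced, preference-minimizing paper assignment
# (hypothetical sizes and ranks; not the implementation used in the course).
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_groups, n_papers = 90, 10                 # roughly the sizes in this study
capacity = n_groups // n_papers             # groups per paper

# avg_rank[g, p]: average preference rank of paper p across members of group g
avg_rank = rng.uniform(1, 10, size=(n_groups, n_papers))

# replicate each paper column into 'capacity' slots and solve the assignment
cost = np.repeat(avg_rank, capacity, axis=1)            # (n_groups, n_groups)
rows, cols = linear_sum_assignment(cost)
assignment = cols // capacity                           # paper index per group

print(np.bincount(assignment, minlength=n_papers))      # 'capacity' groups each
print(cost[rows, cols].mean())                          # mean assigned rank
```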
**Step 2: Reading the assigned paper.** Students were instructed to read the assigned paper. Students were pointed to the freely accessible PDF and the dataset repository. Students wrote a short summary (at most 500 characters). Upon submission of the summary, the students completed the pre-survey measuring expectations of the replication of the assigned figure and general attitudes towards reproducibility.
**Step 3: Replication of a basic figure.** Students individually performed a replication of the assigned basic figure from the assigned paper ("Paper 1"). Students prepared a replication report in the form of a Jupyter Notebook containing code and text. Students were instructed to log their hours spent doing the replication, on a piece of paper, in a digital sheet, or using time-tracking software. To elicit truthful time log reports, it was clarified to the students that the number of reported hours would in no way affect the grading of the work. Students were instructed that they would be graded based on the overall quality of the replication, textual descriptions, and code. It was noted that it would not be graded whether or not students managed to replicate the results from the paper, but only whether they made an honest and diligent attempt at replicating, given the information available in the paper. Upon submitting the replication report, the students completed the first post-survey, which measured outcomes of the replication of the basic figure and expectations for the advanced figure. The main analyses (RQ1) contrast the post-survey responses after replication of the basic figure with the pre-survey responses given before the replication exercise.
\begin{table}
\begin{tabular}{l|l|l|l}
**Condition** & **Paper 1** & **Paper 2** & **Paper 3** \\ \hline Type A & Niculae et al. [34] & Liang and Fu [26] & Pierson et al. [41] \\ (counting and testing) & Liang and Fu [26] & Aiello et al. [2] & Cattaneo et al. [10] \\ & Aiello et al. [2] & Cho et al. [11] & Muchlinski et al. [33] \\ & Cho et al. [11] & Leskovec et al. [25] & Penney [38] \\ & Leskovec et al. [25] & Niculae et al. [34] & Choi and Varian [12] \\ \hline Type B & Choi and Varian [12] & Pierson et al. [41] & Liang and Fu [26] \\ (regression modeling) & Pierson et al. [41] & Cattaneo et al. [10] & Aiello et al. [2] \\ & Cattaneo et al. [10] & Muchlinski et al. [33] & Cho et al. [11] \\ & Muchlinski et al. [33] & Penney [38] & Leskovec et al. [25] \\ & Penney [38] & Choi and Varian [12] & Niculae et al. [34] \\ \end{tabular}
\end{table}
Table 1: **Paper assignment.** For Type A (counting and testing) and Type B (regression modelling) conditions, the table specifies the assignment of the replicated paper (“Paper 1”) and control papers (“Paper 2”, of the same type, and “Paper 3”, of the different type). The set of papers used as Paper 2 is a permutation of the set of papers used as “Paper 1” (i.e., “Papers 1” and 2 are of the same type, whereas “Paper 3” is of the different type.)
**Step 4: Replication of an advanced figure.** In their team, the students then proposed a creative extension of the analysis performed in the paper, placing their data science skills into practice [23]3. When submitting the short project proposal, the students also completed the second post-survey, a repeated measurement of the expectations for the advanced figure. Analyses in RQ2 contrast the second post-survey with the expectation for the advanced figure.
Footnote 3: Recall that the team members individually replicated the same paper in the previous steps.
**Step 5: Creative extension.** Students conducted the proposed creative extension in their group. Additionally, individually and following identical instructions as in step 3, the students replicated the advanced figure from the assigned paper. Students were again instructed to log their hours spent doing the data analysis replication. The students completed the third post-survey, measuring outcomes of the replication of the advanced figure, and general attitudes towards reproducibility. General outcomes towards reproducibility are studied to address RQ3.
After each step, students were additionally asked about their expectations about the two control papers that they did not replicate (Paper 2 and Paper 3). We explore answers related to these control papers in order to address RQ4. In preparation for the study, we tested this pipeline with five student assistants.
### Inclusion and exclusion criteria
The study took place at EPFL in the fall semester of 2020, between September 2020 and January 2021. In total, 384 students took the class. Out of 384, 30 students (7.81%) opted out from the study (resulting in 354 consenting students). Data from all students enrolled in the Applied Data Analysis course were included in the study, except for those who chose to opt out. We also excluded students who did not submit all four surveys or whose responses did not pass validation checks. With these restrictions, we analyzed responses from \(N=329\) consenting students.
### Consent statement and information sheet
Students were provided with the following information about the study and its purpose: "As part of ADA 2020, we introduced data analysis replications as a way of making you interact with real data science research. In order to understand the effectiveness of this new learning paradigm, we will analyze your solutions and survey responses, and we aim to publish a research paper about our findings. No personal data will be made public; we will only release aggregate, anonymized information. Every data point is valuable for us, but if you would nonetheless like to retract your data from the analysis, you can indicate this by checking the following box. Checkbox: I would like to be excluded from the analysis of the ADA data analysis replications". An information sheet about the study was provided to students 4.
Footnote 4: [https://go.epfl.ch/ada2021-replic-info-sheet](https://go.epfl.ch/ada2021-replic-info-sheet)
## 3 Results
### Pre-registered findings
Before analyzing the data collected via surveys, we formed and pre-registered a set of hypotheses (H1-H4), each relating to one of the four research questions (RQ1-RQ4). Pre-registered data analysis plan and survey materials are available on OSF5. We then executed the analyses following the plan. The unit of analysis is a student. All statistical tests were run with pre-registered significance level \(p=0.05\).
Footnote 5: [https://osf.io/usm4k/?view_only=lbf20c3c6fa848dab4619aac480974c6](https://osf.io/usm4k/?view_only=lbf20c3c6fa848dab4619aac480974c6)
#### Primary confirmatory analyses
Our primary confirmatory analysis tests the following hypothesis:
**H1 (RQ1): There are discrepancies between students' expectations and the reality of data analysis replication.** In particular, we test four dependent variables, organized into **H1a-d**.
**H1a:**_There is a significant difference between the time students take to perform the data analysis replication and the time they expect to take._
Students reported the expected number of hours in the pre-survey and the actual number of hours in the post-survey. Across students, we compared the expected number of hours to reproduce the basic
figure from the assigned paper with the actual number of hours it took to reproduce. Specifically, we conducted a paired, two-sided \(t\)-test on the difference between actual and anticipated number of hours, with a null hypothesis of no mean difference.
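For concreteness, this paired comparison can be carried out as in the following sketch (toy numbers and hypothetical column names; the individual-level survey responses themselves are not reproduced here):

```python
# Sketch of the paired, two-sided t-test on expected vs. actual hours
# (toy values and hypothetical column names, for illustration only).
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "expected_hours": [5, 8, 10, 4, 12, 6],   # pre-survey responses
    "actual_hours":   [8, 7, 15, 6, 14, 9],   # post-survey responses
})

diff = df["actual_hours"] - df["expected_hours"]
res = stats.ttest_rel(df["actual_hours"], df["expected_hours"])  # two-sided by default
print(res.statistic, res.pvalue, diff.mean())
```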
We found that there is a significant difference between the time students take to perform the data analysis replication and the time they expect to take (\(p=0.0309\); full distributions in Fig. 2a and 2b). On average, students expected to take \(9.01\)hr, but actually took \(10.53\)hr. The median expected time is \(5\)hr and the median time taken is \(8\)hr. In total, \(62\%\) of students took longer than expected, \(7.30\%\) the same, and \(30.70\%\) less than expected. So overall, students on average underestimated the time it would take to reproduce the basic figure.
**H1b:**_There is a significant difference between how challenging performing data analysis replication tasks is and how challenging students expect it to be._
We next compare how challenging students thought that it would be to reproduce the basic figure from the assigned paper with the reported true level of challenge. Students reported the perceived level of challenge on an ordinal scale (1: very straightforward, 2: somewhat straightforward, 3: neither straightforward nor challenging, 4: somewhat challenging, 5: very challenging). Specifically, we conducted a paired, two-sided \(t\)-test on the difference between the actual and anticipated level of challenge, with a null hypothesis of no mean difference.
We found that there is a significant difference between how challenging performing data analysis replication tasks is and how challenging students expect it to be (illustrated in Fig. 3a and 3b). Interestingly, performing data analysis replication tasks was less challenging than expected (\(p=3.70\times 10^{-5}\)). The average expected score on the 1-5 scale is \(3.39\) (median 4), whereas the average score after performing the task is \(3.11\) (median 3).
**H1c:**_There are discrepancies between the predicted and the true distribution of time spent on the three core activities: data wrangling, data analysis, and interpretation._
Figure 2: **Expectations vs. reality of a data analysis replication exercise: Time taken (H1a).** (a) Across students (y-axis), the histogram of the a-priori expected number of hours (x-axis) required (in blue), and the actual number of hours (in orange). (b) Across students (y-axis), the histogram of the difference (x-axis) between the actual number of hours and the expected number of hours. (c) Paired plot, with expected and actual number of hours, separately for students who took less, as many, or more hours than expected.
We conceptualized the data analysis replication task as being composed of three core activities: **data wrangling** (understanding the data structure, preprocessing steps, feature engineering), **data analysis** (exploratory analysis, statistical tests, developing and training models, evaluating model performance), and **interpretation** (evaluating results and comparing them with the results in the paper, interpreting findings, and redoing the analysis if necessary). Students were asked to sort these three activities with respect to the amount of time they expected to spend on them (before the analysis), and with respect to the amount of time they actually spent on them (after the analysis).
Our main hypothesis here relates to a disturbance in the rank of the three core activities. The three activities can be ranked in six possible ways. We treated each of the six ranking configurations as a categorical variable. The ranking configurations in the pre-test and the post-test were paired across students in a \(6\times 6\) contingency table. We then performed the Stuart-Maxwell test for marginal homogeneity in the contingency table. The null hypothesis is that the activity rank configuration frequencies for pre-test and post-test are the same.
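For illustration, a Stuart-Maxwell-type test of marginal homogeneity on such a \(6\times 6\) table is available in statsmodels; the sketch below applies it to a toy table (the counts are invented purely for illustration, and the library call is a sketch, not a reproduction of our analysis code):

```python
# Sketch of the Stuart-Maxwell marginal-homogeneity test on a toy 6x6
# contingency table of ranking configurations (counts are illustrative).
import numpy as np
from statsmodels.stats.contingency_tables import SquareTable

# rows: pre-test ranking configuration, columns: post-test configuration
table = np.array([
    [30,  5,  3,  2,  1,  1],
    [ 6, 25,  4,  2,  2,  1],
    [ 4,  6, 20,  3,  1,  1],
    [ 3,  2,  5, 15,  4,  2],
    [ 2,  3,  2,  4, 10,  3],
    [ 1,  1,  2,  3,  4,  8],
])

res = SquareTable(table).homogeneity(method="stuart_maxwell")
print(res.statistic, res.pvalue, res.df)
```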
Testing the hypothesis **H1c**, we found discrepancies between the predicted and the true distribution of time spent on the three core activities: data wrangling, data analysis, and interpretation (\(p<10^{-307}\)). In particular, on average, data wrangling took less time than expected (with average ranks 1.48 and 2.19, respectively, where 1 means "most time consuming" and 3, "least time consuming"), while data analysis and result interpretation took more time than expected (with ranks 1.84 and 1.65 for data analysis, and 2.68 and 2.16 for result interpretation). In summary, in relative terms, students overestimated how much time data wrangling would take, and underestimated how much time data analysis and interpreting results would take.
This finding sheds light on why replication took more time than expected, but was less challenging than expected: students spent much of their time iteratively redoing the data analyses and interpreting their results, which was perceived as time-consuming, although not technically challenging.
**H1d:**_There are discrepancies between predicted and true outcomes of the replication._
There are three possible self-reported outcomes of the data analysis replication: **the analysis replicated exactly** (the replication attempt produced results that agreed exactly with the paper, up to the decimals printed in the paper or shown in the figures), **the analysis replicated qualitatively** (the replication attempt produced results that had small differences with the paper, but these still agreed with the abstract-level findings of the paper), and **the analysis did not replicate at all** (the replication attempt produced results that were in conflict with the abstract-level findings of the paper).
We considered these outcomes as ordinal variables (1: the analysis replicated exactly, 2: the analysis replicated qualitatively, 3: the analysis did not replicate at all). In the pre-survey, students attributed a probability to each of the possible outcomes. We calculated the outcome expectation on the ordinal scale for each student, by multiplying each possible outcome (1, 2, and 3) with the probability the student attributed to it and summing up. In the post-survey, students selected one of the outcomes. We compared the anticipated and the true value across students, for the basic figure from the assigned paper. We performed a paired two-sided \(t\)-test.
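The ordinal outcome expectation described above is simply a probability-weighted average of the three coded outcomes; a toy sketch of the computation and the paired test (hypothetical column names and values) is:

```python
# Sketch of computing the pre-survey outcome expectation (1: exact,
# 2: qualitative, 3: not at all) and comparing it with the reported
# post-survey outcome (toy values, hypothetical column names).
import pandas as pd
from scipy import stats

pre = pd.DataFrame({
    "p_exact":       [0.5, 0.2, 0.6, 0.3],    # elicited probabilities,
    "p_qualitative": [0.4, 0.6, 0.3, 0.5],    # summing to 1 per student
    "p_not_at_all":  [0.1, 0.2, 0.1, 0.2],
})
post_outcome = pd.Series([2, 2, 1, 3])        # reported outcome codes

expected = 1 * pre["p_exact"] + 2 * pre["p_qualitative"] + 3 * pre["p_not_at_all"]
print(stats.ttest_rel(post_outcome, expected))
```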
Figure 3: **Expectations vs. reality of a data analysis replication exercise: Level of challenge (H1b). (a) Histogram of the expected level of challenge (on an ordinal 1–5 scale) of the data analysis replication in the pre-survey (in blue), and the actual level of challenge of the data analysis replication (in orange). (b) Histogram of the difference between the expected and the actual level of challenge. The dashed lines and surrounding bands in each figure show the corresponding means and 95% confidence intervals.**
First, we found that 98% of students reported having replicated exactly or qualitatively the basic figure, and 87% the advanced figure. We did not find significant discrepancies between predicted and true outcomes of the replication (\(p=0.0747\); illustrated in Fig. 4a and 4b). A possible explanation is that the papers were preselected to be (with enough effort) at least partially qualitatively replicable. Students were not exposed to randomly sampled papers from the field. Rather, the selected papers were already found to be qualitatively reproducible in our paper selection process.
**Summary.** Overall, we found that data analysis replication tasks take longer, but are less challenging than expected. Compared to the expectations, students spent more time analyzing and modeling the data and interpreting the results, and less time in data wrangling activities. We did not find significant discrepancies between predicted and true outcomes of the replication. The considerable amount of time spent modeling and interpreting the results may explain why replication took more time than expected, while simultaneously being less challenging than expected. We found that students took time iteratively redoing the data analyses, interpreting their results, which was perceived as time-consuming, although not necessarily technically challenging. In the exploratory analyses, we further tried to understand the gaps between the expectations and the reality of data analysis replication.
#### Further confirmatory analyses
We also considered a set of secondary hypotheses (**H2-4**).
**H2 (RQ2):**_Discrepancies between predictions and true outcomes persist as students solve replication tasks._
In the first replication task, the students replicated the basic figure, and in the second replication task, they replicated the advanced figure. We compared the predictions and true outcomes for the advanced figure in the assigned paper by repeating the same analyses and statistical tests described in H1a-d, but now for the advanced rather than the basic figure. We then explored the ways how the second replication task differs from the first replication task. In other words, we explored how the discrepancies between expectations and outcomes vary as students gain experience in conducting data analysis replication tasks.
* We found a significant difference between the time students take to perform the advanced data analysis replication and the time they expect to take (\(p=0.0311\)). On average, students expected to take 9.43hr but took 8.54hr. That is, after underestimating the time it takes to reproduce the basic figure, students overestimated the time it would take to reproduce the advanced figure (i.e., students overshot after they initially underestimated).
* We found that, after performing the replication of the basic figure, there was a significant difference between how challenging performing data analysis replication of the advanced figure is, and how challenging students expect it to be. Performing data analysis replication tasks was again less challenging than expected (\(p=0.00766\)) as students overestimated how challenging it would be. The average expected score on a 1-5 scale is 3.10, whereas the average score after performing the task is 2.91. For comparison, in the case of the basic figure, the average expected score on the 1-5 scale was 3.39 and the average score after performing the task was 3.11.
Figure 4: **Predicted and true outcomes of the replication (H1d)**. (a) Histogram of the percentage of papers expected to replicate exactly (blue), qualitatively (orange), or not at all (green), in the pre-survey. The dashed lines and surrounding bands in each figure show the corresponding means and 95% confidence intervals. (b) Histogram of true outcomes of the data analysis replication, in the post-survey.
3. For the advanced figure, we again found discrepancies between the predicted and the true distribution of time spent on the three core activities: data wrangling, data analysis, and interpretation (\(p=9.54\times 10^{-7}\)). In particular, on average, data wrangling and data analysis took less time than expected, while interpreting results took more time than expected. For the advanced figure, students again overestimated how much time data wrangling would take, and underestimated how much time interpreting the results would take. We find no significant difference for the data analysis component.
4. For the advanced figure, we found discrepancies between predicted and true outcomes of the replication (\(p=1.17\times 10^{-13}\)). As a reminder, we considered these outcomes as ordinal variables (1: the analysis replicated exactly, 2: the analysis replicated qualitatively, 3: the analysis did not replicate at all). The pre-test average score is on average 1.76, whereas the post-test average score is 2.01. Overall, the outcomes were less successful than expected. That is, with the advanced figure, students faced more reproducibility issues than with the basic figure, as we expected.
**Summary.** Overall, when reproducing the advanced figure after the basic one, discrepancies between expectations and outcomes persisted (although some in the opposite direction). Compared to what students expected, the advanced figure took less time to reproduce and was less challenging to reproduce. Moreover, there were discrepancies between the predicted and the true distribution of time spent on the core activities and between predicted and true outcomes of the replication (there were replication failures that the students did not expect).
**H3 (RQ3):**_The replication task affects the students' expectations on the fraction of peer-reviewed data science papers that are reproducible._
At the beginning and at the end of the study, we asked the following question: "Out of 100 peer-reviewed data science papers published in 2020, in how many of these papers do you think the analysis would replicate exactly, the analysis would replicate qualitatively, and the analysis would not replicate at all?"
As before (**H1d**), we considered the outcomes as ordinal variables (1: the analysis replicated exactly, 2: the analysis replicated qualitatively, 3: the analysis does not replicate at all). We calculated the outcome expectation on the ordinal scale for each student, by multiplying each possible outcome (1, 2, and 3) with the probability the student attributed to it and summing up. Students were instructed to carefully verify that the three numbers add up to 100, and we excluded students whose responses do not pass this validation check. We then performed a paired two-sided \(t\)-test on the outcome expectation at the beginning and at the end of the study. We did not find evidence that the replication task affects the students' expectations of the fraction of peer-reviewed data science papers that are reproducible (\(p=0.143\); illustrated in Fig. 5).
**H4 (RQ4):**_There is a spillover effect as expectations are modified across the board, to papers that students did not replicate._
Figure 5: **Perceived reproducibility of peer-reviewed data science papers (H3)**. Histogram of the percentage of papers expected to replicate exactly (blue), qualitatively (orange), or not at all (green), (a) in the pre-survey, (b) in the post-survey. The dashed lines and surrounding bands in each figure show the corresponding means and 95% confidence intervals.
Upon performing the replication tasks, we monitored any simultaneous changes in the expectations (as described in H1a-d) for the two control papers that students did not reproduce. We explored whether there were any changes in expectations regarding quantities described in H1a-d by repeating the same tests as outlined above, for the two control figures. One of the two control papers ("Paper 2") entails data analysis of the same type as the replicated paper ("Paper 1") and the other ("Paper 3") one of a different type (counting items and hypothesis testing vs. regression modeling). By contrasting the two control papers, we explored the presence of any spillover effects to different types of data analysis replication, beyond the specific type of analysis that the student worked on.
Overall, we found that there is a spillover effect as expectations regarding time spent and time distribution across the activities are modified across the board, for the papers that students did not replicate (summarized in Table 2). It is noteworthy that, even though the figures students were asked about were not replicated, the expectations changed after vs. before the replication activity. The expectations were modified in the same direction as for the replicated papers (about a two-hour increase in expected time, and more time expected to be spent on analysis and interpretation). The effects are not stronger for the same type of data analysis as performed in the replication exercise. We found that, overall, there was an attitude shift across data analysis types.
### Exploratory findings
Next, we complement the previous finding with an exploratory study to understand the challenges and difficulties students experienced during the replication activity. In this analysis, we qualitatively investigate the open-text responses to two questions we included in the post-survey: (1) "What was challenging?" (2) "What may explain the differences?" Students replied to these questions after replicating the second figure and completing the replication assignment.
**Topic coding.** To understand what topics the students mentioned, two of the authors of this study qualitatively coded the students' answers using a grounded-theory approach. For both questions, we independently repeated the following process. The researchers autonomously read a random sample of 100 answers and produced a list of topics mentioned in the students' descriptions. These topics were then compared and discussed until an agreement on their representativeness was reached. This process led to merging similar topics and refining the names describing them. Then, each researcher assigned the obtained topics to the answers. Multiple topics (or none) could be assigned to an answer. Finally, the label assignments were compared and, in case of discrepancies, discussed until a final agreement was reached. At the end of the process, the answers not assigned to any previously-agreed topics were examined to extract new possible labels. If new topics were identified, the process was repeated; otherwise, the process terminated by leaving these answers unlabeled. The outlined topic coding approach was applied to two open-ended questions included in the post-survey: (1) "What was challenging?" (2) "What may explain the differences?". We report topics assigned to at least 5% of the answers.

Table 2: **Spillover effects: Modified expectations regarding papers that students did not replicate (H4).** Summary of the results comparing pre- and post-replication expectations, across the four hypotheses (H4a–H4d), for the two types of control papers that were not replicated.

**Control paper of the same type as replicated**

| Hypothesis | Result |
| --- | --- |
| H4a: Expected time | 1.73 hour increase (\(p=0.0129\)). Pre-test: \(M=8.81\), post-test: \(M=10.54\). |
| H4b: Expected level of challenge | Difference not significant (\(p=0.836\)). |
| H4c: Expected distribution | Significant disturbance in the ranking (\(p<10^{-307}\)). Wrangling: +0.36, Analysis: -0.04, Interpretation: -0.32. |
| H4d: Expected outcomes | Difference not significant (\(p=0.0804\)). |

**Control paper of a different type than replicated**

| Hypothesis | Result |
| --- | --- |
| H4a: Expected time | 2.07 hour increase (\(p=0.000434\)). Pre-test: \(M=8.75\), post-test: \(M=10.82\). |
| H4b: Expected level of challenge | Difference not significant (\(p=0.161\)). |
| H4c: Expected distribution | Significant disturbance in the ranking (\(p<10^{-307}\)). Wrangling: +0.37, Analysis: -0.07, Interpretation: -0.31. |
| H4d: Expected outcomes | Difference not significant (\(p=0.0841\)). |
**What was challenging?** In this question, students were asked to describe in two or three sentences what they found challenging during the replication task. Most students (77%) described challenges assigned to at least one of the topics. Inspecting the unassigned responses (23%) did not lead to introducing additional themes. Rather, the unassigned responses were short and vague (e.g., "probability issue") or uninformative (e.g., "It did not replicate at all").
We identified four frequent topics: Poor description, Expertise requirements, Time requirements, and Limited resources. In the following paragraphs, we report more details about the four themes and their relative frequency among the answers assigned to at least one topic. Since each answer can be assigned to multiple topics, the percentages of assignments do not sum to 100%.
Poor description (60%): Students pointed out that the main challenge in replicating the authors' results was a poor description of the process. This issue includes missing details about the parameters used in the modeling (e.g., size of the random forest model), little information on the data preprocessing steps, inconsistency between the data released and the description in the article, and explicit mistakes of the authors in reporting the method details (e.g., wrong start date in a time series analysis). This issue was summarized by one student as: "[...] it's almost a guessing game as to what method or inclusion I might be doing differently. This lack of hints was fairly difficult to navigate."
Expertise requirements (37%): Many students mentioned their lack of expertise as one challenge they encountered during the replication. Their descriptions varied from specific issues, such as the need to be confident in manipulating and plotting the data (e.g., how to plot timestamps on the x-axis), to more complex problems, such as the use of some advanced techniques (e.g., domain-specific hypothesis testing).
Time requirements (17%): Students frequently mentioned the amount of time they spent working on the replication as a challenge. This problem is often associated with a poor description and is often described as many trial and error attempts.
Limited resources (11%): Finally, some students found working with the data provided challenging because of its scale. The computation time required to process large datasets represented a limitation for students working with personal laptops.
**What may explain the differences?** In this question, we investigate what the students believed could explain the differences between the figure in the paper and the one they obtained in the replication task. First, we asked as a multiple-choice question if they were able to replicate the results exactly (a), qualitatively (b), or not at all (c). Then, students were asked to describe in two or three sentences what may explain the differences. The most common outcome is that figure "replicated qualitatively but not exactly" (b, 73.2%), followed by "did not replicate at all" (c, 13.9%), and "replicated exactly" (a, 12.7%).
In this second exploratory analysis, we focus on students who obtained similar results (b) or failed to reproduce the figure assigned (c). We identified five recurrent topics mentioned by the students who could not replicate the figure exactly: Poor description, Data issues, Authors' mistakes, Tools differences, and Students' skills. As for the previous analysis, each answer can mention multiple problems. We found that 83% of the answers are assigned to at least one theme, while the remaining 17% were not informative and could not be assigned to new topics.
Poor description (55%): Similarly to what we observed in the answers to the previous question, students blame the limited description for the mismatch between their results and the article's figure. Answers in this category frequently mention a lack of details on the models' parameters used by the authors. Students who managed to reproduce the results only qualitatively pointed out that it was impossible to reproduce the figure exactly when the code and seeds used for "random initialization" are unavailable. Another common observation was the limited description of all the steps and choices involved in the preprocessing pipeline. These aspects include how authors sampled data, handled missing values, what qualifies as outliers, and what numeric rounding steps are involved.
Data issues (30%): Many students attribute their inability to reproduce the results to problems associated with the data. These problems come from issues with the data release that does not entirely match the description in the paper or from an incomplete release of the data necessary to reproduce all the results. Students encountering this last limitation went as far as trying to collect their own dataset with all the associated challenges--especially when depending on an outdated API.
Authors' mistakes (24%): A significant portion of students blamed the authors of the research for the failure to reproduce the results. Answers assigned to this category mentioned possible embellishment of the results by the researchers and both genuine mistakes in reporting or plotting (e.g., "The authors interchanged a row at some point which messed up their analysis") and bad-faith adjustments (e.g., "The authors did some shady-ish things, for example hard coding the plot").
Tool differences (11%): Some students suspected that the discrepancy between the tool used for the replication may play a role in obtaining different results. They speculated on potential differences in the model and optimizer implementations available in Python, R, and Stata.
Students' skills (7%): Lastly, some students believe mistakes on their side can be a possible reason for the differences. Some of them mention general mistakes in their code, whereas others describe their inexperience in doing effective data preprocessing and using libraries or methods that are not explicitly covered in the course material (e.g., "[...] researchers used a very advanced algorithm [...] from another paper and I would be surprised if any student fully implemented it."). This topic is relatively infrequent, likely because, in preparation for the study, we identified publications suitable for the course in terms of the difficulty of tasks required. We additionally ensured that the course lectures ahead of the replication covered the crucial skills necessary to perform the replication.
**Paper-specific common feedback.** Lastly, we aim to understand whether there were blocking factors, i.e., issues that cannot be addressed simply by taking more time, that made it impossible for students to replicate the result.6 We re-examined the students' explanations separately per paper in order to identify issues that students consistently mention when the result is not replicated. Such consistent issues that are reported many times might be authors' own mistakes or a true lack of information.
Footnote 6: Note that our goal is _not_ to discover any specific replication failures, but rather to understand if the activity allows discriminating between potential true issues in the original publication from students’ own issues.
We list recurring issues for five papers where more than 10% of the students self-reported that they did not manage to replicate at all any of the assigned figures (Table 3). Explanations for the remaining five papers did not contain any repeatedly occurring explanations.
Examining the recurring explanations, we identified two recurring issues--a cross-validation mistake [33] and counting error [25], which were known to the teaching team in advance and were correctly identified by students, while the other recurring issues mainly reflect a lack of information or other pre-processing discrepancies.
## 4 Discussion
### Summary of the main results
**Feasibility.** Overall, our study points towards the feasibility of using in-class activities as a vehicle for conducting scientific replications. In total, 98% of students reported having replicated exactly or qualitatively the basic figure, and 87% the advanced figure. Moreover, we successfully integrated the activity as part of a large class, demonstrating that computer science Master's students can have the necessary skills to perform in-class data analysis replication.
**Benefits for students.** Beyond the feasibility of a large-scale data analysis replication, our study demonstrates that data analysis replications can have significant benefits for both students and scientific communities. First, we identified tangible educational benefits for students. Testing our primary pre-registered hypothesis, we found a significant difference between the expected and true time taken to replicate the data analysis (Fig. 2). Furthermore, we found a discrepancy between the expected and true time distribution across different types of tasks. The activity was less challenging than anticipated, likely because the tasks were laborious and iterative while not technically challenging (Sec. 3.2). The discrepancies persisted in the second replication task, replicating the more advanced figure.
It is noteworthy that the attitude shifts extended beyond the specific papers that the students replicated, into control papers, where following the initial replications, students' expected time to perform the replication increased by about two hours (Fig. 2). The identified discrepancies between expectations and reality, and the observed changes in expectations about the reproducibility of control papers, serve as evidence of students' attitude shifts. Altogether, these results point toward the benefits of the data analysis replication tasks for students.
The identified attitude shifts have the potential to foster students' critical thinking skills and promote students' appreciation for the challenges involved in the scientific process. Furthermore, the students benefited from the exercise in two other major ways. First, students read more papers and got exposed to the scientific process (which was not done so extensively in the previous iterations of the class). Second, the unconstrained component of the project, which students built on top of the replicated papers, was in many cases more technically advanced and meaningful than what students do in a fully unconstrained project in previous iterations, according to the instructors' experience.
**Benefits for science.** Second, we identified tangible ways in which the scientific communities, directly and indirectly, benefit from this and similar efforts. The direct benefit is reflected in the produced artifacts: 354 online repositories, each with the replication report, a website with students' creative extensions (88 extension projects done in groups of 4), and a video presentation.7 Indirectly, teaching students to do data analysis replications can increase the overall number of conducted replications. Further indirect benefits include a potential shift of norms and incentives if the auditing paradigm becomes more prevalent. If researchers are aware of large data analysis replication attempts and more replications are done, more attention may be paid to reproducibility in the future.
Footnote 7: [https://dlab.epfl.ch/teaching/fall2020/ca401/reports/](https://dlab.epfl.ch/teaching/fall2020/ca401/reports/)
### Costs and necessary considerations for the educators
We now report insights and further "lessons learned" that can be useful to other educators, with a particular emphasis on the necessary considerations in order to integrate data analysis replications into a class. The outlined points are based on the instructor and assistants' experiences and discussions, and students' anonymous feedback.

Table 3: **Challenges students encountered, separately by paper (exploratory analysis).** For papers where more than 10% of students did not manage to replicate at all any of the assigned figures, the frequency of self-reported outcomes across students. For each paper, specific shared challenges that the students identified in their open-form responses when asked to explain the discrepancies between the original result and their result, and to speculate as to why the differences arose. Note that qualitative replication is the most frequent self-reported outcome for each of the figures.

| Paper | Figure | Replicated exactly | Replicated qualitatively | Did not replicate |
| --- | --- | --- | --- | --- |
| Muchlinski et al. [33] | Fig. 2 | 3.7% | 96.3% | 0% |
| Muchlinski et al. [33] | Fig. 4 | 0% | 55.56% | 44.44% |

Shared challenges reported for Muchlinski et al. [33]: random forest parameters and random seed are not stated in the paper; the specific feature importance metric is not explained in detail.
Although data analysis replications can have clear benefits for students and for science, integrating them into an existing course is challenging. Based on our study, we highlight these major considerations:
1. Logistics: When designing and conducting in-class data analysis replications, it is necessary to carefully re-evaluate and implement changes in the order in which the concepts are taught throughout the semester, since replicating data analyses requires specific skills (such as statistical tests, regression modeling, or counting items). One has to ensure that at the time when students start working on it, they have the required knowledge, which can lead to tradeoffs. In the study, in addition to modifying the class schedule, we carefully reconsidered other logistical aspects of the class, including team size and student assignment to projects and advisors.
2. Human resources: In-class data analysis replication activities may require additional human resources. In our class (with \(N=354\) consenting students), two teaching assistants dedicated half-time of their teaching assistantship to coordinating the project component of the course, as part of which the replication analysis was conducted. This amounted to around eight hours per week. Additionally, around 30 students were assigned to each teaching assistant. The teaching assistants provided ongoing support specifically to the replicated paper throughout the semester, as well as performed grading, troubleshooting, technical support, and performed data analysis replications in preparation for the class.
3. Added constraints: We note that, if implemented as part of a standard component of a class (e.g., project or homework), data analysis replications may constrain the topics, as students cannot perform a project of choice, but have to build on top of the data analysis replication. Additionally, student level needs to be considered, and the activity designed to be appropriate. Our course had the following prerequisites, which turned out to be sufficient for successfully performing the replications: the students must have passed an introduction to databases course, a course in probability and statistics, or two separate courses that include programming projects. Also, programming skills were required (in class, we mostly used Python).
4. Ethical challenges: Data analysis replication activities call for ethical consideration. First, we had doubts about assigning students to papers that we knew were likely not to replicate at all, because we did not want to give students tasks we knew were unlikely to succeed. The ethical issue of potentially knowingly exposing students to stress and frustration limits the pool of paper candidates. Second, since the replication activity takes students more time than expected, instructors should carefully plan the course timeline and communicate the expected workload clearly to students, to avoid any stress and frustration.
5. Grading: We advise caution when assigning multiple papers within the same class. The selection of papers such that they are of comparable difficulty with regard to reproducibility is challenging, given that there are many paths one could take during a data analysis, and students are bound to face challenges that were not anticipated [32]. Students are sensitive to a perceived uneven workload across teams, might prioritize performance in class, and in other ways feel that it is unfair that there is variance across teams in the amount of time they had to spend. In our study, this was the only aspect of the data analysis replication activity that the students reflected on negatively in the anonymous feedback. Alternatively, a single class-wide project would address the issue of an uneven workload, but might not fit specific students' interests.
### Limitations
Scientific studies of the in-class replication activities need to be designed carefully in order to allow deriving useful insights. In our study, measurements before and after the activity were contrasted, without randomly assigning students to the experimental conditions. Randomized assignment to the replication activity vs. another type of data analysis activity was considered but ruled out due to ethical challenges and to avoid student frustration. On the contrary, self-selection into a condition (replication vs. standard data analysis) would introduce biases and was hence also ruled out. Ultimately, it is challenging to disentangle the educational benefits of a data analysis replication task from the educational benefits of another comparable data analysis task. Nonetheless, carefully designed studies, such as our design with cross-sectional longitudinal comparisons (Sec. 2), can help tease out the impact within a set of students who are all performing the replication activity.
### Future work
Our study opens the door for a number of future directions aiming to understand how to conduct in-class replication activities. First, exploratory analyses of students' perceptions of the ability to reproduce revealed a tension between attributing inconsistencies either to the authors' mistakes or the student's mistakes due to a perceived lack of skill. This raises an interesting follow-up question of determining the moment when a replication attempt can be considered complete and a student can stop performing the exercise, as opposed to assuming the inconsistencies can be attributed to the students' (lack of) skills or mistakes. It remains unclear: what is considered a sufficient and satisfactory time investment? How can we avoid having students commit an unlimited amount of time to unproductive replication attempts?
One proposed solution may involve providing students with a limited number of submissions to a platform for correctness checks. This can involve allocating a "budget" with the number of attempts submitted to a platform that evaluates the data analysis results (similar to leaderboards where participants submit the predictions on a test set for evaluation to a platform). This approach would, however, require the instructors to know the correct results of the data analysis a priori, which would in turn defeat the power of data analysis replications to serve as a detector of false results in published papers.
Second, future efforts should consider building a crowdsourced cohort of university students to standardize and unify similar efforts [44]. Such efforts to re-design undergraduate courses for reproducibility and collaboration across institutes can result in fostering open science [9].
Third, our study was based on ten pre-selected publications tested in advance. In the future, we envision development of an auditing paradigm where classrooms are fundamentally integrated into the scientific process to evaluate comprehensive samples of published scientific findings, beyond the carefully selected pool used here.
Finally, future research integrating tools to support replication attempts is called for, including the usage of software containers, cloud computing, and checkpoints. These tools make it possible to standardize the computing environment around each submission [19, 27]. Standardizing the computing environment becomes particularly relevant in the age of closed-access large language models increasingly used as part of data analysis and modeling pipelines.
### Conclusion
Our study explores the paradigm of in-class data analysis replications with a double purpose: to teach students while testing science. We show that incorporating replication tasks into the project component of a large data science class has the potential to establish and increase the reproducibility of scientific work as a natural by-product of data science instruction. The fact that a vast majority of students reported having replicated exactly or qualitatively the assigned figures implies the feasibility of this approach, at a relatively low cost for the instructors of classes that include a data analysis component. We additionally demonstrated how this approach can have tangible benefits for students and the scientific communities. We hope this paper will inspire other instructors to consider including data analysis replications in the syllabus of their classes.
### Ethics
This study was approved by the EPFL Human Research Ethics Committee. We obtained consent for using the produced materials and survey responses for conducting research. Students were able to opt out of their data being analyzed. Students were provided with an information sheet (Methods, "Information sheet for students"). The analyzed data is anonymized and can be accessed only by researchers participating in the project. Furthermore, the students were informed that any survey analyses would be conducted only after the class had already finished and the grades had been formed.
|
2309.09840 | Mobility Performance Analysis of RACH Optimization Based on Decision
Tree Supervised Learning for Conditional Handover in 5G Beamformed Networks | In 5G cellular networks, frequency range 2 (FR2) introduces higher
frequencies that cause rapid signal degradation and challenge user mobility. In
recent studies, a conditional handover procedure has been adopted as an
enhancement to baseline handover to enhance user mobility robustness. In this
article, the mobility performance of conditional handover is analyzed for a 5G
mm-wave network in FR2 that employs beamforming. In addition, a
resource-efficient random access procedure is proposed that increases the
probability of contention-free random access during a handover. Moreover, a
simple yet effective decision tree-based supervised learning method is proposed
to minimize the handover failures that are caused by the beam preparation phase
of the random access procedure. Results have shown that a tradeoff exists
between contention-free random access and handover failures. It is also seen
that the optimum operation point of random access is achievable with the
proposed learning algorithm for conditional handover. Moreover, a mobility
performance comparison of conditional handover with baseline handover is also
carried out. Results have shown that while baseline handover causes fewer
handover failures than conditional handover, the total number of mobility
failures in the latter is less due to the decoupling of the handover
preparation and execution phases. | Subhyal Bin Iqbal, Umur Karabulut, Ahmad Awada, Andre Noll Barreto, Philipp Schulz, Gerhard P. Fettweis | 2023-09-18T14:57:40Z | http://arxiv.org/abs/2309.09840v1 | Mobility Performance Analysis of RACH Optimization Based on Decision Tree Supervised Learning for Conditional Handover in 5G Beamformed Networks
###### Abstract
In 5G cellular networks, frequency range 2 (FR2) introduces higher frequencies that cause rapid signal degradation and challenge user mobility. In recent studies, a conditional handover procedure has been adopted as an enhancement to baseline handover to enhance user mobility robustness. In this article, the mobility performance of conditional handover is analyzed for a 5G mm-wave network in FR2 that employs beamforming. In addition, a resource-efficient random access procedure is proposed that increases the probability of contention-free random access during a handover. Moreover, a simple yet effective decision tree-based supervised learning method is proposed to minimize the handover failures that are caused by the beam preparation phase of the random access procedure. Results have shown that a tradeoff exists between contention-free random access and handover failures. It is also seen that the optimum operation point of random access is achievable with the proposed learning algorithm for conditional handover. Moreover, a mobility performance comparison of conditional handover with baseline handover is also carried out. Results have shown that while baseline handover causes fewer handover failures than conditional handover, the total number of mobility failures in the latter is less due to the decoupling of the handover preparation and execution phases.
5G cellular networks, beamforming, conditional handover, contention-free random access, frequency range 2, handover failures, mobility performance, supervised learning.
## I Introduction
In 5G cellular networks, the demand for user data throughput is poised to increase dramatically [1]. To address this, the range of carrier frequencies has been further increased to frequency range 2 (FR2) [2] to fulfill this ever-increasing demand. However, operating in FR2 challenges user mobility due to the higher free-space path loss and penetration loss which can lead to rapid signal degradation [3]. Moreover, the dense BS deployment in 5G networks increases the total number of handovers, which can cause frequent interruption of the user equipment (UE) connection and increase the signaling overhead [3].
Baseline handover (BHO) is a handover procedure used in Long Term Evolution. It has also been reused for 5G networks in the 3rd Generation Partnership Project (3GPP) _Release 15_[2, 4]. In BHO, the time instant for triggering the handover is critical because the signal of the serving cell should be good enough to receive the handover command and the signal of the target cell should be sufficient for access. This is more pronounced in FR2 due to the rapid signal degradation and dense BS deployment.
Conditional handover (CHO) has been introduced in [5] for 5G New Radio _3GPP Release 16_ as a handover mechanism to increase the mobility robustness of BHO. In CHO, the coupling between handover preparation and execution is removed by introducing a conditional procedure, whereby the handover is prepared early by the serving cell and access to the target cell is performed only later when its radio link is sufficient. A contention-free random access (CFRA) procedure has already been defined in [2]. Here the target cell of the handover can allocate CFRA resources for the UE during the handover. Using CFRA instead of contention-based random access (CBRA) resources helps to avoid collision in random access and consequently reduces mobility interruption time and signaling overhead [6, 7].
The authors in [8] have proposed a scheme to reduce the number of CFRA attempts during a handover but it focuses on using an additional measurement report between the CHO preparation and execution phases to update the CFRA resources for specific beams. In the first contribution of this article, a resource-efficient random access channel (RE-RACH) procedure is proposed as a simple yet powerful enhancement to the 3GPP RACH model [2] such that the utilization of CFRA resources is increased. The authors in [9] have also proposed a scheme to reduce HOFs in a mobile environment by using different machine learning techniques including decision tree-based supervised learning but they consider a non-beamformed 5G system with BHO. In the second contribution of this article, a beam-specific enhanced logging and learning (BELL) approach based on a decision tree-based supervised learning algorithm [10] is proposed. The BELL approach is shown to decrease handover failures (HOFs) by using a learning approach to avoid HOFs caused by wrong beam preparation during the beam preparation phase of the random access procedure. A reduction in HOFs also means unnecessary RACH attempts can be avoided and therefore resource utilization can be improved. To the best of the authors' knowledge, both these enhancements are novel and their mobility performance in 5G cellular beamformed
networks has not been investigated in literature before. Later in the article, the RE-RACH and BELL approaches are combined to study the tradeoff between CFRA resource utilization and HOFs and furthermore optimize the mobility performance.
## II UE Measurements and Handover Models
In 5G cellular networks it is necessary to hand over the link of a UE between cells to sustain the user connection with the network. This handover is performed using received signal power measurements at the UE from the serving and neighboring cells and by following a predefined handover procedure. In this section, BHO and CHO procedures are reviewed along with the relevant UE measurements for mobility.
### _UE Measurements in 5G Beamformed Networks_
A UE \(u\) in the network monitors the reference signal received power (RSRP) \(P_{c,b}^{\text{RSRP}}(n)\) (in dBm) at discrete time instant \(n\) for beams \(b\in B\) of cell \(c\in C\), using the synchronization signal block (SSB) bursts that are transmitted by the base station (BS). \(u\) is left out in the subscript here and in future instances for simplicity. The separation between the time instants is given by \(\Delta t\) ms. The raw RSRP measurements are inadequate for handover decisions since those measurements fluctuate rapidly over time due to fast fading and measurement errors which would lead to unstable handover decisions. To mitigate these channel impairments, the UE applies a moving average layer-1 (L1) filter and an infinite impulse response (IIR) layer-3 (L3) filter sequentially to the RSRP measurements. The implementation of L1 filtering is not specified in 3GPP standardization and it is UE specific, i.e., it can be performed either in linear or dB domain. The L1 filter output can be expressed as
\[P_{c,b}^{\text{L1}}(m)=\frac{1}{N_{\text{L1}}}\sum_{\kappa=0}^{N_{\text{L1}}- 1}P_{c,b}^{\text{RSRP}}(m-\kappa),\ m=n\omega \tag{1}\]
where \(\omega\in\mathbb{N}\) is the L1 measurement period normalized by time step duration \(\Delta t\), and \(N_{\text{L1}}\) is the number of samples that are averaged in each L1 measurement period. For cell quality derivation of cell \(c\), the strongest set \(B_{c}^{\text{str}}\) of beams having measurements above the threshold \(P_{\text{thr}}\) is determined by the UE as
\[B_{c}^{\text{str}}(m)=\{b\ |\ P_{c,b}^{\text{L1}}(m)>P_{\text{thr}}\}\] (2a) subject to \[P_{c,b_{i}}^{\text{L1}}(m)>P_{c,b_{j}}^{\text{L1}}(m),\ \forall b_{i}\in B_{c}^{\text{str}},\ \forall b_{j}\in B\setminus B_{c}^{\text{str}}, \tag{2b}\] \[|B_{c}^{\text{str}}|\leq N_{\text{str}},\ N_{\text{str}}\in \mathbb{N}^{+}. \tag{2c}\]
The cardinality of the set is denoted by \(|\cdot|\) and \(N_{\text{str}}\) is the maximum number of beams that are accounted for cell quality derivation. L1 RSRP measurements of beams \(b\in B_{c}^{\text{str}}\) are averaged to derive the L1 cell quality of cell \(c\) as
\[P_{c}^{\text{L1}}(m)=\frac{1}{|B_{c}^{\text{str}}(m)|}\sum_{b\in B_{c}^{\text {str}}(m)}P_{c,b}^{\text{L1}}(m). \tag{3}\]
If \(B_{c}^{\text{str}}(m)\) is empty, \(P_{c}^{\text{L1}}(m)\) is equal to the highest \(P_{c,b}^{\text{L1}}(m)\).
L1 cell quality is further smoothed by L3 filtering and L3 cell quality output is derived by the UE as
\[P_{c}^{\text{L3}}(m)=\alpha P_{c}^{\text{L1}}(m)+(1-\alpha)P_{c}^{\text{L3}} (m-\omega), \tag{4}\]
where \(\alpha=\left(\frac{1}{2}\right)^{\frac{k}{4}}\) is the forgetting factor that controls the impact of older measurements \(P_{c}^{\text{L3}}(m-\omega)\) and \(k\) is the filter coefficient of the IIR filter [2].
Similarly, the L3 beam measurement \(P_{c,b}^{\text{L3}}(m)\) of each beam is evaluated by L3 filtering of L1 RSRP beam measurements
\[P_{c,b}^{\text{L3}}(m)=\alpha^{\prime}P_{c,b}^{\text{L1}}(m)+(1-\alpha^{ \prime})P_{c,b}^{\text{L3}}(m-\omega), \tag{5}\]
where \(\alpha^{\prime}\) can be configured separately from \(\alpha\).
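As a concrete illustration of the measurement pipeline in Eqs. (1)-(5), the following NumPy sketch chains the moving-average L1 filter, the strongest-beam cell-quality derivation, and the IIR L3 smoothing. Averaging is done in the dB domain for simplicity (the text notes that the L1 domain is UE-specific), and all numeric values (\(N_{\text{L1}}\), \(P_{\text{thr}}\), \(N_{\text{str}}\), \(k\), and the raw RSRP samples) are assumptions for the example rather than the paper's configuration.

```python
import numpy as np

def l1_filter(rsrp_dbm, n_l1):
    """Eq. (1): average of the last n_l1 raw RSRP samples (dB domain here)."""
    return np.mean(rsrp_dbm[-n_l1:])

def l1_cell_quality(l1_beams, p_thr, n_str):
    """Eqs. (2)-(3): average the strongest beams above p_thr (at most n_str)."""
    strongest = np.sort(l1_beams)[::-1][:n_str]
    above = strongest[strongest > p_thr]
    return above.mean() if above.size else l1_beams.max()

def l3_filter(l1_now, l3_prev, k):
    """Eqs. (4)-(5): IIR smoothing with forgetting factor alpha = (1/2)^(k/4)."""
    alpha = 0.5 ** (k / 4)
    return alpha * l1_now + (1 - alpha) * l3_prev

# Example: one cell with 12 beams, one L1 period of 5 raw samples per beam.
rng = np.random.default_rng(0)
raw = -90 + 5 * rng.standard_normal((12, 5))         # 12 beams x N_L1 samples
l1_beams = np.array([l1_filter(b, 5) for b in raw])  # per-beam L1 RSRP
p_cell_l1 = l1_cell_quality(l1_beams, p_thr=-95, n_str=4)
p_cell_l3 = l3_filter(p_cell_l1, l3_prev=p_cell_l1, k=4)  # L3 state initialized to L1
```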
### _Baseline Handover_
L3 cell quality measurements \(P_{c}^{\text{L3}}(m)\) are used to assess the quality of the radio links between the UE and its serving and neighboring cells. To this end, the UE reports the L3 cell quality measurements \(P_{c}^{\text{L3}}(m)\) and beam measurements \(P_{c,b}^{\text{L3}}(m)\) to its serving cell \(c_{0}\) if the following _A3_ condition
\[P_{c_{0}}^{\text{L3}}(m)+o_{c_{0},c}^{\text{A3}}<P_{c}^{\text{L3}}(m)\ \ \text{for}\ \ m_{0}-T_{\text{TTT}}^{\text{A3}}<m<m_{0}, \tag{6}\]
expires at time instant \(m=m_{0}\) for any neighboring cell \(c\neq c_{0}\). The cell-pair specific handover offset \(o_{c_{0},c}^{\text{A3}}\) can be configured differently by \(c_{0}\) for each neighboring cell \(c\) and the time-to-trigger \(T_{\text{TTT}}^{\text{A3}}\) is the observation period of condition (6) before a measurement report is initiated by \(c_{0}\).
After receiving L3 cell quality measurements from the UE, the serving cell \(c_{0}\) sends a _Handover Request_ to the target cell \(c_{\text{T}}\), typically the strongest cell, along with the L3 beam measurements \(P_{c,b}^{\text{L3}}(m)\). Thereafter, the target cell reserves CFRA resources (preambles) for beams \(b\in B_{c_{\text{T}}}^{\text{prep}}\) with the highest power based on reported \(P_{c,b}^{\text{L3}}(m)\). The target cell \(c_{\text{T}}\) prepares the _Handover Request Acknowledgement_, which includes the reserved CFRA resources and sends it to the serving cell. Thereafter, the serving cell sends the _Handover Command_ to the UE. This command includes the target cell configuration and CFRA preambles that are reserved by the target cell \(c_{\text{T}}\). Upon reception, the UE detaches from the serving cell and initiates random access toward the target cell.
In this handover scheme, the radio link between UE and the serving cell should be good enough to send the measurement report in the uplink and receive the handover command in the downlink. In addition, the radio link quality between the UE and the target cell should also be sufficient so that the signaling between UE and the target cell is sustained during the RACH procedure. Herein, the link quality conditions for a successful handover between the serving cell \(c_{0}\) and target cell \(c_{\text{T}}\) are expressed as
\[\gamma_{c_{0},b}(m_{0})>\gamma_{\text{out}}, \tag{7a}\] \[\gamma_{c_{0},b}(m_{0}+T_{\text{p}})>\gamma_{\text{out}}, \tag{7b}\] \[\gamma_{c_{\text{T}},b}(m_{0}+T_{\text{p}})>\gamma_{\text{out}}, \tag{7c}\]
where \(\gamma_{c,b}(m)\) is the SINR of the link between the UE and the beam \(b\) of cell \(c\). The time instant \(m_{0}\) is when the
measurement report is sent by the UE and \(T_{\text{p}}\) is the handover preparation time, i.e., the time delay between the UE sending the measurement report and receiving the handover command. \(\gamma_{\text{out}}\) is the SINR threshold that is required for maintaining radio communication between UE and the network.
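The following minimal sketch shows the offset-plus-time-to-trigger logic behind the A3 condition in Eq. (6): the measurement report is sent only after the entering condition has held for an uninterrupted \(T_{\text{TTT}}\) window. The sampling granularity, offset, and RSRP traces are illustrative assumptions, not the simulator's implementation.

```python
# Generic "offset + time-to-trigger" check used by the A3 event in Eq. (6).
def a3_triggered(p_serv_l3, p_neigh_l3, offset_db, ttt_samples):
    """p_*_l3: lists of L3 cell qualities, one per measurement period (dBm)."""
    streak = 0
    for p_s, p_n in zip(p_serv_l3, p_neigh_l3):
        streak = streak + 1 if p_s + offset_db < p_n else 0
        if streak >= ttt_samples:
            return True   # measurement report is sent at this instant
    return False

# Neighbor overtakes the serving cell for long enough -> report triggered.
serv  = [-88, -89, -90, -91, -92, -93]
neigh = [-92, -90, -88, -86, -84, -82]
print(a3_triggered(serv, neigh, offset_db=2, ttt_samples=3))  # True
```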
### _Conditional Handover_
In CHO, the handover preparation and execution phases are decoupled. This helps to receive the handover command safely from the serving cell by triggering the handover early and to access the target cell later when its radio link is sufficient.
Similar to the A3 condition in (6), an _Add_ condition is defined as
\[P_{c_{0}}^{\text{L3}}(m)+o_{c_{0},c}^{\text{add}}<P_{c}^{\text{L3}}(m)\ \ \text{for}\ \ m_{0}-T_{\text{TTT}}^{\text{add}}<m<m_{0}, \tag{8}\]
where \(o_{c_{0},c}^{\text{add}}\) is defined as the _Add_ condition offset. The UE sends the measurement report to serving cell \(c_{0}\) at \(m=m_{0}\) if the _Add_ condition is fulfilled for \(T_{\text{TTT}}^{\text{add}}\) seconds. Then, the serving cell \(c_{0}\) sends the _Handover Request_ to the target cell \(c_{\text{T}}\) for the given UE. The preparation of the handover is performed as in BHO, where the target cell reserves CFRA RACH resources for the UE and sends the _Handover command_ to the UE via the serving cell. Unlike BHO, however, the UE does not detach from the serving cell immediately and initiates the RACH process towards the target cell. Instead, the UE continues measuring the RSRP measurements from the neighboring cells and initiates the random access only when the _Execution_ condition expires at time instant \(m_{1}\)
\[P_{c_{0}}^{\text{L3}}(m)+o_{c_{0},c_{\text{T}}}^{\text{exec}}<P_{c_{\text{T}}} ^{\text{L3}}(m)\ \ \text{for}\ \ m_{1}-T_{\text{TTT}}^{\text{exec}}<m<m_{1}. \tag{9}\]
where \(m_{1}>m_{0}\). The _Execution_ condition offset \(o_{c_{0},c_{\text{T}}}^{\text{exec}}\) is configured by the serving cell and forwarded to the UE in the handover command along with CFRA resources reserved by the target cell.
It is observed that lower \(o_{c_{0},c_{\text{T}}}^{\text{add}}\) values lead to early preparation of the target cell and reservation of the RACH preambles, which ensures that the UE can send the measurement report and receive the handover command successfully (cf. (7a) and (7b)). Besides, unlike BHO, a lower \(o_{c_{0},c_{\text{T}}}^{\text{add}}\) does not lead to any early RACH attempt of the UE towards the target cell, since the random access is initiated only when the _Execution_ condition is fulfilled. Higher offset \(o_{c_{0},c_{\text{T}}}^{\text{exec}}\) values cause the UE to perform random access late enough such that it is more likely that the SINR \(\gamma_{c_{\text{T}},b}(m)\) is above \(\gamma_{\text{out}}\) (cf. (7c)).
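Reusing the a3_triggered() helper from the sketch above, the decoupling of CHO preparation and execution in Eqs. (8)-(9) can be illustrated as follows; the offsets and time-to-trigger lengths are placeholders rather than the paper's configuration.

```python
# Sketch of the decoupled CHO logic in Eqs. (8)-(9): the Add condition gates
# the preparation (Handover Request and CFRA reservation), while random access
# towards the target starts only once the Execution condition is also met.
def cho_state(p_serv_l3, p_cand_l3, o_add, o_exec, ttt_add, ttt_exec):
    prepared = a3_triggered(p_serv_l3, p_cand_l3, o_add, ttt_add)
    executed = prepared and a3_triggered(p_serv_l3, p_cand_l3, o_exec, ttt_exec)
    return prepared, executed

serv  = [-85, -86, -87, -88, -89, -90, -91]
neigh = [-90, -88, -86, -85, -84, -83, -82]
print(cho_state(serv, neigh, o_add=-2, o_exec=3, ttt_add=2, ttt_exec=2))  # (True, True)
```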
## III RACH Procedure in 5G Beamformed Networks
In this section, the basics of random access are discussed. Then, the 3GPP RACH procedure used in 5G networks [2] is described and our proposed RACH procedure is introduced.
### _Contention-free and Contention-based Random Access_
Random access can be described as the first signaling performed by a UE for establishing the synchronization with a cell during a handover procedure. The UE initiates random access by sending a RACH preamble to the target cell. However, it is possible that multiple UEs use the same preamble during random access towards the same transmit beam of a target cell. In this case, RACH collision occurs, which means that the UE needs to re-transmit the RACH preamble. This results in additional signaling and handover interruption time [6, 7]. The type of random access where a UE selects one preamble out of a set that is common for all UEs in the network is called CBRA.
During a handover, the collision risk can be avoided by assigning dedicated preambles to each UE to be used towards a prepared transmit beam \(b\in B_{c_{\text{T}}}^{\text{prep}}\) of the target cell \(c_{\text{T}}\). The network identifies the UE signal without further signaling and handover interruption time if the UE accesses the prepared beam using the assigned dedicated preamble. This type of random access is called CFRA. Based on [2], \(B_{c_{\text{T}}}^{\text{prep}}\) can be defined as the set of prepared beams with the strongest RSRP for UE \(u\) and can be formulated as
\[B_{c_{\text{T}}}^{\text{prep}}=\{b|P_{c_{\text{T}},b}^{\text{L3}}(m_{0})\geq P _{c_{\text{T}},b_{\text{i}}}^{\text{L3}}(m_{0}),\ b\neq b_{i},\ b,b_{i}\in B_{c_ {\text{T}}}\}\] (10a) subject to \[|B_{c_{\text{T}}}^{\text{prep}}|=N_{\text{B}} \tag{10b}\]
where \(N_{\text{B}}\) is the number of beams prepared for random access.
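A minimal sketch of the beam preparation rule in Eq. (10), assuming the reported L3 beam measurements are available as a simple mapping; the RSRP values are invented.

```python
# Eq. (10): the target cell reserves dedicated (CFRA) preambles for the N_B
# beams with the strongest reported L3 beam measurements.
def prepared_beams(l3_beam_rsrp, n_b):
    """l3_beam_rsrp: {beam index: L3 RSRP in dBm}; returns the prepared set."""
    ranked = sorted(l3_beam_rsrp, key=l3_beam_rsrp.get, reverse=True)
    return set(ranked[:n_b])

print(prepared_beams({1: -88.0, 2: -95.5, 3: -84.2, 4: -99.0}, n_b=2))  # beams 1 and 3
```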
### _Access Beam and Preamble Selection_
During a handover, accessing the target cell by using a dedicated CFRA preamble is preferable due to lower handover interruption time and signaling requirements than for CBRA. Although a set of beams \(b\in B_{c_{\text{T}}}^{\text{prep}}\) of the target cell \(c_{\text{T}}\) with the strongest L3 beam quality measurements \(P_{b,c_{\text{T}}}^{\text{L3}}\) can be prepared with CFRA resources, measurements of those beams may vary between the preparation time instant \(m=m_{0}\) and access time \(m=m_{1}\) due to the decoupling between the phases. The variation of beam measurements is more significant in CHO as compared to BHO. This is because in BHO the elapsed time between the preparation and execution phases is given by the handover preparation time \(T_{\text{p}}\) in (7c). However, in CHO there is an additional period of time \(T_{\text{f}}\) until the _Execution_ condition in (9) is fulfilled after receiving the handover command from the serving cell \(c_{0}\), which itself follows the CHO preparation given in (8). The time period \(T_{\text{f}}\) depends on the serving and target cell RSRPs and can be as large as \(10\,\text{s}\)[8], which therefore makes it much larger than \(T_{\text{p}}=50\,\text{ms}\).
Due to this temporal variation of beam measurements, the access beam is selected based on measurements at time instant \(m=m_{1}\) of CHO execution. This is illustrated in Fig. 1. Herein, the UE selects the access beam \(b_{0}\in B_{c_{\text{T}}}^{\text{prep}}\) as follows
\[b_{0}=\operatorname*{arg\,max}_{b\in B_{c_{\text{T}}}^{\text{prep}}}P_{c_{ \text{T}},b}^{\text{L1}}(m_{1}),\ \text{if}\ P_{c_{\text{T}},b_{0}}^{\text{L1}}(m_{1})>\xi_{\text{ access}}, \tag{11}\]
where \(\xi_{\text{access}}\) is the threshold that L1 RSRP beam measurements should exceed to consider prepared beams for access. Ultimately, the UE accesses the prepared beam \(b_{0}\) that satisfies the condition in (11) and uses the corresponding CFRA preamble. If none of the measurements \(P_{b,c_{\text{T}}}^{\text{L1}}\) of beams \(b\in B_{c_{\text{T}}}^{\text{prep}}\) is above the threshold \(\xi_{\text{access}}\), the UE simply selects any beam
\(b_{0}\) (as opposed to just the prepared beams in the set \(B_{\mathrm{cr}}^{\text{prep}}\) in (11)) with the strongest L1 RSRP beam measurement as
\[b_{0}=\operatorname*{arg\,max}_{b\in B_{\mathrm{cr}}}P_{\mathrm{cr},b}^{\text{L1} }(m_{1}). \tag{12}\]
In 3GPP standardization [2], CBRA preambles are used if none of the L1 RSRP measurement of prepared beams is above the threshold \(\xi_{\text{access}}\). This carries the disadvantage that the UE may select CBRA resources although there may be CFRA resources associated with the selected strongest beam. To address this issue, a resource-efficient RACH (RE-RACH) scheme is proposed to increase the CFRA utilization. This enhancement is shown in green in Fig. 1. Herein, the UE uses CFRA resources if the selected beam happens to be a prepared beam \(b_{0}\in B_{\mathrm{cr}}^{\text{prep}}\), even if the L1 RSRP beam measurement \(P_{\mathrm{cr},b_{0}}^{\text{L1}}\) is below the threshold \(\xi_{\text{access}}\). This will lead to less signaling and handover interruption during the RACH procedure. The UE selects either CFRA or CBRA preambles by following either of the two RACH procedures shown in Fig. 1 and attempts to access the target cell with the selected preamble. If the random access fails, the UE repeats the preamble selection process and declares a HOF after a predefined number of attempts. This is followed by a re-establishment process in which the UE searches for a new serving cell to be connected to.
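The beam and preamble selection of Eqs. (11)-(12) and Fig. 1 can be sketched as below. The re_rach flag switches between the 3GPP fallback and the proposed RE-RACH behaviour of using CFRA whenever the overall strongest beam happens to be prepared; the measurement values, threshold, and data layout are assumptions made for the example.

```python
# Sketch of the access-beam / preamble selection in Eqs. (11)-(12) and Fig. 1.
def select_beam_and_preamble(l1_rsrp, prepared, xi_access, re_rach=True):
    """l1_rsrp: {beam: L1 RSRP at execution time m1}, prepared: set of beams."""
    prep_meas = {b: p for b, p in l1_rsrp.items() if b in prepared}
    if prep_meas and max(prep_meas.values()) > xi_access:
        best_prep = max(prep_meas, key=prep_meas.get)
        return best_prep, "CFRA"                       # Eq. (11)
    strongest = max(l1_rsrp, key=l1_rsrp.get)          # Eq. (12)
    if re_rach and strongest in prepared:
        return strongest, "CFRA"                       # RE-RACH enhancement
    return strongest, "CBRA"                           # 3GPP fallback

meas = {1: -102.0, 3: -99.5, 7: -104.0}                # all below xi_access
print(select_beam_and_preamble(meas, prepared={3, 7}, xi_access=-98.0))
# -> (3, 'CFRA') with RE-RACH; the same call with re_rach=False yields (3, 'CBRA')
```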
### _Beam-specific Enhanced Logging and Learning Approach_
As mentioned in Section III-B, the RSRP of the prepared beams \(b\in B_{\mathrm{cr}}^{\text{prep}}\) at time \(m=m_{0}\) can change as time elapses due to temporal characteristics of the wireless channel and user mobility. It could be that the RSRP of the prepared beam with the highest RSRP might not be strong enough for successful random access when the access procedure is started at \(m=m_{1}\) and afterward. As will be seen later in Section V-B, HOFs are primarily dependent on the radio link quality, and their probability can increase if the quality of the radio link between the UE and the prepared beam is not good enough.
In this article, we propose a RACH optimization scheme based on a supervised learning algorithm [10] that aims to learn preparing the correct transmit beam so that the HOFs caused by wrong beam preparation in (10a) during the RACH process are minimized. We term it beam-specific enhanced logging and learning (BELL). BELL classifies the HOF events into sub-events by following a pre-defined decision tree. This is succeeded by an assessment mechanism that reacts to the inferences (classes), either by rewarding or penalizing the network decision on the prepared beam in a way that the decisions leading to successful handovers are encouraged and decisions leading to HOFs are discouraged. The decision tree used in the BELL approach is illustrated in Fig. 2. It applies the root-cause analysis to classify the HOF events. A handover attempt that is followed by the HOF event could be classified as one of four unique classes, which will be explained next.
If the HOF is followed by a re-establishment process that fails, the UE will be unconnected to the network until further connection to a new cell. This type of HOF event is classified as a "coverage hole". Although the UE failed to hand over to the target cell \(c_{\text{T}}\), it may connect to cell \(c_{0}^{\prime}=c_{\text{T}}\) after the re-establishment process through a beam \(b_{0}^{\prime}\not\in B_{\mathrm{cr}}^{\text{prep}}\) other than the prepared one, which is then interpreted as "wrong beam prepared" (shown in orange). This type of HOF could be avoided if a handover attempt through the wrong beam \(b_{0}\in B_{\mathrm{cr}}^{\text{prep}}\) is avoided and the preparation of \(b_{0}^{\prime}\) is motivated. The UE may also re-establish on the serving cell \(c_{0}\), i.e., \(c_{0}^{\prime}=c_{0}\), which shows that the handover execution condition expired before the link between the target cell \(c_{\text{T}}\) and UE became good enough for successful handover. Therefore, it is classified as "early execution". Similarly, if the UE does not connect to target cell \(c_{\text{T}}\) after HOF but rather another cell, i.e., \(c_{0}^{\prime}\neq c_{\text{T}}\)
Fig. 1: A flow diagram of the random access procedure. The part shown in black is defined in 3GPP standard and the green part is the proposed enhancement for the random access procedure.
Fig. 2: Classification of the HOFs with a decision tree that applies the root-cause analysis on the HOFs. Classes identify the correctness of the decisions on the target cell, prepared beam, and execution time of the handover process along with network planning problems such as coverage holes. The BELL approach focuses on addressing HOFs caused by wrong beam prepared (shown in orange). Serving cell, target cell, re-establishment cell, prepared beam, and re-establishment beam are denoted by \(c_{0}\), \(c_{\text{T}}\), \(c_{0}^{\prime}\), \(b_{0}\) and \(b_{0}^{\prime}\) respectively.
it shows that the preparation of the target cell \(c_{\text{T}}\) was not accurate. This is classified as "wrong cell prepared".
It is already mentioned that this article is focused on the RACH optimization by minimizing the HOF caused by the preparation of the wrong beams defined in (10a). Therefore, the BELL approach reacts to the HOF events that are classified as "wrong beam prepared". To this end, a beam-specific preparation offset \(o_{c,b}^{\text{prep}}\) is proposed and equation (10a) is reformulated as
\[B_{c_{\text{T}}}^{\text{prep}} =\{b|P_{c_{\text{T}},b}^{\text{L3}}(m_{0})+o_{c_{\text{T}},b}^{ \text{prep}}\geq P_{c_{\text{T}},b_{i}}^{\text{L3}}(m_{0})+o_{c_{\text{T}},b_{i }}^{\text{prep}},\] \[b\neq b_{i},\ b,b_{i}\in B_{c_{\text{T}}}\}. \tag{13}\]
When a HOF event is classified as "wrong beam prepared", the beam-specific preparation offset \(o_{c,b}^{\text{prep}}\) of \(b\in B_{c_{\text{T}}}^{\text{prep}}\) is reduced by \(\Delta o\) dB to penalize the preparation of this beam, and the offset \(o_{c_{\text{T}},b_{0}^{\prime}}^{\text{prep}}\) of the re-established beam \(b_{0}^{\prime}\not\in B_{c_{\text{T}}}^{\text{prep}}\) is increased by \(\Delta o\) dB to leverage the preparation of the beam \(b_{0}^{\prime}\) in future events. Enabling such optimization is possible only if the network logs the target cell \(c_{\text{T}}\) and access beam \(b_{0}\) information so that the root-cause analysis is applied to classify the HOF events and proper preparation decisions can be learned by the network over time.
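A compact sketch of how the BELL reaction described above could be realized: the HOF is classified from the re-establishment outcome following the decision tree in Fig. 2, and the beam-specific preparation offsets of Eq. (13) are adjusted only for the "wrong beam prepared" class. The offset step \(\Delta o\), the data layout, and the handling of unlisted corner cases are assumptions, not the authors' implementation.

```python
# Classify a HOF from where the UE re-establishes (Fig. 2) and update the
# beam-specific preparation offsets of Eq. (13) for "wrong beam prepared".
def classify_hof(reestablished, c_serv, c_target, c_reest, b_reest, prepared):
    if not reestablished:
        return "coverage hole"
    if c_reest == c_target:
        return "wrong beam prepared" if b_reest not in prepared else "other"
    return "early execution" if c_reest == c_serv else "wrong cell prepared"

def bell_update(offsets, c_target, prepared, b_reest, delta_o=1.0):
    """offsets: {(cell, beam): o_prep in dB}, updated in place."""
    for b in prepared:                                   # penalize prepared beams
        offsets[(c_target, b)] = offsets.get((c_target, b), 0.0) - delta_o
    offsets[(c_target, b_reest)] = offsets.get((c_target, b_reest), 0.0) + delta_o
    return offsets

offsets = {}
label = classify_hof(True, c_serv=1, c_target=2, c_reest=2, b_reest=5, prepared={3, 8})
if label == "wrong beam prepared":
    bell_update(offsets, c_target=2, prepared={3, 8}, b_reest=5)
print(label, offsets)   # encourages beam 5, discourages beams 3 and 8 of cell 2
```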
The handover performance can be further optimized and HOF events that are classified as "early execution" and "wrong cell prepared" can be improved. This can be done by adjusting the cell-specific preparation and execution offsets \(o_{c_{0},c}^{\text{prep}}\) and \(o_{c_{0},c}^{\text{exec}}\), respectively, such that the early execution or wrong cell preparation can be discouraged towards a specific cell. However, as will be seen later in Section V-B, these classes are corner cases that are rarely observed compared to the case "wrong beam prepared".
## IV Simulation Scenario and Parameters
In this section, the simulation scenario is described along with the simulation parameters which are listed in Table I. The simulations have been performed in our proprietary MATLAB-based system-level simulator.
A 5G network based on the Madrid Grid layout described in the METIS 2 project [11] is considered in this study. The layout is given in Fig. 3 and consists of buildings (shown in grey), streets (shown in black), an open square (shown in blue), and a pedestrian area (shown in green). There are 33 three-sector macro cells which are located on the rooftops of the buildings (shown in red). The scenario involves mixed UE traffic and users are distributed into three different categories. 200 users are moving in the streets at \(30\) km/h in both directions. 40 pedestrian users are walking in the open square and 80 users are walking in the pedestrian area, both at \(3\) km/h where they walk straight in a random direction and bounce when they reach the area border. The channel model used [12] takes into account shadow fading due to large obstacles and assumes a soft line-of-sight for all radio links between the cells and UEs. Fast fading is taken into consideration through the low complexity channel model for multi-beam systems proposed in [13], which integrates the spatial and temporal characteristics of 3GPP's geometry-based stochastic channel model [12] into Jake's channel model. The transmitter (Tx)-side beamforming gain model is based on [13], where a 12-beam grid configuration is considered. Beams \(b\in\{1,\ldots,8\}\) have smaller beamwidth and higher beamforming gain to cover an area further apart from the BS. Whereas beams \(b\in\{9,\ldots,12\}\) with larger beamwidth and relatively smaller beamforming gain are defined to serve regions closer to the BS. The number of simultaneously scheduled beams per cell is taken as \(K_{b}=4\).
The average downlink SINR \(\gamma_{c,b}(m)\) of a link between the UE and beam \(b\) of cell \(c\) is evaluated by the approximation given in [15] for the strict resource fair scheduler, where all UEs get precisely the same amount of resources. This SINR is then used in deriving the HOF and radio link failure (RLF) models, each of which is discussed below.
_Handover Failure Model_: The HOF model is used to determine the failure of a UE to hand over from its serving cell \(c_{0}\) to its target cell \(c_{\text{T}}\). As shown in Fig. 1, for both 3GPP and the proposed RE-RACH procedure, the UE may decide to use either CBRA or CFRA preamble and attempt to access the selected prepared beam \(b_{0}\) of target cell \(c_{\text{T}}\) with the selected preamble. For successful random access, it is required that the SINR \(\gamma_{c_{\text{T}},b_{0}}(m)\) of the target cell remains above the threshold \(\gamma_{\text{out}}\) during the RACH procedure. A HOF timer \(T_{\text{HOF}}=500\) ms is started when the UE starts the random access and sends the RACH preamble. The RACH procedure in Fig. 1 is repeated until a successful RACH attempt is achieved or \(T_{\text{HOF}}\) expires. A UE may succeed in accessing the target cell only if \(\gamma_{c_{\text{T}},b_{0}}(m)\) exceeds the threshold \(\gamma_{\text{out}}\). In case the timer \(T_{\text{HOF}}\) expires and the UE fails to access the target cell, i.e., \(\gamma_{c_{\text{T}},b}<\gamma_{\text{out}}\), a HOF is declared. The UE then performs connection re-establishment, which results in additional signaling and handover interruption time [2].
_Radio Link Failure Model_: The RLF model is used to determine the failure of a UE while in its serving cell \(c_{0}\). The UE is prompted to start an RLF timer \(T_{\text{RLF}}=600\) ms when the SINR \(\gamma_{c_{0},b}(m)\) of the serving cell \(c_{0}\) falls below \(\gamma_{\text{out}}\). An RLF is declared if \(T_{\text{RLF}}\) expires. During this time, the UE may recover before declaring an RLF if the SINR \(\gamma_{c_{0},b}\) exceeds a second SINR threshold defined as \(\gamma_{\text{in}}=-6\) dB, where \(\gamma_{\text{in}}>\gamma_{\text{out}}\). A more detailed explanation of the procedure can be found in [2].
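For illustration, the RLF monitoring described above can be written as a small timer loop over SINR samples. \(\gamma_{\text{in}}=-6\) dB follows the text, while the value of \(\gamma_{\text{out}}\), the step duration, and the SINR traces are assumptions (the actual configuration is given in the paper's Table I).

```python
# Toy sketch of the RLF model: start T_RLF when the serving-link SINR drops
# below gamma_out, cancel it if the SINR recovers above gamma_in, and declare
# an RLF when the timer expires. gamma_out = -8 dB and a 100 ms evaluation
# step (so T_RLF = 600 ms -> 6 steps) are assumed values for this example.
def rlf_declared(sinr_db, gamma_out=-8.0, gamma_in=-6.0, t_rlf_steps=6):
    timer = None
    for step, sinr in enumerate(sinr_db):
        if timer is None and sinr < gamma_out:
            timer = step                       # start T_RLF
        elif timer is not None and sinr > gamma_in:
            timer = None                       # recovered before expiry
        if timer is not None and step - timer >= t_rlf_steps:
            return True                        # T_RLF expired -> RLF
    return False

print(rlf_declared([-5, -9, -9, -7, -5, -4, -3]))            # recovers -> False
print(rlf_declared([-5, -9, -10, -11, -9, -9, -9, -9, -9]))  # expires -> True
```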
## V Performance Evaluation
In this section, the mobility performance of the proposed RE-RACH procedure is compared with 3GPP RACH for both BHO and CHO. The performance analysis is then extended for the BELL approach. The key performance indicators (KPIs) used for comparison are explained below.
### _Mobility KPIs_
#### V-A1 CBRA Ratio (\(R_{\text{CBRA}}\))
The ratio of the total number of CBRA events to the sum of the total numbers of CBRA and CFRA events in the network, denoted as \(N_{\text{CBRA}}\) and \(N_{\text{CFRA}}\), respectively. The ratio is thus defined as
\[R_{\text{CBRA}}[\%]=\frac{N_{\text{CBRA}}}{N_{\text{CBRA}}+N_{\text{CFRA}}}\cdot 100\%. \tag{14}\]
#### V-A2 \(N_{\text{HOF}}\)
The total number of HOFs that are declared in the network, following the HOF model discussed in Section IV.
#### V-A3 \(N_{\text{RLF}}\)
The total number of RLFs that are declared in the network, following the RLF model discussed in Section IV.
Both \(N_{\text{HOF}}\) and \(N_{\text{RLF}}\) are normalized to the total number of UEs \(N_{\text{UE}}\) in the network per minute and expressed as UE/min.
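For reference, the three KPIs reduce to a few lines of Python once the event counts are available; the helper below is only illustrative and assumes that the event counts, the number of UEs, and the simulated duration in minutes are collected from the simulator.

```
def mobility_kpis(n_cbra, n_cfra, n_hof, n_rlf, n_ue, sim_minutes):
    """Compute the CBRA ratio of (14) and the failure counts normalized per UE per minute."""
    r_cbra = 100.0 * n_cbra / (n_cbra + n_cfra)    # R_CBRA [%]
    hof_per_ue_min = n_hof / (n_ue * sim_minutes)  # N_HOF  [UE/min]
    rlf_per_ue_min = n_rlf / (n_ue * sim_minutes)  # N_RLF  [UE/min]
    return r_cbra, hof_per_ue_min, rlf_per_ue_min
```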
### _Simulation Results_
The mobility performance of the 3GPP and RE-RACH procedures is investigated for both CHO and BHO. To this end, the impact of different beam access thresholds \(\xi_{\text{access}}\) and numbers of prepared beams \(N_{\text{B}}\) is analyzed in terms of the mobility KPIs discussed above.
#### V-B1 3GPP and RE-RACH Performance with CHO
It can be seen in Fig. 4(a) that for \(\xi_{\text{access}}=-\infty\) the ratio \(R_{\text{CBRA}}\) equals zero for all six cases under consideration, i.e., the UE uses only CFRA preambles during RACH. This is because the UE always selects a prepared beam from \(B_{c_{\text{T}}}^{\text{prep}}\) to perform random access towards the target cell \(c_{\text{T}}\), as given in (11). As per (12), for higher \(\xi_{\text{access}}\) values the UE tends to select prepared beams less frequently, which results in a more likely use of CBRA preambles for random access with the 3GPP approach. It is also clearly observed in Fig. 4(a) that the ratio of CBRA resource usage is much smaller for the RE-RACH procedure at higher values of \(\xi_{\text{access}}\). This is because the UE now has the option of using CFRA resources if none of the prepared beams have beam measurements above the threshold \(\xi_{\text{access}}\). On the other hand, it can be seen in Fig. 4(b) that the beam access threshold \(\xi_{\text{access}}=-\infty\) has the worst HOF performance for all six cases under consideration. This is because the RSRP of the prepared beam changes over time and the prepared beam does not always remain a good candidate during the time that elapses between the handover preparation and execution phases. Ultimately, the SINR \(\gamma_{c,b_{0}}(m)\) of the accessed beam \(b_{0}\), where \(m\geq m_{1}\), falls below \(\gamma_{\text{out}}\), which leads to a HOF. This is more visible for \(N_{\text{B}}=1\) since the UE does not have the option of selecting another prepared beam. Increasing \(N_{\text{B}}\) from \(1\) to \(4\) reduces the number of HOFs \(N_{\text{HOF}}\) to almost one-third because it increases the chance of the strongest beam being selected by the UE.
For increasing values of the access threshold \(\xi_{\text{access}}\), the RACH beam selection procedure prioritizes the L1 RSRP beam measurements \(P_{c,b}^{\text{L1}}(m)\) as given in (12), and the UE becomes less persistent in selecting one of the prepared beams and instead simply selects the beam with the strongest L1 RSRP beam measurement. As a consequence, beams with higher \(P_{c,b}^{\text{L1}}(m)\) are selected to be accessed, which yields a higher
Fig. 3: Madrid Grid layout is used for simulation scenario as described in METIS 2 project [11]. The scenario consists of buildings (grey), streets (black) with 200 users, open square (blue) with 40 users and pedestrian area (green) with 80 users.
SINR \(\gamma_{{\rm{c}}_{0},b}(m)\) and consequently fewer HOFs. This can be seen in Fig. 4(b). It can also be seen in Fig. 4(b) that the same HOF performance is observed for both the 3GPP and the proposed RE-RACH procedures for the respective numbers of prepared beams \(N_{\text{B}}\). This is due to the fact that the RE-RACH procedure only focuses on increasing CFRA resource usage, whereas the HOF performance depends on the selected access beam \(b_{0}\), which enters the target cell SINR of the HOF model in Section IV.
#### V-B2 BELL Performance with CHO
It can be seen in Fig. 4(b) that the total number of normalized HOFs of both the 3GPP and RE-RACH procedures without the BELL approach is approximately \(0.3\)/UE/min for \(\xi_{\text{access}}=-\infty\) and \(N_{\text{B}}=1\) (shown in blue). Applying the BELL approach to both RACH procedures prevents around \(60\%\) of the HOFs and reduces them to around \(0.13\)/UE/min (shown in green). For beam access threshold values \(\xi_{\text{access}}>-\infty\), the same trend is seen for both \(N_{\text{B}}=1\) and \(N_{\text{B}}=4\), and the results converge as \(\xi_{\text{access}}\) approaches \(\infty\). This shows that the BELL approach not only improves the HOF performance but also requires fewer resources to be reserved, since essentially the same performance can be achieved by preparing fewer beams \(N_{\text{B}}\) at higher \(\xi_{\text{access}}\) values. Furthermore, it is seen that for \(N_{\text{B}}=4\) the HOF performance of the BELL approach (shown in cyan) remains fairly constant for all \(\xi_{\text{access}}\) values and matches the performance seen at \(\xi_{\text{access}}=\infty\), where the strongest beam is always selected as the access beam, regardless of which beam was prepared. Hence, the efficacy of the scheme can be seen here. It is to be noted that the BELL approach does not address the other HOF classes defined in Fig. 2, such as coverage holes, early execution, and wrong cell preparation, but only focuses on HOFs caused by wrong beam preparation. This is the cause of the residual failures seen at \(\xi_{\text{access}}=\infty\) for the BELL approach.
It is also seen in Fig. 4(a) that the ratio \(R_{\text{CBRA}}\) is slightly reduced when the BELL approach is employed. This is because the BELL approach designates the prepared beam based on a rewarding algorithm that leverages the beams with higher L1 RSRP beam measurements at the time of access \(m>m_{1}\). Therefore, the chance of preparing a beam is indirectly increased, where the prepared beams are more likely to be CFRA on account of their higher RSRP, as given in (11).
Lastly, it can be concluded that the BELL approach with \(N_{\text{B}}=4\) provides an optimum operation point at \(\xi_{\text{access}}=-\infty\) for CHO. This is because it gives full CFRA resource utilization (\(R_{\text{CBRA}}=0\%\)) at the minimum achievable HOF level, where the HOFs are comparable to the minimum convergence value seen at \(\xi_{\text{access}}=\infty\) for all six cases under consideration.
#### V-B3 RACH Performance Comparison between BHO and CHO
It can be observed in Fig. 5 that no HOFs (solid lines) occur in BHO for any number \(N_{\text{B}}\) of prepared beams or beam access threshold \(\xi_{\text{access}}\). As explained in Section III-B, this is because the time \(T_{\text{p}}\) that elapses between handover preparation and execution in BHO is shorter than that of CHO (\(T_{\text{p}}+T_{\text{f}}\)), and during this time the measurements of the prepared beams do not undergo large changes. Consequently, the UEs access a beam \(b_{0}\) that yields a sufficient SINR \(\gamma_{c_{\text{T}},b_{0}}(m)\) at the target cell \(c_{\text{T}}\). The BELL approach cannot improve the HOF performance of BHO since it focuses on reducing HOFs, which are not observed here to begin with.
It can also be seen in Fig. 5 that the CBRA ratio (shown as dashed lines) of the 3GPP procedure follows, for increasing \(\xi_{\text{access}}\) values, a trend similar to that seen for CHO in Fig. 4. This is again due to the fact that the 3GPP RACH procedure does not
Fig. 4: The mobility performance of the 3GPP and RE-RACH procedures for CHO, shown as a function of the beam access threshold \(\xi_{\text{access}}\). The mobility performance is shown in terms of (a) the ratio \(R_{\text{CBRA}}\) and (b) HOFs for different numbers of prepared beams for random access \(N_{\text{B}}\). The performance of the BELL approach is also shown for both the RACH procedures.
Fig. 5: The mobility performance of the 3GPP and RE-RACH procedures for BHO, shown as a function of the beam access threshold \(\xi_{\text{access}}\). The mobility performance is shown in terms of HOFs (solid line) and ratio \(R_{\text{CBRA}}\) (dashed line), for different numbers of prepared beams for random access \(N_{\text{B}}\).
consider the prepared beams in case the L1 measurements are below the access threshold \(\xi_{\text{access}}\). Fig. 5 also shows that the CBRA ratio of the proposed RE-RACH procedure is close to 0% and only slightly increases for higher \(\xi_{\text{access}}\) because the measurements of the beams do not change much in the time interval between preparation and execution phases, which is much shorter than that of the CHO case.
#### V-B4 Total Number of Mobility Failures
Fig. 6 shows a comparison between BHO and CHO for the total number of failures, taken as the sum of \(N_{\text{HOF}}\) and \(N_{\text{RLF}}\). Since it has been shown in Figs. 4 and 5 that the HOFs are independent of the RACH procedure, the results in Fig. 6 do not differentiate between the RACH procedures. Furthermore, the total number of failures for BHO is the same for \(N_{\text{B}}=1\), \(N_{\text{B}}=4\), and the BELL approach, and therefore no differentiation has been made.
Although Fig. 5 shows that the mobility performance of BHO in terms of HOFs is better than that of CHO seen in Fig. 4(b), Fig. 6 illustrates that the overall mobility failure performance is improved by the conditional execution mechanism introduced by CHO. When BHO (in red) is compared with CHO BELL, \(N_{\text{B}}=4\) (in cyan) at \(\xi_{\text{access}}=\infty\), a relative reduction of \(30\%\) is seen (from \(2.06\) to \(1.45\)/UE/min). Furthermore, one can also state that the mobility failures observed in this scenario are dominated by RLFs, which are reduced by CHO. The benefits of a larger number of prepared beams for RACH \(N_{\text{B}}\) and of the BELL approach on the overall performance of CHO can also be seen.
## VI Conclusion
In this article, conditional handover of _3GPP Release 16_ is analyzed for a 5G beamformed network. Baseline and conditional handover procedures have been reviewed along with L1 and L3 UE measurements that are imperative for mobility. In addition, the 3GPP random access procedure is revisited and a new random access procedure is proposed that aims to increase contention-free random access (CFRA) and in turn reduce the signaling overhead and latency during handover. Besides, a decision tree-based supervised learning method is proposed that reduces the handover failures (HOFs) caused by the beam preparation phase of the RACH procedure. The results show that the optimum operation point is achieved with the proposed learning algorithm. Furthermore, the mobility performance of conditional handover is compared with baseline handover. Simulation results have shown that the number of fall-backs to CBRA is reduced significantly when the proposed random access procedure is used. Moreover, the results have shown that the baseline handover procedure causes fewer handover failures than conditional handover. However, the total number of failures for conditional handover is less than that of baseline handover due to the decoupling of handover preparation and execution phases. Based on these findings, future studies may be conducted to investigate the effect of the proposed schemes with enhanced conditional handover techniques [16].
|
2305.19992 | A Nested Matrix-Tensor Model for Noisy Multi-view Clustering | In this paper, we propose a nested matrix-tensor model which extends the
spiked rank-one tensor model of order three. This model is particularly
motivated by a multi-view clustering problem in which multiple noisy
observations of each data point are acquired, with potentially non-uniform
variances along the views. In this case, data can be naturally represented by
an order-three tensor where the views are stacked. Given such a tensor, we
consider the estimation of the hidden clusters via performing a best rank-one
tensor approximation. In order to study the theoretical performance of this
approach, we characterize the behavior of this best rank-one approximation in
terms of the alignments of the obtained component vectors with the hidden model
parameter vectors, in the large-dimensional regime. In particular, we show that
our theoretical results allow us to anticipate the exact accuracy of the
proposed clustering approach. Furthermore, numerical experiments indicate that
leveraging our tensor-based approach yields better accuracy compared to a naive
unfolding-based algorithm which ignores the underlying low-rank tensor
structure. Our analysis unveils unexpected and non-trivial phase transition
phenomena depending on the model parameters, ``interpolating'' between the
typical behavior observed for the spiked matrix and tensor models. | Mohamed El Amine Seddik, Mastane Achab, Henrique Goulart, Merouane Debbah | 2023-05-31T16:13:46Z | http://arxiv.org/abs/2305.19992v1 | # A Nested Matrix-Tensor Model
###### Abstract
In this paper, we propose a nested matrix-tensor model which extends the spiked rank-one tensor model of order three. This model is particularly motivated by a multi-view clustering problem in which multiple noisy observations of each data point are acquired, with potentially non-uniform variances along the views. In this case, data can be naturally represented by an order-three tensor where the views are stacked. Given such a tensor, we consider the estimation of the hidden clusters via performing a best rank-one tensor approximation. In order to study the theoretical performance of this approach, we characterize the behavior of this best rank-one approximation in terms of the alignments of the obtained component vectors with the hidden model parameter vectors, in the large-dimensional regime. In particular, we show that our theoretical results allow us to anticipate the exact accuracy of the proposed clustering approach. Furthermore, numerical experiments indicate that leveraging our tensor-based approach yields better accuracy compared to a naive unfolding-based algorithm which ignores the underlying low-rank tensor structure. Our analysis unveils unexpected and non-trivial phase transition phenomena depending on the model parameters, "interpolating" between the typical behavior observed for the spiked matrix and tensor models.
## 1 Introduction
Tensor methods have received growing attention in recent years, especially from a statistical perspective, following the introduction of a statistical model for tensor PCA by Richard and Montanari (2014). In machine learning, these methods are particularly attractive for addressing several unsupervised learning tasks which can be formulated as the extraction of some _low-rank structure_ from a (potentially high-dimensional) tensor containing observations or functions thereof (such as high-order moments). Among the many existing examples, we can mention learning latent variable models such as Dirichlet allocation, topic models, multi-view models and Gaussian mixtures (Anandkumar et al., 2014, 2015; Ge et al., 2015; Hsu et al., 2012; Hsu and Kakade, 2013; Janzamin et al., 2019; Khouja et al., 2022; Bakshi et al., 2022; Rahmani et al., 2020); learning probability densities and non-Gaussian mixtures (Kargas and Sidiropoulos, 2019; Singhal et al., 2023; Oseledets and Kharyuk, 2021); detecting communities from interaction data of (possibly multi-view or time-evolving) networks (Anandkumar et al., 2013; Huang et al., 2015; Gujral et al., 2020; Fernandes et al., 2021); and high-order co-clustering (Papalexakis et al., 2012).
Despite its simplicity, the statistical model of Richard and Montanari (2014), sometimes called a _rank-one spiked tensor model_, has raised many theoretical challenges. A significant amount of work has been done to understand the fundamental questions related to this model (Perry et al., 2020; Jagannath et al., 2020; Goulart et al., 2022; Auddy and Yuan, 2022; Ben Arous et al., 2021; Seddik et al., 2021), in particular involving statistical thresholds and the asymptotic performance of estimators in the large-dimensional limit. However, the findings of these works have a somewhat limited practical impact due to the rank-one nature of that model, motivating the development and study of more sophisticated statistical models for the analysis of tensor methods. In particular,
phase transitions associated with multi-spiked tensor models of rank \(r>1\) have been considered by Chen et al. (2021); Lesieur et al. (2017).
In this work, we take another path towards bridging the gap between theory and practical applications, by proposing a statistical _nested matrix-tensor model_ that generalizes the (third-order) rank-one spiked tensor model and is motivated by a problem that we call _noisy multi-view clustering_, which can be formulated as follows. Let \(\mathbf{M}=\boldsymbol{\mu y}^{\top}+\mathbf{Z}\in\mathbb{R}^{p\times n}\) be a data matrix containing \(n\) observations of \(p\)-dimensional vectors centered around \(\pm\boldsymbol{\mu}\) (i.e., data are made of two classes), with \(\boldsymbol{y}\in\{-1,1\}^{n}\) holding their corresponding labels and \(\mathbf{Z}\) a Gaussian matrix modeling data dispersion. Now, suppose that we are given \(m\) different noisy observations of \(\mathbf{M}\) with potentially different signal-to-noise ratios (SNR), denoted by:
\[\tilde{\mathbf{X}}_{k}=\boldsymbol{\mu y}^{\top}+\mathbf{Z}+\tilde{\mathbf{W}} _{k},\quad k=1,\ldots,m,\]
where \(\tilde{\mathbf{W}}_{k}\) is a \(p\times n\) matrix comprising independent Gaussian entries drawn from \(\mathcal{N}(0,\sigma_{k}^{2})\). Assuming that the variances \(\sigma_{k}^{2}\) are known (or can be accurately estimated), one can build a tensor \(\mathbf{X}\in\mathbb{R}^{p\times n\times m}\) containing normalized slices \(\mathbf{X}_{k}=h_{k}\tilde{\mathbf{X}}_{k}\), with \(h_{k}:=1/\sigma_{k}\), so that:
\[\mathbf{X}=\big{(}\boldsymbol{\mu y}^{\top}+\mathbf{Z}\big{)}\otimes \boldsymbol{h}+\mathbf{W}\in\mathbb{R}^{p\times n\times m},\] (Nested Matrix-Tensor Model)
where the tensor \(\mathbf{W}\) has independent standard Gaussian entries and \(\boldsymbol{h}=(h_{1},\ldots,h_{m})^{\top}\in\mathbb{R}^{m}\).
The above model can be seen as a more general version of the rank-one spiked model that incorporates a nested structure allowing for more flexible modeling (Specifically, when the variances of the elements in \(\mathbf{Z}\) tend to zero, one recovers the rank-one spiked model). The common low-rank structure in the slices \(\mathbf{X}_{k}\), which can be interpreted as different views of the data, encodes the latent clustering structure that can then be retrieved by using tensor methods applied on \(\mathbf{X}\).
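To make the construction explicit, the following NumPy sketch draws the \(m\) noisy views, rescales each slice by \(h_{k}=1/\sigma_{k}\) and stacks them into the tensor \(\mathbf{X}\). The dimensions, noise levels and random seed are placeholder values of ours, the per-view variances are assumed known as in the text, and unit-variance noise is used for readability (the asymptotic analysis later normalizes the variances by the dimensions).

```
import numpy as np

rng = np.random.default_rng(0)
p, n, m = 50, 200, 10                                 # illustrative dimensions
mu = rng.normal(size=p); mu /= np.linalg.norm(mu)     # class-mean direction
y = rng.choice([-1.0, 1.0], size=n)                   # hidden labels
sigma = rng.uniform(0.5, 2.0, size=m)                 # per-view noise levels (assumed known)

M = np.outer(mu, y) + rng.normal(size=(p, n))         # shared matrix mu y^T + Z
X = np.empty((p, n, m))
for k in range(m):
    X_tilde_k = M + sigma[k] * rng.normal(size=(p, n))  # k-th noisy view of M
    X[:, :, k] = X_tilde_k / sigma[k]                    # normalized slice h_k * X_tilde_k
```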
In particular, our results precisely quantify the asymptotic performance of a simple estimator of the vectors \(\boldsymbol{\mu},\boldsymbol{y}\), and \(\boldsymbol{h}\) based on rank-one approximation of \(\mathbf{X}\), in the large-dimensional limit where \(p,n,m\to\infty\) at the same rate. This is achieved by resorting to the recently developed approach of Goulart et al. (2022) and Seddik et al. (2021), which allows one to use tools from random matrix theory by inspecting _contractions_ of the random tensor model in question. Numerical results are given to illustrate the usefulness of such predictions even for moderately large values of \(p\) and \(n\), and also to show the superiority of such a tensor-based approach in comparison with a naive spectral method that does not take the tensor structure of the model into account. Quite interestingly, our results show that the performance of such a rank-one spectral estimator exhibits different phase transition behaviors depending on two parameters governing the SNR and the data dispersion, effectively "interpolating" between phase transition curves that are characteristic of matrix and tensor models.
**Key contributions:** Our main contributions can be summarized as follows:
1. We introduce a nested matrix-tensor model that generalizes the (third-order) spiked tensor model, and we provide a random matrix analysis of its best rank-one tensor approximation in the high-dimensional regime.
2. We provide an application of this model to the problem of clustering multi-view data and show that the developed theory allows the exact characterization of the asymptotic performance of a multi-view clustering approach. Further simulations suggest the superiority of the tensor-based clustering approach compared to a naive unfolding method that ignores the hidden rank-one structure.
**Related work on tensor multi-view methods:** In multi-view machine learning (Xu et al., 2013; Zhao et al., 2017; Sun, 2013), one has to deal with data coming from different sources or exhibiting various statistical or physical natures (e.g. documents composed of both text and images). The main challenge consists in jointly leveraging both the agreement and the complementarity of the different views (Blum and Mitchell, 1998; Dasgupta et al., 2001; Nigam and Ghani, 2000), e.g. via learning a shared latent subspace (White et al., 2012) for diverse tasks such as regression (Kakade and Foster, 2007) or clustering (Chaudhuri et al., 2009; Gao et al., 2015; Cao et al., 2015). In this context, multi-view clustering algorithms using a low-rank tensor representation of the multi-view data have already been proposed: among others, Xie et al. (2018); Wu et al. (2020) relied on tensor-SVD (Kilmer et al., 2013) while Liu et al. (2013) favored a Tucker-type tensor decomposition.
However, the usual sense employed for the term "multi-view clustering" is not exactly the same that we adopt here, since in our problem all views essentially hold noisy measurements of the same quantities. Hence, our work is perhaps closer in spirit to certain tensor-based clustering models comprising an additional diversity (e.g., temporal), such as those of Papalexakis et al. (2012) or those reviewed in Fernandes et al. (2021). Yet, it differs from this literature in that our additional diversity is quite specific (namely, it comes from the availability of multiple measurements for each individual in the sample) and, furthermore, we derive the exact asymptotic performance of our proposed tensor-based method in the large-dimensional limit.
## 2 Notation and Background
The set \(\{1,\ldots,n\}\) is denoted by \([n]\). The unit sphere in \(\mathbb{R}^{p}\) is denoted by \(\mathbb{S}^{p-1}\). The Dirac measure at some real value \(x\) is denoted by \(\delta_{x}\). The support of a measure \(\nu\) is denoted by \(\operatorname{Supp}(\nu)\). The inner product between two vectors \(\mathbf{u}\), \(\mathbf{v}\) is denoted by \(\langle\mathbf{u},\mathbf{v}\rangle=\sum_{i}u_{i}v_{i}\). The imaginary part of a complex number \(z\) is denoted by \(\Im[z]\). The set of eigenvalues of a matrix \(\mathbf{M}\) is denoted by \(\operatorname{Sp}(\mathbf{M})\). Almost sure convergence of a sequence of random variables is denoted by \(\xrightarrow{\text{a.s.}}\). The arrow \(\xrightarrow{\mathcal{D}}\) denotes the convergence in distribution.
### Tensor Notations and Contractions
In this section, we introduce the main tensor notations and definitions used throughout the paper, which we recommend following carefully for a clear understanding of its technical contents.
**Three-order tensors:** The set of third-order tensors of size \(n_{1}\times n_{2}\times n_{3}\) is denoted \(\mathbb{R}^{n_{1}\times n_{2}\times n_{3}}\). The scalar \(\mathcal{T}_{ijk}\) or \([\mathbf{\mathsf{T}}]_{ijk}\) denotes the \((i,j,k)\) entry of a tensor \(\mathbf{\mathsf{T}}\in\mathbb{R}^{n_{1}\times n_{2}\times n_{3}}\).
**Rank-one tensors:** A tensor \(\mathbf{\mathsf{T}}\) is said to be of rank-one if it can be represented as the outer product of three real-valued vectors \((\mathbf{x},\mathbf{y},\mathbf{z})\in\mathbb{R}^{n_{1}}\times\mathbb{R}^{n_{2}}\times \mathbb{R}^{n_{3}}\). In this case, we write \(\mathbf{\mathsf{T}}=\mathbf{x}\otimes\mathbf{y}\otimes\mathbf{z}\), where the outer product is defined such that \([\mathbf{x}\otimes\mathbf{y}\otimes\mathbf{z}]_{ijk}=x_{i}y_{j}z_{k}\).
**Tensor contractions:** The first mode contraction of a tensor \(\mathbf{\mathsf{T}}\) with a vector \(\mathbf{x}\) yields a matrix denoted \(\mathbf{\mathsf{T}}(\mathbf{x},\cdot,\cdot)\) with entries \([\mathbf{\mathsf{T}}(\mathbf{x},\cdot,\cdot)]_{jk}=\sum_{i=1}^{n_{1}}x_{i}\,\mathcal{T }_{ijk}\). Similarly, \(\mathbf{\mathsf{T}}(\cdot,\mathbf{y},\cdot)\) and \(\mathbf{\mathsf{T}}(\cdot,\cdot,\mathbf{z})\) denote the second and third mode contractions of \(\mathbf{\mathsf{T}}\) with vectors \(\mathbf{y}\) and \(\mathbf{z}\) respectively. We will sometimes denote these contractions by \(\mathbf{\mathsf{T}}(\mathbf{x})\), \(\mathbf{\mathsf{T}}(\mathbf{y})\), and \(\mathbf{\mathsf{T}}(\mathbf{z})\) if there is no ambiguity. The contraction of \(\mathbf{\mathsf{T}}\) with two vectors \(\mathbf{x},\mathbf{y}\) is a vector denoted \(\mathbf{\mathsf{T}}(\mathbf{x},\mathbf{y},\cdot)\) with entries \([\mathbf{\mathsf{T}}(\mathbf{x},\mathbf{y},\cdot)]_{k}=\sum_{ij}x_{i}y_{j}\,\mathcal{T}_{ijk}\). Similarly, the contraction of \(\mathbf{\mathsf{T}}\) with three vectors \(\mathbf{x},\mathbf{y},\mathbf{z}\) is a scalar denoted \(\mathbf{\mathsf{T}}(\mathbf{x},\mathbf{y},\mathbf{z})=\sum_{ijk}x_{i}y_{j}z_{k}\,\mathcal{T}_{ijk}\).
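These contractions map directly onto `numpy.einsum`; the short sanity check below (with arbitrary dimensions of our choosing) reproduces the definitions above.

```
import numpy as np

rng = np.random.default_rng(1)
n1, n2, n3 = 4, 5, 6
T = rng.normal(size=(n1, n2, n3))
x, y, z = rng.normal(size=n1), rng.normal(size=n2), rng.normal(size=n3)

T_x = np.einsum('ijk,i->jk', T, x)              # first-mode contraction T(x, ., .)
T_xy = np.einsum('ijk,i,j->k', T, x, y)         # T(x, y, .)
T_xyz = np.einsum('ijk,i,j,k->', T, x, y, z)    # scalar T(x, y, z)
assert np.isclose(T_xyz, T_xy @ z)              # contracting the remaining mode gives the scalar
```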
**Tensor norms:** The Frobenius norm of a tensor \(\mathbf{\mathsf{T}}\) is denoted \(\|\mathbf{\mathsf{T}}\|_{F}\) with \(\|\mathbf{\mathsf{T}}\|_{F}^{2}=\sum_{ijk}\mathcal{T}_{ijk}^{2}\). The spectral norm of \(\mathbf{\mathsf{T}}\) is \(\|\mathbf{\mathsf{T}}\|=\sup_{\|\mathbf{u}\|=\|\mathbf{v}\|=\|\mathbf{w}\|=1}|\mathbf{\mathsf{T}}(\mathbf{u},\mathbf{v},\mathbf{w})|\).
**Best rank-one approximation:** A best rank-one approximation of \(\mathbf{\mathsf{T}}\) corresponds to a rank-one tensor of the form \(\lambda\mathbf{u}\otimes\mathbf{v}\otimes\mathbf{w}\), where \(\lambda>0\) and \(\mathbf{u},\mathbf{v},\mathbf{w}\) are unitary vectors, that minimizes the square loss \(\|\mathbf{\mathsf{T}}-\lambda\mathbf{u}\otimes\mathbf{v}\otimes\mathbf{w}\|_{F}^{2}\). The latter generalizes to tensors the concept of singular value and vectors (Lim, 2005) and the scalar \(\lambda\) coincides with the spectral norm of \(\mathbf{\mathsf{T}}\). Such a best rank-one approximation can be computed via _tensor power iteration_ which consists of iterating:
\[\mathbf{u}\leftarrow\mathbf{\mathsf{T}}(\cdot,\mathbf{v},\mathbf{w})/\|\mathbf{\mathsf{T}}(\cdot,\bm {v},\mathbf{w})\|,\quad\mathbf{v}\leftarrow\mathbf{\mathsf{T}}(\mathbf{u},\cdot,\mathbf{w})/\|\mathbf{ \mathsf{T}}(\mathbf{u},\cdot,\mathbf{w})\|,\quad\mathbf{w}\leftarrow\mathbf{\mathsf{T}}(\mathbf{u}, \mathbf{v},\cdot)/\|\mathbf{\mathsf{T}}(\mathbf{u},\mathbf{v},\cdot)\|,\]
starting from some appropriate initialization (Kofidis & Regalia, 2002; Anandkumar et al., 2014).
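A minimal NumPy implementation of this power iteration is given below; the random initialization, the fixed number of iterations and the single start are simplifications of ours (in practice several restarts or an HOSVD-type initialization would be used).

```
import numpy as np

def rank_one_approx(T, n_iter=200, seed=0):
    """Best rank-one approximation lambda * u (x) v (x) w of T via power iteration."""
    rng = np.random.default_rng(seed)
    n1, n2, n3 = T.shape
    u, v, w = rng.normal(size=n1), rng.normal(size=n2), rng.normal(size=n3)
    u, v, w = u / np.linalg.norm(u), v / np.linalg.norm(v), w / np.linalg.norm(w)
    for _ in range(n_iter):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    lam = np.einsum('ijk,i,j,k->', T, u, v, w)  # singular value = T(u, v, w)
    return lam, u, v, w
```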
### Random Matrix Theory
In this section, we recall some necessary tools from random matrix theory (RMT) which are at the core of our main results. Specifically, we will consider the _resolvent_ formalism of Hachem et al. (2007) which allows one to characterize the spectral behavior of large symmetric random matrices and the estimation of low-dimensional functionals of such matrices. Given a symmetric matrix \(\mathbf{S}\in\mathbb{R}^{n\times n}\), the resolvent of \(\mathbf{S}\) is defined as \(\mathbf{R}(\xi)=\left(\mathbf{S}-\xi\mathbf{I}_{n}\right)^{-1}\) for some \(\xi\in\mathbb{C}\setminus\operatorname{Sp}(\mathbf{S})\).
In essence, RMT focuses on describing the distribution of eigenvalues of large random matrices. Typically, under certain technical assumptions on some random matrix \(\mathbf{S}\in\mathbb{R}^{n\times n}\) with eigenvalues \(\lambda_{1},\ldots,\lambda_{n}\), the _empirical spectral measure_ of \(\mathbf{S}\), defined as \(\hat{\nu}_{n}=\frac{1}{n}\sum_{i=1}^{n}\delta_{\lambda_{i}}\), converges in
the weak sense (Van Der Vaart and Wellner, 1996) to some deterministic probability measure \(\nu\) as \(n\to\infty\) and RMT aims at describing such a \(\nu\). To this end, one widely considered (so-called analytical) approach relies on the _Stieltjes transform_(Widder, 1938). Given a probability measure \(\nu\), the Stieltjes transform of \(\nu\) is defined as \(g_{\nu}(\xi)=\int\frac{d\nu(\lambda)}{\lambda-\xi}\) with \(\xi\in\mathbb{C}\setminus\mathrm{Supp}(\nu)\), and the inverse formula allows one to describe the density of \(\nu\) as \(\nu(dx)=\frac{1}{\pi}\lim_{\varepsilon\to 0}\Im[g_{\nu}(x+i\varepsilon)]\) (assuming it admits one).
The Stieltjes transform of the empirical spectral measure, \(\hat{\nu}_{n}\), is closely related to the resolvent of \(\mathbf{S}\) through the normalized trace operator. In fact, \(g_{\hat{\nu}_{n}}(\xi)=\frac{1}{n}\operatorname{Tr}\mathbf{R}(\xi)\) and the point-wise _almost sure_ convergence of \(g_{\hat{\nu}_{n}}(\xi)\) to some deterministic Stieltjes transform \(g_{\nu}(\xi)\) (where \(\nu\) is defined on \(\mathbb{R}\)) on the upper-half complex plane is equivalent to the weak convergence of \(\hat{\nu}_{n}\) to \(\nu\)(Tao, 2012). Our analysis relies on estimating quantities involving \(\frac{1}{n}\operatorname{Tr}\mathbf{R}(\xi)\), making the use of the resolvent approach a natural choice (see Appendix A for the derivation of our results).
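As an illustration of this machinery, the sketch below computes \(g_{\hat{\nu}_{n}}(\xi)=\frac{1}{n}\operatorname{Tr}\mathbf{R}(\xi)\) for a Wigner-type matrix through its eigenvalues and recovers the limiting density via the inversion formula; the matrix size and the smoothing parameter \(\varepsilon\) are arbitrary choices of ours.

```
import numpy as np

n = 2000
rng = np.random.default_rng(2)
A = rng.normal(size=(n, n))
S = (A + A.T) / np.sqrt(2 * n)            # Wigner matrix; limiting spectrum on [-2, 2]
eigs = np.linalg.eigvalsh(S)

def g_emp(xi):
    """Empirical Stieltjes transform g_n(xi) = (1/n) Tr (S - xi I)^{-1}."""
    return np.mean(1.0 / (eigs - xi))

eps = 0.05
xs = np.linspace(-2.5, 2.5, 11)
est = np.array([g_emp(x + 1j * eps).imag / np.pi for x in xs])       # inversion formula
semicircle = np.sqrt(np.clip(4 - xs ** 2, 0, None)) / (2 * np.pi)    # limiting density
print(np.column_stack([xs, est, semicircle]))  # agreement up to finite-n and smoothing error
```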
## 3 Main Results
### The Nested Matrix-Tensor Model
We start by defining our considered nested matrix-tensor model in a general form since it might have applications beyond the multi-view data model in Eq. (Nested Matrix-Tensor Model). Let \(n_{1},n_{2},n_{3}\in\mathbb{N}_{+}\) and further denote \(n_{M}=n_{1}+n_{2}\) and \(n_{T}=n_{1}+n_{2}+n_{3}\). We consider the following statistical model:
\[\mathbf{T}=\beta_{T}\mathbf{M}\otimes\mathbf{z}+\frac{1}{\sqrt{n_{T}}}\mathbf{W} \in\mathbb{R}^{n_{1}\times n_{2}\times n_{3}},\quad\mathbf{M}=\beta_{M}\mathbf{x} \otimes\mathbf{y}+\frac{1}{\sqrt{n_{M}}}\mathbf{Z}\,\in\mathbb{R}^{n_{1}\times n _{2}}, \tag{1}\]
where we assume that \(\|\mathbf{x}\|=\|\mathbf{y}\|=\|\mathbf{z}\|=1\) and that the entries of \(\mathbf{W}\) and \(\mathbf{Z}\) are independent Gaussian random variables, with \(\mathbf{W}_{ijk}\sim\mathcal{N}(0,\sigma_{T}^{2})\) and \(\mathcal{Z}_{ij}\sim\mathcal{N}(0,\sigma_{M}^{2})\). For the sake of simplicity, we consider the unit variance case \(\sigma_{T}=\sigma_{M}=1\) in the remainder of the paper while we defer the general variance case to Appendix A.
**Remark 1** (Spectral normalization): _Note that the normalization of \(\mathbf{W}\) by \(\sqrt{n_{T}}\) (resp. \(\mathbf{Z}\) by \(\sqrt{n_{M}}\)) in Eq. (1) ensures that the spectral norm of \(\mathbf{T}\) is of order \(O(1)\) when the dimensions \(n_{i}\) grow to infinity. This follows from a standard concentration result (Seddik et al., 2021, Lemma 4)._
**Best rank-one tensor estimator:** We consider the analysis of the best rank-one approximation of \(\mathbf{T}\) which corresponds to the following problem (Lim, 2005):
\[\operatorname*{arg\,min}_{\lambda>0,\,\|\mathbf{u}\|=\|\mathbf{v}\|=\|\mathbf{w}\|=1}\|\mathbf{T}-\lambda\mathbf{u}\otimes\mathbf{v}\otimes\mathbf{w}\|_{\mathrm{F}}^{2}\quad\Leftrightarrow\quad\operatorname*{arg\,max}_{\|\mathbf{u}\|=\|\mathbf{v}\|=\|\mathbf{w}\|=1}\mathbf{T}(\mathbf{u},\mathbf{v},\mathbf{w})\;. \tag{2}\]
In particular, the solution for the scalar \(\lambda\) in the left-hand problem coincides with the spectral norm of \(\mathbf{T}\), i.e., \(\lambda=\|\mathbf{T}\|\). Given a critical point \((\lambda,\mathbf{u},\mathbf{v},\mathbf{w})\) of that problem, it holds that (Lim, 2005):
\[\mathbf{T}(\cdot,\mathbf{v},\mathbf{w})=\lambda\mathbf{u},\quad\mathbf{T}(\mathbf{u},\cdot, \mathbf{w})=\lambda\mathbf{v},\quad\mathbf{T}(\mathbf{u},\mathbf{v},\cdot)=\lambda\mathbf{w}, \quad\lambda=\mathbf{T}(\mathbf{u},\mathbf{v},\mathbf{w}). \tag{3}\]
In essence, for sufficiently large \(\beta_{M}\) and \(\beta_{T}\), the triplet \((\mathbf{u},\mathbf{v},\mathbf{w})\) will start to align with the signal components \((\mathbf{x},\mathbf{y},\mathbf{z})\) and our main goal is to quantify these alignments (i.e., the inner products \(\langle\mathbf{u},\mathbf{x}\rangle,\langle\mathbf{v},\mathbf{y}\rangle\) and \(\langle\mathbf{w},\mathbf{z}\rangle\)) in the large dimensional regime when \(n_{i}\to\infty\). To this end, we need a typical set of assumptions that we formulate as follows (see (Goulart et al., 2022; Seddik et al., 2021) for similar assumptions in the case of spiked random tensors).
**Assumption 1**: _There exists a sequence of critical points \((\lambda,\mathbf{u},\mathbf{v},\mathbf{w})\) satisfying Eq. (3) such that, when \(n_{i}\to\infty\) with \(\frac{n_{1}}{n_{T}}\to c_{1}>0,\frac{n_{2}}{n_{T}}\to c_{2}>0,\frac{n_{3}}{n_{T }}\to c_{3}>0\), we have the following:_
\[\lambda\xrightarrow{\text{a.s.}}\bar{\lambda},\quad|\langle\mathbf{u},\mathbf{x} \rangle|\xrightarrow{\text{a.s.}}\alpha_{1},\quad|\langle\mathbf{v},\mathbf{y}\rangle| \xrightarrow{\text{a.s.}}\alpha_{2},\quad|\langle\mathbf{w},\mathbf{z}\rangle| \xrightarrow{\text{a.s.}}\alpha_{3}.\]
In the remainder of the paper, we refer to the quantities \((\lambda,\langle\mathbf{u},\mathbf{x}\rangle,\langle\mathbf{v},\mathbf{y}\rangle,\langle \mathbf{w},\mathbf{z}\rangle)\) as _summary statistics_ as per the formalism introduced by Ben Arous et al. (2022) since the asymptotic limits of these scalar quantities fully describe the asymptotic behavior of the considered best rank-one tensor estimator applied to \(\mathbf{T}\).
**Remark 2** (On Assumption 1): _The almost sure convergence of the summary statistics has been demonstrated in (Jagannath et al., 2020) in the case of the spiked tensor model. We believe similar arguments can be extended to our proposed nested matrix-tensor model to validate Assumption 1._
### Associated Random Matrix
As discussed in the previous section, our primary goal is to compute the asymptotic summary statistics \((\bar{\lambda},\alpha_{1},\alpha_{2},\alpha_{3})\) in terms of the model's parameters, namely, the signal-to-noise ratios \((\beta_{M},\beta_{T})\) and the dimension ratios \((c_{1},c_{2},c_{3})\). To this end, we follow the approach developed by Seddik et al. (2021), who studied the _asymmetric_ spiked tensor model, and where it has been shown that the estimation of \((\bar{\lambda},\alpha_{1},\alpha_{2},\alpha_{3})\) boils down to the analysis of the _block-wise contraction random matrix_\(\mathbf{\Phi}\) in Eq.(4), which can be done by deploying tools from random matrix theory.
Given the model in Eq. (1), it can be easily noticed that \(\mathbf{\Phi}\) decomposes as a sum of two matrices \(\mathbf{H}+\mathbf{L}\) where \(\mathbf{L}\) is a low-rank matrix related to the signal part in the nested matrix-tensor model (the expression of \(\mathbf{L}\) is provided in Eq. (17) in Appendix A), and \(\mathbf{H}\) corresponds to the noise part of the model, being given by:
\[\mathbf{H}=\begin{bmatrix}\mathbf{0}_{n_{1}\times n_{1}}&\frac{\langle\mathbf{w},\mathbf{z}\rangle\beta_{T}}{\sqrt{n_{M}}}\mathbf{Z}+\frac{1}{\sqrt{n_{T}}}\textbf{W}(\mathbf{w})&\frac{1}{\sqrt{n_{T}}}\textbf{W}(\mathbf{v})\\ \frac{\langle\mathbf{w},\mathbf{z}\rangle\beta_{T}}{\sqrt{n_{M}}}\mathbf{Z}^{\top}+\frac{1}{\sqrt{n_{T}}}\textbf{W}(\mathbf{w})^{\top}&\mathbf{0}_{n_{2}\times n_{2}}&\frac{1}{\sqrt{n_{T}}}\textbf{W}(\mathbf{u})\\ \frac{1}{\sqrt{n_{T}}}\textbf{W}(\mathbf{v})^{\top}&\frac{1}{\sqrt{n_{T}}}\textbf{W}(\mathbf{u})^{\top}&\mathbf{0}_{n_{3}\times n_{3}}\end{bmatrix}. \tag{4}\]
**Remark 3** (On the spectrum of \(\mathbf{\Phi}\)): _In terms of spectrum, we will see subsequently that the matrices \(\mathbf{\Phi}\) and \(\mathbf{H}\) share the same "bulk" of eigenvalues while the spectrum of \(\mathbf{\Phi}\) exhibits two isolated eigenvalues at positions \(2\lambda\) and \(-\lambda\) if \(\beta_{M},\beta_{T}\) are large enough. In fact, one can quickly check that, given the identities in Eq. (3), the scalars \(2\lambda\) and \(-\lambda\) are eigenvalues of \(\mathbf{\Phi}\) with respective multiplicities \(1\) and \(2\), and respective eigenvectors \((\mathbf{u}^{\top},\mathbf{v}^{\top},\mathbf{w}^{\top})^{\top}\) for the eigenvalue \(2\lambda\) and \((\mathbf{u}^{\top},\mathbf{0}^{\top},-\mathbf{w}^{\top})^{\top},(\mathbf{0}^{\top},\mathbf{v}^{ \top},-\mathbf{w}^{\top})^{\top}\) corresponding to the eigenvalue \(-\lambda\)._
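Remark 3 can be checked numerically by assembling the block contraction matrix from the mode contractions of \(\mathbf{T}\) at a critical point. The helper below follows the block layout of Eq. (4) applied to the full tensor (signal plus noise), which is our reading of \(\mathbf{\Phi}\); the verification comment assumes the power-iteration sketch given earlier.

```
import numpy as np

def contraction_matrix(T, u, v, w):
    """Block matrix built from the mode contractions of T at (u, v, w), as in Eq. (4)."""
    n1, n2, n3 = T.shape
    T_w = np.einsum('ijk,k->ij', T, w)   # n1 x n2 block T(., ., w)
    T_v = np.einsum('ijk,j->ik', T, v)   # n1 x n3 block T(., v, .)
    T_u = np.einsum('ijk,i->jk', T, u)   # n2 x n3 block T(u, ., .)
    top = np.hstack([np.zeros((n1, n1)), T_w, T_v])
    mid = np.hstack([T_w.T, np.zeros((n2, n2)), T_u])
    bot = np.hstack([T_v.T, T_u.T, np.zeros((n3, n3))])
    return np.vstack([top, mid, bot])

# With (lam, u, v, w) obtained from the power-iteration sketch above, the spectrum of
# contraction_matrix(T, u, v, w) contains an eigenvalue close to 2*lam (eigenvector
# stacking u, v, w) and an eigenvalue close to -lam with multiplicity 2, cf. Remark 3.
```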
### Limiting Spectrum
We will find subsequently that the asymptotic summary statistics \((\bar{\lambda},\alpha_{1},\alpha_{2},\alpha_{3})\) are closely related to the limiting spectral measure of the random matrix \(\mathbf{H}\). Therefore, our first result characterizes precisely this limiting distribution using the Stieltjes transform formalism (Widder, 1938).
**Theorem 1** (Limiting spectrum): _Under Assumption 1, the empirical spectral measure of \(\mathbf{H}\) or \(\mathbf{\Phi}\) converges weakly almost surely to a deterministic distribution \(\nu\) whose Stieltjes transform is given by \(g(\xi)=\sum_{i=1}^{3}g_{i}(\xi)\) such that \(\Im[g(\xi)]>0\) for \(\Im[\xi]>0\), and where \((g_{i}(\xi))_{i\in[3]}\) satisfy the following equations:_
\[g_{1}(\xi)=\frac{c_{1}}{g_{1}(\xi)-g(\xi)-\bar{\gamma}g_{2}(\xi)-\xi},\quad g_{2}(\xi)=\frac{c_{2}}{g_{2}(\xi)-g(\xi)-\bar{\gamma}g_{1}(\xi)-\xi},\quad g_{3}(\xi)=\frac{c_{3}}{g_{3}(\xi)-g(\xi)-\xi},\]
_with \(\bar{\gamma}=\frac{\beta_{T}^{2}\alpha_{3}^{2}}{c_{1}+c_{2}}\). In particular, the density function of \(\nu\) is given by \(\nu(dx)=\frac{1}{\pi}\lim_{\varepsilon\to 0}\Im\left[g(x+i\varepsilon)\right]\)._
Theorem 1 generalizes the limiting spectral measure obtained by (Seddik et al., 2021) in the sense that the latter corresponds to the particular case when \(\bar{\gamma}=0\) (e.g. if \(\beta_{T}=0\)). Moreover, in the
Figure 1: Empirical versus limiting spectrum of \(\mathbf{\Phi}\) for \(\beta_{T}=2,\beta_{M}=3,n_{1}=130,n_{2}=80,n_{3}=140\). In addition to the “bulk” of eigenvalues, the spectrum of \(\mathbf{\Phi}\) exhibits two isolated spikes at positions \(2\lambda\) and \(-\lambda\) as discussed in Remark 3. In particular, the isolated spikes are accurately estimated by the limiting singular value \(\bar{\lambda}\) as per Theorem 2 and Algorithm 2.
specific case \(\beta_{T}=0\) and \(c_{1}=c_{2}=c_{3}=\frac{1}{3}\), the distribution \(\nu\) describes a _semi-circle law_ of compact support \([-2\sqrt{2/3},2\sqrt{2/3}]\), and the corresponding Stieltjes transform is explicitly given by \(g(\xi)=\frac{3}{4}(-\xi+\sqrt{\xi^{2}-8/3})\) with \(g_{i}(\xi)=g(\xi)/3\) for all \(i\in[3]\). We refer the reader to (Seddik et al., 2021) for more details and a full description of various particular cases. Moreover, an explicit formula for \(g(\xi)\) can be derived in the case \(c_{1}=c_{2}\) using a formal calculation tool (e.g. SymPy).
However, for arbitrary values of \(\beta_{T},\beta_{M}\) and of the dimension ratios \((c_{1},c_{2},c_{3})\), the limiting spectral measure of \(\mathbf{H}\) or \(\mathbf{\Phi}\) can be computed numerically as per Algorithm 1 which implements the equations in Theorem 1. Figure 1 shows that the empirical spectral measure of \(\mathbf{\Phi}\) is accurately predicted by the limiting measure of Theorem 1 (further examples are depicted in Figure 6 in the Appendix). We note that the computation of \(\bar{\gamma}\) (which is closely related to the alignment \(\alpha_{3}\)) is a key step in the numerical evaluation of \(g\), which we will address next by computing the asymptotic alignments \(\alpha_{i}\)'s.
```
Input: Complex number \(\xi\in\mathbb{C}\setminus\operatorname{Supp}(\nu)\), ratios \(c_{1},c_{2},c_{3}\in[0,1]\), \(\beta_{T},\beta_{M}\geq 0\) and option.
Output: Limiting Stieltjes transform \(g(\xi)\) and \(g_{i}(\xi)\) for \(i\in[3]\).
Initialize \(g_{1},g_{2},g_{3}\) and set \(g\leftarrow g_{1}+g_{2}+g_{3}\).
if option is "compute \(\bar{\gamma}\)" then
    Compute the asymptotic summary statistics \((\bar{\lambda},\alpha_{1},\alpha_{2},\alpha_{3})\) with Algo. 2 and set \(\bar{\gamma}\leftarrow\frac{\beta_{T}^{2}\alpha_{3}^{2}}{c_{1}+c_{2}}\).
end if
while "\(g\) has not converged" do
    if option is "approximate \(\bar{\gamma}\)" then
        Update \(\bar{\gamma}\leftarrow\frac{\beta_{T}^{2}}{c_{1}+c_{2}}\left(1-\frac{g_{3}^{2}}{c_{3}}\right)\).
    end if
    Update \(g_{1}\leftarrow\frac{c_{1}}{g_{1}-g-\bar{\gamma}g_{2}-\xi}\), \(g_{2}\leftarrow\frac{c_{2}}{g_{2}-g-\bar{\gamma}g_{1}-\xi}\), \(g_{3}\leftarrow\frac{c_{3}}{g_{3}-g-\xi}\), \(g\leftarrow g_{1}+g_{2}+g_{3}\).
end while
```
**Algorithm 1** Limiting Stieltjes transform as per Theorem 1
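A direct Python transcription of the fixed-point system of Theorem 1 is given below. To keep it self-contained, \(\bar{\gamma}\) is passed as an argument (the "compute \(\bar{\gamma}\)" option); the initialization, the damping and the stopping rule are simple choices of ours and may need tuning for points \(\xi\) close to the support.

```
import numpy as np

def limiting_stieltjes(xi, c1, c2, c3, gamma_bar, tol=1e-12, max_iter=100_000):
    """Fixed-point iteration for g_1, g_2, g_3 of Theorem 1 at a point xi off Supp(nu).

    gamma_bar = beta_T^2 * alpha_3^2 / (c1 + c2) is assumed known (Theorem 2 / Algo. 2).
    """
    g1 = g2 = g3 = -1.0 / xi                      # rough initialization from g(xi) ~ -1/xi
    for _ in range(max_iter):
        g = g1 + g2 + g3
        g1_new = c1 / (g1 - g - gamma_bar * g2 - xi)
        g2_new = c2 / (g2 - g - gamma_bar * g1 - xi)
        g3_new = c3 / (g3 - g - xi)
        if max(abs(g1_new - g1), abs(g2_new - g2), abs(g3_new - g3)) < tol:
            g1, g2, g3 = g1_new, g2_new, g3_new
            break
        # damped update for stability near the edges of the support
        g1, g2, g3 = 0.5 * (g1 + g1_new), 0.5 * (g2 + g2_new), 0.5 * (g3 + g3_new)
    return g1 + g2 + g3, (g1, g2, g3)

# Density of nu at a real x (inversion formula): limiting_stieltjes(x + 1e-4j, ...)[0].imag / np.pi
```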
### Asymptotic Summary Statistics
In the previous subsection, we have shown that the empirical spectral measure of \(\mathbf{H}\) or \(\mathbf{\Phi}\) converges to some deterministic measure \(\nu\) as we depicted in Figure 1. Specifically, we notice that the measure \(\nu\) has a compact support that depends on the various parameters of the model. In what follows, we will need to evaluate the corresponding Stieltjes transform \(g\) at the asymptotic spectral norm \(\bar{\lambda}\), and therefore the latter must lie outside the support of \(\nu\) as per the following assumption. In fact, this assumption has also been made by (Goulart et al., 2022; Seddik et al., 2021).
**Assumption 2**: _Assume that \(\bar{\lambda}\notin\operatorname{Supp}(\nu)\) and \(\alpha_{i}>0\) for all \(i\in[3]\), with \(\nu\) given by Theorem 1._
**Remark 4** (On Assumption 2): _For any critical point \((\lambda,\mathbf{u},\mathbf{v},\mathbf{w})\) of problem (2), as we saw in Remark 3, \(\mathbf{\Phi}\) has an eigenvalue \(2\lambda\). In particular, for a local maximum, \(2\lambda\) is in fact its largest eigenvalue (Seddik et al., 2021). Furthermore, by studying the Hessian of that problem (which is related to \(\mathbf{\Phi}\)) at a maximum one can also show that \(\lambda\) is at least as large as the second largest eigenvalue of \(\mathbf{\Phi}\) (which is almost surely close to the right edge of the measure \(\nu\)). Hence, the above condition in Assumption 2 is slightly stronger, only requiring that inequality to hold strictly. See also (Goulart et al., 2022) for a similar discussion in the case of a symmetric spiked tensor model._
We are now in place to provide our main result which characterizes the asymptotic summary statistics \((\bar{\lambda},\alpha_{1},\alpha_{2},\alpha_{3})\) given the signal-to-noise ratios \((\beta_{M},\beta_{T})\) and the dimension ratios \((c_{1},c_{2},c_{3})\).
**Theorem 2** (Asymptotic summary statistics): _Let us define the following functions for \(i\in[2]\):_
\[q_{i}(\xi) =\sqrt{1-\frac{[1+\gamma(\xi)]g_{i}^{2}(\xi)}{c_{i}}},\quad q_{3} (\xi)=\sqrt{1-\frac{g_{3}^{2}(\xi)}{c_{3}}},\quad\gamma(\xi)=\frac{\beta_{T}^{ 2}q_{3}^{2}(\xi)}{c_{1}+c_{2}},\] \[f(\xi) =\xi+[1+\gamma(\xi)]g(\xi)-\gamma(\xi)g_{3}(\xi)-\beta_{T}\beta_{M }\prod_{i=1}^{3}q_{i}(\xi).\]
_Then, under Assumptions 1 and 2, the asymptotic spectral norm \(\bar{\lambda}\) satisfies \(f(\bar{\lambda})=0\) and the asymptotic alignments are given by \(\alpha_{i}=q_{i}(\bar{\lambda})\) (in particular, \(\bar{\gamma}=\gamma(\bar{\lambda})\))._
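Numerically, \(\bar{\lambda}\) can be obtained by a one-dimensional root search on \(f\) to the right of the bulk. The sketch below reuses `limiting_stieltjes` from the previous block, resolves the circular dependence between \(\gamma\) and the \(g_{i}\)'s with a simple inner loop, and evaluates \(f\) slightly off the real axis (taking real parts), as mentioned in the text. The bracket endpoints and tolerances are assumptions of ours, and the paper's Algorithm 2 (not reproduced in this excerpt) may proceed differently.

```
import numpy as np
from scipy.optimize import brentq

def summary_stats(c1, c2, c3, beta_T, beta_M, lam_lo, lam_hi, eps=1e-6):
    """Root search for f(lam) = 0 of Theorem 2 on a user-supplied bracket [lam_lo, lam_hi]."""
    def eval_f(lam):
        xi = lam + 1j * eps                       # keep xi slightly off the real axis
        gamma = 0.0
        for _ in range(200):                      # self-consistent loop over gamma(xi)
            g, (g1, g2, g3) = limiting_stieltjes(xi, c1, c2, c3, gamma)
            q3_sq = 1 - g3 ** 2 / c3
            gamma_new = beta_T ** 2 * q3_sq / (c1 + c2)
            if abs(gamma_new - gamma) < 1e-12:
                break
            gamma = gamma_new
        q1 = np.sqrt(1 - (1 + gamma) * g1 ** 2 / c1)
        q2 = np.sqrt(1 - (1 + gamma) * g2 ** 2 / c2)
        q3 = np.sqrt(q3_sq)
        f_val = xi + (1 + gamma) * g - gamma * g3 - beta_T * beta_M * q1 * q2 * q3
        return f_val.real, (q1.real, q2.real, q3.real)
    lam_bar = brentq(lambda lam: eval_f(lam)[0], lam_lo, lam_hi)
    _, (a1, a2, a3) = eval_f(lam_bar)
    return lam_bar, a1, a2, a3                    # (lambda_bar, alpha_1, alpha_2, alpha_3)
```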
Theorems 1 and 2 show that the spectral behavior of the random matrix \(\mathbf{\Phi}\) is fully described by its limiting spectral measure \(\nu\) and the position of the limiting singular value \(\bar{\lambda}\). This is illustrated by Figure 1 which depicts the empirical spectrum of \(\mathbf{\Phi}\) along with the limiting measure \(\nu\) as per Theorem 1 and the asymptotic spikes computed via Theorem 2. As we discussed earlier in Remark 3, the spectrum of \(\mathbf{\Phi}\) consists of a "bulk" of eigenvalues spread around \(0\) and two isolated eigenvalues at positions \(2\lambda\) and \(-\lambda\) with multiplicities \(1\) and \(2\) respectively. In fact, the spike at position \(-\lambda\) is only visible when the signal-to-noise ratios \((\beta_{M},\beta_{T})\) are large enough, and this basically corresponds to the situation where it is theoretically possible to estimate the signal components \((\boldsymbol{x},\boldsymbol{y},\boldsymbol{z})\) from the tensor \(\mathbf{T}\). In addition, note that Assumption 2 holds when a spike is visible at the position \(-\lambda\). This _phase transition_ phenomenon is highlighted in Figure 2 where we vary the signal-to-noise ratios \((\beta_{M},\beta_{T})\). In particular, roughly speaking, the parameter \(\beta_{T}\) affects the "shape" of limiting distribution \(\nu\) while \(\beta_{M}\) determines the position of the isolated spikes. Besides, note that in the situations where \(\bar{\lambda}\) lies inside the support of \(\nu\), we solve numerically the equation \(f(\bar{\lambda}+i\varepsilon)=0\) for some small value \(\varepsilon\) (and take the real parts of \(g\) and \(g_{i}\)'s), which allows us to circumvent Assumption 2 in this case.
Figure 3 in turn depicts the empirical versus asymptotic summary statistics when varying the parameter \(\beta_{M}\) (with \(\beta_{T}\) being fixed) and shows that the empirical quantities are accurately predicted by the theoretical counterparts. Moreover, as in standard spiked random matrix models, our results show that there exists a _phase transition_, i.e., a minimum value for \(\beta_{M}\) above which the singular vectors along the modes \(1\) and \(2\)\((\boldsymbol{u},\boldsymbol{v})\) start to correlate with the matrix signal components \((\boldsymbol{x},\boldsymbol{y})\). However, below this critical value of \(\beta_{M}\), \(\alpha_{1}\) and \(\alpha_{2}\) are vanishing while \(\alpha_{3}\approx 1\). The continuity of the curves of \(\alpha_{1}\) and \(\alpha_{2}\) when varying \(\beta_{M}\) is a typical characteristic of spiked matrices as per the classical BBP phase transition phenomenon (Baik et al., 2005). Besides, for smaller values of \(\beta_{T}\) (below some critical value), the curves of \(\alpha_{1}\) and \(\alpha_{2}\) start to become discontinuous as per Figure 7 in
Figure 2: Limiting spectrum and isolated spikes of \(\mathbf{\Phi}\) for \(n_{1}=80,n_{2}=100,n_{3}=90\) and varying \((\beta_{T},\beta_{M})\). For small values of \((\beta_{T},\beta_{M})\) the “bulk” is described by a semi-circle-like distribution. Large values of \(\beta_{T}\) affect the “shape” of the limiting measure while larger values of \(\beta_{M}\) control the position of the isolated spikes.
Appendix B which is commonly observed in spiked tensor models (Jagannath et al., 2020). In this sense, the nested matrix-tensor model is a sort of "interpolating model" between spiked matrices and tensors (see Appendix B for additional simulations), as far as a spectral estimator of the spike is concerned.
**Remark 5** (Computation of \(\alpha_{3}\) below the phase transition): _Even though Assumption 2 is not valid in the regime where \(\beta_{M}\) is below its critical value (because an isolated spike at position \(-\bar{\lambda}\) outside the support of \(\nu\) is not present in this case), numerically computed solutions for \(f(\bar{\lambda}+i\varepsilon)=0\) with a small \(\varepsilon>0\) seem to accurately estimate \(\alpha_{3}\) as per Fig. 3 (whereas \(f(\xi)\) is not defined at \(\bar{\lambda}\) since it depends on \(g(\xi)\), which is undefined inside the support of \(\nu\)). Yet, we currently do not have a rigorous justification for this intriguing property._
## 4 Application to Multi-view Clustering
Now we illustrate the application of Theorem 2 to the assessment of the performance of a simple multi-view spectral clustering approach. As we presented in the introduction, we consider that we observe a tensor \(\mathbf{X}\) of \(n\) data points of dimension \(p\) along \(m\) different views:
\[\mathbf{X}=\left(\boldsymbol{\mu}\bar{\boldsymbol{y}}^{\top}+\mathbf{Z} \right)\otimes\boldsymbol{h}+\mathbf{W},\quad\mathbf{Z}_{ij}\sim\mathcal{N} \left(0,\frac{1}{p+n}\right),\;\mathbf{W}_{ijk}\sim\mathcal{N}\left(0,\frac{ 1}{p+n+m}\right), \tag{6}\]
where \(\boldsymbol{\mu}\in\mathbb{R}^{p}\) models the cluster means (\(-\boldsymbol{\mu}\) or \(\boldsymbol{\mu}\)), \(\bar{\boldsymbol{y}}=\boldsymbol{y}/\sqrt{n}\) with \(\boldsymbol{y}\in\{-1,1\}^{n}\) corresponding to the data labels (either \(-1\) or \(1\)) and \(\boldsymbol{h}\in\mathbb{R}_{+}^{m}\) is related to the variances along the different views. In particular, the case \(m=1\) corresponds to the classical binary Gaussian isotropic model of centroids \(\pm\boldsymbol{\mu}\) in which case the tensor \(\mathbf{X}\) becomes a matrix of the form \(\mathbf{X}=\boldsymbol{\mu}\bar{\boldsymbol{y}}^{\top}+\mathbf{Z}\). Figure 4 depicts the multi-view model in Eq. (6) for \(p=2\) and \(m=4\), where the first class is represented by dots and the second class is depicted by crosses, while the different views are illustrated with different colors. Observing the tensor \(\mathbf{X}\), the clustering of the different data points would consist in estimating the labels vector \(\boldsymbol{y}\). Indeed, this can be performed by computing the best rank-one approximation of \(\mathbf{X}\) (denoted \(\lambda\boldsymbol{u}\otimes\hat{\boldsymbol{y}}\otimes\boldsymbol{w}\)), and depending on the class separability condition (i.e. if \(\|\boldsymbol{\mu}\|\) and \(\|\boldsymbol{h}\|\) are large enough), the \(2\)-mode singular vector of \(\mathbf{X}\) will start to correlate with \(\boldsymbol{y}\) thereby providing a clustering of the data samples. Our aim is to quantify the performance of this multi-view spectral clustering approach in terms of the different parameters, i.e., the dimensions
Figure 4: Illustration of the multi-view model in Eq. (6) for \(p=2,n=1000,m=4,\|\boldsymbol{\mu}\|=5\) and \(\|\boldsymbol{h}\|=3\). The first class is represented by dots and the second class by crosses. The different colors represent the views.
Figure 3: Empirical versus asymptotic summary statistics for \(n_{1}=40,n_{2}=110,n_{3}=90,\beta_{T}=2\) and varying \(\beta_{M}\). Simulations are obtained by averaging over \(10\) independent realizations of the random matrix \(\mathbf{Z}\) and tensor \(\mathbf{W}\). Our results exhibit a phase transition when varying \(\beta_{M}\) above which the matrix components \((\boldsymbol{x},\boldsymbol{y})\) become estimable.
\(n,p,m\) and the quantities \(\|\mathbf{\mu}\|\) and \(\|\mathbf{h}\|\). The following proposition precisely characterizes the theoretical performance of the multi-view spectral clustering method under the growth rate assumptions below.
**Assumption 3** (Growth rate): _Assume that as \(p,n,m\to\infty\), \(\|\mathbf{\mu}\|,\|\mathbf{h}\|=O(1)\) and denote \(c_{p}=\lim\frac{p}{N}>0,c_{n}=\lim\frac{n}{N}>0,c_{m}=\lim\frac{m}{N}>0\) with \(N=p+n+m\)._
**Proposition 1** (Performance of multi-view spectral clustering): _Let \(\hat{\mathbf{y}}\) be the \(2^{\text{nd}}\) mode vector of the best rank-one approximation of the data tensor \(\mathbf{X}\). The estimated label for the sample \(\mathbf{X}_{\cdot,i,j}\) is given by \(\hat{\ell}_{i}=\operatorname{sign}(\hat{y}_{i})\) for all \(j\in[m]\) and let \(\mathcal{L}_{0/1}=\frac{1}{n}\sum_{i=1}^{n}\mathbb{I}\{\hat{\ell}_{i}\neq y_{ i}\}\) be the corresponding \(0/1\)-loss. We have under Assumption 3:_
\[(1-\alpha^{2})^{-\frac{1}{2}}\left[\sqrt{n}\hat{y}_{i}-\alpha y_{i}\right] \xrightarrow{\mathcal{D}}\mathcal{N}(0,1),\]
_where \(\alpha=q_{2}(\bar{\lambda})\) with \(q_{2}(\cdot)\) and \(\bar{\lambda}\) defined as per Theorem 2 for \((c_{1},c_{2},c_{3})=(c_{p},c_{n},c_{m})\) and \((\beta_{M},\beta_{T})=(\|\mathbf{\mu}\|,\|\mathbf{h}\|)\). Moreover, the clustering accuracy \(\max\left(\mathcal{L}_{0/1},1-\mathcal{L}_{0/1}\right)\) converges almost surely to \(\varphi\left(\alpha/\sqrt{1-\alpha^{2}}\right)\) with \(\varphi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-t^{2}/2}dt\)._
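In practice, the proposition translates into the short recipe below: the theoretical accuracy only needs the alignment \(\alpha\) from Theorem 2, while the empirical accuracy uses the sign of the second-mode component of the best rank-one approximation. The sketch reuses the `rank_one_approx` power-iteration helper defined earlier and resolves the global sign ambiguity by taking the better of the two labelings.

```
import numpy as np
from scipy.stats import norm

def theoretical_accuracy(alpha):
    """Asymptotic clustering accuracy of Proposition 1, phi(alpha / sqrt(1 - alpha^2))."""
    return norm.cdf(alpha / np.sqrt(1.0 - alpha ** 2))

def empirical_accuracy(X, y_true, n_iter=200):
    """Accuracy of sign(hat_y) from the best rank-one approximation of the data tensor X."""
    _, _, y_hat, _ = rank_one_approx(X, n_iter=n_iter)   # second-mode component
    labels = np.sign(y_hat)
    acc = np.mean(labels == y_true)
    return max(acc, 1.0 - acc)                           # sign of hat_y is arbitrary
```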
Proposition 1 states that the entries of the 2-mode singular vector corresponding to the largest singular value of \(\mathbf{X}\) are Gaussian random variables, with mean and variance depending on the dimension ratios \((c_{p},c_{n},c_{m})\) and the parameters \((\|\mathbf{\mu}\|,\|\mathbf{h}\|)\) through the asymptotic alignment \(\alpha\) obtained thanks to Theorem 2. In fact, Figure 5 (Left) illustrates this Gaussianity by depicting the entries of the vector \(\sqrt{n}\hat{\mathbf{y}}\) and the corresponding normal distributions. Furthermore, the theoretical accuracy \(\varphi\left(\alpha/\sqrt{1-\alpha^{2}}\right)\) is also depicted in Figure 5 (Right), from which we notice that the empirical performance is accurately anticipated. Essentially, for a fixed value of \(\|\mathbf{h}\|\), our results show that there exists a minimal value of the class separability \(\|\mathbf{\mu}\|\) below which the obtained accuracy is no better than a random guess; in fact, this minimal value of \(\|\mathbf{\mu}\|\) is related to the _phase transition phenomenon_ discussed in the previous section. In addition, we highlight that the considered tensor-based multi-view clustering approach provides better accuracy compared to a tensor unfolding approach, which consists in computing the top left singular vector of the unfolding of \(\mathbf{X}\) along the second mode (Ben Arous et al., 2021), and therefore does not consider the hidden rank-one structure.
## 5 Conclusion & Perspectives
We introduced the nested matrix-tensor model and provided a high-dimensional analysis of its best rank-one approximation, relying on random matrix theory. Our analysis has brought theoretical insights into the problem of multi-view clustering and demonstrates the ability of random matrix
Figure 5: **(Left) Histogram of the entries of \(\sqrt{n}\hat{\mathbf{y}}\) for \(p=200,n=800,m=100,\|\mathbf{\mu}\|=1.5\) and \(\|\mathbf{h}\|=2\) with the corresponding Gaussian limit as per Proposition 1. (Right) Empirical versus theoretical multi-view clustering performance as per Proposition 1 for \(p=150,n=300,m=60\) and varying \(\|\mathbf{\mu}\|,\|\mathbf{h}\|\). The dashed curves correspond to tensor unfolding which discards the rank-one structure of the data and therefore yields sub-optimal accuracy.**
tools to assess the theoretical performance of the considered clustering method. This paves the way for an elaborated theoretical assessment and improvement of more sophisticated tensor-based methods. In particular, our present findings address only the case of binary clustering by considering the rank-one matrix model \(\mathbf{\mu y}^{\top}+\mathbf{Z}\) which can be extended to higher ranks, thereby modeling a multi-class problem. Besides, such an extension would require the analysis of more sophisticated tensor methods (e.g. the block-term decomposition (De Lathauwer, 2008)) which is more challenging compared to the present best rank-one estimator. Nevertheless, we believe our present work constitutes a fundamental basis for the development of more general results. |
2309.08592 | Confirmation of an anomalously low dark matter content for the galaxy
NGC1052-DF4 from deep, high resolution continuum spectroscopy | NGC1052-DF4 was found to be the second "galaxy lacking dark matter" in the
NGC1052 group, based on its velocity dispersion of $\sigma_{\rm
gc}=4.2^{+4.4}_{-2.2}$ km/s as measured from the radial velocities of seven of
its globular clusters. Here we verify this result by measuring the stellar
velocity dispersion of the galaxy. We observed the diffuse stellar light in
NGC1052-DF4 with the Keck Cosmic Web Imager (KCWI) in its highest resolution
mode, with $\sigma_{\mathrm{instr}}\approx 7$ km/s. With a total science + sky
exposure time of 34hrs, the resulting spectrum is exceptional both in its
spectral resolution and its S/N ratio of 23\r{A}$^{-1}$. We find a stellar
velocity dispersion of $\sigma_{\rm stars} = 8.0^{+2.3}_{-1.9}$ km/s,
consistent with the previous measurement from the globular clusters. Combining
both measurements gives a fiducial dispersion of $\sigma_{\rm f} =
6.3_{-1.6}^{+2.5}$ km/s. The implied dynamical mass within the half-light
radius is $8_{-4}^{+6} \times 10^7 M_{\odot}$. The expected velocity dispersion
of NGC1052-DF4 from the stellar mass alone is $7 \pm 1$ km/s, and for an NFW
halo that follows the stellar mass -- halo mass relation and the halo mass --
concentration relation, the expectation is $\sim 30$ km/s. The low velocity
dispersion rules out a normal NFW dark matter halo, and we confirm that
NGC1052-DF4 is one of at least two galaxies in the NGC1052 group that have an
anomalously low dark matter content. While any viable model for their formation
should explain the properties of both galaxies, we note that NGC1052-DF4 now
poses the largest challenge as it has the most stringent constraints on its
dynamical mass. | Zili Shen, Pieter van Dokkum, Shany Danieli | 2023-09-15T17:46:18Z | http://arxiv.org/abs/2309.08592v2 | Confirmation of an anomalously low dark matter content for the galaxy NGC1052-DF4 from deep, high resolution continuum spectroscopy
###### Abstract
NGC1052-DF4 was found to be the second "galaxy lacking dark matter" in the NGC1052 group, based on its velocity dispersion of \(\sigma_{\rm gc}=4.2^{+4.4}_{-2.2}\) km s\({}^{-1}\) as measured from the radial velocities of seven of its globular clusters. Here we verify this result by measuring the stellar velocity dispersion of the galaxy. We observed the diffuse stellar light in NGC1052-DF4 with the Keck Cosmic Web Imager (KCWI) in its highest resolution mode, with \(\sigma_{\rm instr}\approx 7\) km s\({}^{-1}\). With a total science + sky exposure time of 34 hrs, the resulting spectrum is exceptional both in its spectral resolution and its S/N ratio of 23 A\({}^{-1}\). We find a stellar velocity dispersion of \(\sigma_{\rm stars}=8.0^{+2.3}_{-1.9}\) km s\({}^{-1}\), consistent with the previous measurement from the globular clusters. Combining both measurements gives a fiducial dispersion of \(\sigma_{\rm f}=6.3^{+2.5}_{-1.6}\) km s\({}^{-1}\). The implied dynamical mass within the half-light radius is \(8^{+6}_{-4}\times 10^{7}M_{\odot}\). The expected velocity dispersion of NGC1052-DF4 from the stellar mass alone is \(7\pm 1\) km s\({}^{-1}\), and for an NFW halo that follows the stellar mass - halo mass relation and the halo mass - concentration relation, the expectation is \(\sim 30\) km s\({}^{-1}\). The low velocity dispersion rules out a normal NFW dark matter halo, and we confirm that NGC1052-DF4 is one of at least two galaxies in the NGC1052 group that have an anomalously low dark matter content. While any viable model for their formation should explain the properties of both galaxies, we note that NGC1052-DF4 now poses the largest challenge as it has the most stringent constraints on its dynamical mass.
High resolution spectroscopy (2096) - Stellar kinematics (1608) - Galaxy kinematics (602)
## 1 Introduction
NGC1052-DF4 (or DF4) was the "second galaxy missing dark matter", with an extremely low velocity dispersion that is consistent with its stellar mass alone (van Dokkum et al., 2019). Its velocity dispersion of \(4.2^{+4.4}_{-2.2}\) km s\({}^{-1}\) was measured from seven globular clusters that are associated with DF4. It shares many puzzling properties with the "first galaxy missing dark matter", NGC1052-DF2 (DF2, van Dokkum et al., 2018). Located in the NGC1052 group (\(D_{\rm DF2}=21.7\) Mpc, Appendix A; \(D_{\rm DF4}=20.0\) Mpc, Danieli et al., 2020), both galaxies have low velocity dispersions that imply an extreme deficit of dark matter, and host a population of overluminous globular clusters (Shen et al., 2021). In addition, DF2 and DF4 have the same color in their globular clusters and the diffuse stellar light (van Dokkum et al., 2022).
Many scenarios have been proposed to explain why a single galaxy might appear to be lacking dark matter in the present day. One scenario is that DF2 is on a special orbit, such that dark matter was tidally stripped by close encounters with NGC1052 (Ogiya, 2018; Nusser, 2020; Maccio et al., 2021; Jackson et al., 2021; Ogiya et al., 2022; Moreno et al., 2022). Another idea is that the galaxy is not pressure supported but is rotating, and viewed nearly face-on. Lewis et al. (2020) claimed that the globular clusters in DF2 are in a rotating plane, even though spatially-resolved spectroscopy of the diffuse light does not show evidence for rotation (Danieli et al., 2019; Emsellem et al., 2019). Montes et al. (2021) suggested that
DF2 is a low-inclination disk galaxy, based on a photometric analysis.
While such "tail of the distribution" explanations can be invoked for a single unusual galaxy, the similarity of DF2 and DF4 strongly suggests a common origin. Keim et al. (2022) showed that the similarity extends even to their tidal distortions, with both galaxies showing the same ellipticity and (absolute) orientation as a function of radius. It is also unlikely that both galaxies are face-on disks. Silk (2019) proposed the "mini bullet-cluster" event, where a high-speed collision of two progenitor gas-rich galaxies produced a dark-matter deficient galaxy. During such a collision, the collision-less dark matter halos and stars would keep moving in their orbits, while the collisional gas would be separated. Shocks and gravitational collapse could then lead to the formation of stars and globular clusters. Following the Silk (2019) idea, van Dokkum et al. (2022) suggested that DF2 and DF4 are both remnants of the same collision that occurred about 9 Gyr ago and discovered a linear trail of \(\approx 10\) low-surface brightness galaxies in the NGC1052 group which are thought to have formed in the aftermath of the collision.
Given the various formation scenarios, the definite dark matter content of DF2 and DF4 carries important implications. For DF2, the velocity dispersion has been independently inferred from its GCs (\(7.8^{+5.2}_{-2.2}\), van Dokkum et al., 2018; \(10.5^{+4.0}_{-2.2}\), Emsellem et al., 2019) and from the velocity dispersion of its diffuse stellar light (\(8.5^{+2.3}_{-3.1}\), Danieli et al., 2019). However, for DF4, its velocity dispersion has been measured only once with its seven GCs (van Dokkum et al., 2019), probing the enclosed mass within 7 kpc.
An accurate stellar velocity dispersion for DF4 would not only test whether the galaxy is indeed dark matter-deficient but also help constrain the formation of dark-matter-deficient galaxies. For example, in stripping scenarios one might expect some dark matter to remain with the galaxy, whereas in the bullet scenario the galaxy should be truly devoid of dark matter. Distinguishing a velocity dispersion of, say, \(12\,\,{\rm km\,s^{-1}}\,\) from \(7\,\,{\rm km\,s^{-1}}\,\) requires high spectral resolution as well as high sensitivity, which is particularly difficult to achieve
Figure 1: _HST/ACS_ color image of NGC1052-DF4, created from \(V_{606}\) and \(I_{814}\) bands. The white rectangles represent the KCWI pointings: due to dithering, the science field that covers the diffuse light of the galaxy is bigger than the sky field, used for modeling the sky background. The inset panels show the KCWI data cubes flattened in the spectral dimension. The mean radius of the science field covers 95% of \(R_{e}\).
for low-surface-brightness galaxies. Integral Field Unit (IFU) spectrographs can act as a "light bucket" to build up signal-to-noise for these faint and large galaxies (e.g. Danieli et al., 2019; Emsellem et al., 2019; Gannon et al., 2023). The observed broadening of spectral features is
\[\sigma_{\rm obs}^{2}=\sigma_{\rm instrument}^{2}+\sigma_{\rm galaxy}^{2}.\]
Because of this quadratic behavior, a spectral resolution
\[\sigma_{\rm instrument}\sim\sigma_{\rm galaxy}\]
is required.
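To see numerically why this matters, the short sketch below (illustrative values only, not taken from the observations) compares how strongly the observed line width responds to the intrinsic galaxy dispersion for a KCWI-like resolution and for a much coarser one.

```python
# Illustrative only: sensitivity of the observed width to the galaxy dispersion
# for two hypothetical instrumental resolutions (values in km/s).
import numpy as np

sigma_galaxy = 7.0                      # of order the dispersion expected for DF4

for sigma_instr in (7.0, 30.0):         # KCWI-like vs. a coarser spectrograph
    sigma_obs = np.hypot(sigma_instr, sigma_galaxy)
    sensitivity = sigma_galaxy / sigma_obs   # d(sigma_obs)/d(sigma_galaxy)
    print(f"sigma_instr={sigma_instr:5.1f}  sigma_obs={sigma_obs:5.1f}  "
          f"sensitivity={sensitivity:.2f}")
# sigma_instr ~ sigma_galaxy gives sensitivity ~0.7; a 30 km/s instrument
# dilutes the signal to ~0.2, so small dispersions become unmeasurable.
```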
For the expected intrinsic velocity dispersion of DF4, the Keck Cosmic Web Imager (KCWI) is the only IFU that can achieve this instrumental resolution. Combining its "light bucket" capability with the light gathering power of the Keck telescope, KCWI allows us to take the highest resolution spectrum of any low-surface-brightness galaxy. In this paper, we present KCWI spectroscopy of DF4 with an instrumental resolution of \(7\,{\rm km\,s^{-1}}\) (Section 3). We measure the velocity dispersion of the diffuse stellar component of DF4 (Section 4) and confirm that it contains little to no measurable dark matter.
## 2 Observation and Data Reduction
### KCWI spectroscopy
IFU spectroscopy of DF4 was obtained with KCWI on Keck II in October 2019 and November 2021 for a total of four nights. The small image slicer was used with the high-resolution BH2 grating, resulting in a field-of-view (FOV) of \(8^{\prime\prime}\times 20^{\prime\prime}\) (\(0.8\times 1.9\,{\rm kpc}\)) and an approximate spectral resolution of \(R\sim 18,000\). The data were taken with \(1\times 1\) binning and sky position angle of 0. The central wavelength was \(4250\)A and the wavelength coverage was \(4040-4450\)A.
NGC1052-DF4's angular size (\(R_{e}=16.5^{\prime\prime}=1.6\,{\rm kpc}\)) is larger than the KCWI FOV, which means that offset exposures have to be used to characterize the sky emission. As illustrated in Figure 1, we alternate between two pointings. In the first, "science" exposures were taken with the KCWI FOV placed on the center of NGC1052-DF4. Due to dithering, the effective FOV is \(12.25^{\prime\prime}\times 20.4^{\prime\prime}\) (\(1.2\times 1.9\,{\rm kpc}\)), covering the inner 95% \(R_{e}\) of the diffuse stellar component, as well as GC-1968 and GC-2239, globular clusters associated with the galaxy. In the second pointing, offset "sky" exposures were taken with the FOV placed on a field \(41.6^{\prime\prime}\) away, centered on the globular cluster GC-2537. The globular clusters take up only a small fraction of the KCWI area, and are masked in the analysis presented in this paper. In the data reduction process (described in Section 2.2), sky subtraction was done separately for each epoch.
In this high resolution mode the read noise of the KCWI detector exceeds the sky noise in short exposures. The exposure time is therefore a compromise between getting as close to sky-limited data as possible, while still having enough number of science and sky exposures to perform the data reduction. We chose exposure times of 3600 s at each position, the approximate transition point where the typical sky noise begins to exceed the read noise. The total exposure time is 50,400 s on the galaxy and 32,400 s on the offset field. The total science + sky time that is used in the analysis is therefore 34 hours. Conditions were somewhat variable, with thin cirrus present during 2019 and clear skies in 2021.
### Data Reduction
The KCWI Data Reduction Pipeline (KCWI DRP) was used to perform basic reduction and calibration of the data 1. A background gradient was manually removed from the master dark created by the KCWI DRP. A 9th-order polynomial was fitted to each row and the average fit was subtracted from the master dark. In addition to the master dark, the event table was also edited to skip the default sky subtraction. The rest of the data reduction procedures were set to their defaults. The KCWI DRP treats each of the 34 science and sky frames independently. "Bars" exposures are used to identify geometric transformations required to map each pixel in the 2D image into slice and position; the arc lamp exposures provide the wavelength solution. These transformations were used to convert the raw science image into a data cube with three dimensions: slice number, position along the slice, and wavelength. These data cubes, dubbed icubed files by the KCWI DRP but without sky subtraction, were used in the subsequent steps.
Footnote 1: [https://github.com/Keck-DataReductionPipelines/KCWI_DRP](https://github.com/Keck-DataReductionPipelines/KCWI_DRP)
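The gradient removal from the master dark described above is essentially a row-by-row polynomial fit; a minimal numpy sketch of that step is given below (the array name is hypothetical, the actual operation is performed on the KCWI DRP master dark frame).

```python
# Minimal sketch of the master-dark gradient removal; `master_dark` is a
# hypothetical 2D array (rows x columns).
import numpy as np

def remove_dark_gradient(master_dark, order=9):
    ny, nx = master_dark.shape
    x = np.arange(nx)
    # Fit a 9th-order polynomial to each row ...
    fits = np.array([np.polyval(np.polyfit(x, row, order), x)
                     for row in master_dark])
    # ... and subtract the average fit from the master dark.
    return master_dark - fits.mean(axis=0)
```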
Sky subtraction was carried out separately with a custom method that is optimized for low-surface-brightness galaxies. The sky frames could not be directly subtracted from adjacent science frames because the sky spectrum varied significantly over the hour-long interval between successive exposures. Instead, variation in the sky spectrum was modelled with a principal component analysis (PCA). The method was introduced and explained in detail in van Dokkum et al. (2019) and was also used for KCWI spectroscopy of NGC1052-DF2 (Danieli et al., 2019).
To summarize the procedure, principal components extracted from the sky spectra were used as templates to fit the science frames. The sky data set included both
the offset field and empty field exposures at the beginning and end of each night. All sky frames were masked to exclude GCs and bad pixels, then averaged over the two spatial dimensions. Due to the low signal-to-noise in each pixel, it was essential to discard sky pixels that deviate \(\geq 5\sigma\) from the median before taking the average. The resulting 1D sky spectra were grouped by observing epoch (2019 and 2021) and separately decomposed into eight components with scikit-learn PCA. For each epoch, a fitting template was compiled from the eigenspectra, the epoch-average 1D sky spectra, and a model of the galaxy spectrum. These templates were fitted to each science exposure in the same epoch with their amplitudes as fitting parameters. The best-fit 1D sky model was subtracted from each spatial pixel. We refer to van Dokkum et al. (2019) for details.
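A minimal sketch of this PCA-based sky model is given below, assuming `sky_1d` holds the masked, spatially averaged 1D sky spectra (one row per exposure), `science_1d` is one spatially averaged science spectrum, and `galaxy_model` is a model galaxy spectrum; all names are illustrative and the full method is described in van Dokkum et al. (2019).

```python
# Illustrative sketch of the PCA sky subtraction, not the actual pipeline.
import numpy as np
from sklearn.decomposition import PCA

pca = PCA(n_components=8)
pca.fit(sky_1d)                                   # (n_exposures, n_wavelength)

# Fitting template: eigenspectra + epoch-average sky + galaxy model.
templates = np.vstack([pca.components_, sky_1d.mean(axis=0), galaxy_model])

# Best-fit amplitudes (linear least squares), then subtract the sky part only.
amps, *_ = np.linalg.lstsq(templates.T, science_1d, rcond=None)
sky_model = amps[:-1] @ templates[:-1]            # exclude the galaxy component
science_sky_subtracted = science_1d - sky_model
```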
After sky subtraction a 1D spectrum was extracted from each science datacube. Again, the pixels were averaged over and any pixels that deviated \(\geq 5\sigma\) from the median were not included in the 1D average spectra. The final spectrum is a weighted average of the 14 science spectra, where the weights were determined from the integrated GC flux of each datacube. The final combined spectrum is shown in Fig. 2.
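This extraction and combination step can be summarised by the following sketch, with hypothetical inputs: `cubes` is a list of sky-subtracted datacubes reshaped to (spaxels, wavelength) and `gc_fluxes` are the integrated GC fluxes used as weights.

```python
# Illustrative sketch of the sigma-clipped 1D extraction and weighted average.
import numpy as np

def extract_1d(cube, clip=5.0):
    med = np.median(cube, axis=0)
    sig = np.std(cube, axis=0)
    good = np.abs(cube - med) < clip * sig        # discard >= 5 sigma outliers
    return np.nanmean(np.where(good, cube, np.nan), axis=0)

spectra = np.array([extract_1d(c) for c in cubes])
weights = np.asarray(gc_fluxes) / np.sum(gc_fluxes)
final_spectrum = weights @ spectra                # weighted average of 14 spectra
```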
## 3 Measuring Stellar Kinematics
### Instrumental Resolution
Since the instrumental resolution contributes to the stellar velocity dispersion measured from the spectrum, it is important to determine it as a function of wavelength. The instrumental resolution is typically determined from the width of emission lines from the ThAr arc lamp. The median width of the arc lines is \(0.115\pm 0.003\) A in 2019, and \(0.118\pm 0.002\) A in 2021. The arc frames show stable instrumental resolution between our two observing epochs.
The effective spectral resolution of the science data could differ from the arc frame resolution due to several reasons: arc frames are only taken at the beginning of night, but the science and sky frames span the whole night; the light path from the arc lamp to the detector is different from the path of the galaxy light; multiple science frames are combined in the final analysis, which can also impact the effective spectral resolution. A more empirical approach is to measure the instrumental resolution directly from the reduced sky data, using the same method that we measure velocity dispersion from the reduced science data.
Similar to van Dokkum et al. (2019), we make use of the fact that the first eigenspectra (PC1) of our 2019 data shows scattered and reflected sunlight. The instrumental resolution is determined by fitting a very high resolution solar template to PC1, with the instrumental broadening a free parameter. The high-resolution solar template is obtained from the BAse de donnees Solaire Sol (BASS20002). The template and PC1 are split into chunks of 50A and each chunk is fitted separately. Free parameters in the fit are the systemic velocity, the velocity dispersion, an additive constant, and a multiplicative constant. There is good correspondence between PC1 and the solar template, and the average instrumental resolution over the entire wavelength range is 0.107 A. The measured resolution from each chunk stays within 0.003A of the average so we will assume a constant instrumental resolution as a function of wavelength. Since this method encapsulates the effective spectral resolution, we use \(\sigma_{\rm instrumental}=0.107\) A to construct the template for the science spectrum. Using the resolution from the arc lines instead leads to 0.6 km s\({}^{-1}\) smaller final dispersions.
Footnote 2: [http://bass2000.obspm.fr/solar_spect.php](http://bass2000.obspm.fr/solar_spect.php)
### Line width
The width of the absorption lines of NGC1052-DF4 is measured by fitting template spectra to the combined 1D spectrum. The templates come from a set of synthetic stellar population synthesis models with an intrinsic wavelength resolution of 0.04A (Conroy et al., 2018). No spectroscopic metallicity has so far been measured for the diffuse light of DF4. A photometric measurement, and spectroscopic measurements for the globular clusters and diffuse light in DF2, indicate metallicities of \(-1.5\lesssim[Z/H]\lesssim-1.0\) and ages of \(7-9\) Gyr (van Dokkum et al., 2018; Fensch et al., 2019; Buzzo et al., 2022). We used two templates with metallicity \([Z/H]=-1.5\) and different ages: 9 Gyrs and 5 Gyrs. Given the age - metallicity degeneracy, the younger model is equivalent to an older model with a higher metallicity. The templates were convolved to the KCWI resolution and supplied to the MCMC fitting code.
We fit the final combined spectrum in the wavelength region 4130A\(\leq\lambda\leq 4400\)A, excluding the region between 4050A\(\leq\lambda\leq 4130\)A. A strong feature in the sky spectrum in this wavelength range caused a mismatch between the observed spectrum and the expected H\(\delta\) line, so we begin the fit at 4130A.
The velocity and velocity dispersion were determined using an MCMC methodology first described in van Dokkum et al. (2016). The algorithm finds the best linear combination of the two templates while using a seventh-order multiplicative polynomial and a second-order additive polynomial to account for the continuum.
Flat priors are used for each fitting parameter, including the redshift (0.00478) and line width (\(0<\sigma<0.5\) A). We iteratively identify outliers and mask the regions that deviate significantly from the model. This was necessary because residual instrumental effects and sky lines (e.g. around 4360A) affect a small fraction of the final spectrum. The best-fitting model is shown in Figure 2 as the red line and the masked intervals are shaded in grey. The reduced chi-square value of the best-fit model is 0.98 and the MCMC parameters converge. The posterior distribution of the systemic velocity \(v_{\rm sys}\) and the velocity dispersion \(\sigma\) is shown in Figure 3.
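The structure of such a fit can be sketched as below; this is not the authors' code, and the inputs `wave`, `flux`, `err` (combined spectrum) and `temp1`, `temp2` (templates convolved to the instrumental resolution on the same grid) are assumed. The polynomial continuum is handled by a simple linear solve at each MCMC step, which mimics but simplifies the published method.

```python
# Simplified, illustrative sketch of the template fit (assumed inputs: wave,
# flux, err, temp1, temp2).
import numpy as np
import emcee
from scipy.ndimage import gaussian_filter1d

dw = np.median(np.diff(wave))                            # wavelength step [A]
x = np.linspace(-1.0, 1.0, wave.size)
mult_basis = np.polynomial.chebyshev.chebvander(x, 7)    # multiplicative, order 7
add_basis = np.polynomial.chebyshev.chebvander(x, 2)     # additive, order 2

def model(theta):
    v, sigma, w1 = theta                           # velocity [km/s], width [A], mix
    temp = w1 * temp1 + (1.0 - w1) * temp2
    temp = gaussian_filter1d(temp, sigma / dw)     # line-width broadening
    shifted = np.interp(wave, wave * (1.0 + v / 2.998e5), temp)
    basis = np.hstack([mult_basis * shifted[:, None], add_basis])
    coeff, *_ = np.linalg.lstsq(basis, flux, rcond=None)
    return basis @ coeff

def log_prob(theta):
    v, sigma, w1 = theta
    if not (0.0 < sigma < 0.5 and 0.0 <= w1 <= 1.0):   # flat priors
        return -np.inf
    return -0.5 * np.sum(((flux - model(theta)) / err) ** 2)

p0 = [1433.0, 0.15, 0.5] + 1e-3 * np.random.randn(32, 3)  # cz ~ 0.00478 * c
sampler = emcee.EnsembleSampler(32, 3, log_prob)
sampler.run_mcmc(p0, 2000)
```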
From the high-resolution KCWI spectra of the diffuse stellar light in DF4, we measure a line width of \(\sigma_{\rm galaxy}=9.67^{+1.4}_{-0.9}\,{\rm km\,s^{-1}}\). This value includes both stellar velocity dispersion and stellar rotation.
## 4 Results
### Stellar rotation
Figure 3: The posterior distribution of the systemic velocity (\(v\)) and line width (\(\sigma\)) from MCMC fitting of the KCWI spectrum.
Figure 2: Integrated 14hr KCWI spectrum of NGC1052-DF4 (black line) with instrumental resolution of 0.107Å. The high spectral resolution reveals a large number of absorption lines with high accuracy. The 1\(\sigma\) uncertainties and masked regions are shaded in grey. The best-fit model from synthetic stellar populations is over-plotted in red.
The integrated spectrum of diffuse stellar light in a galaxy is broadened both by velocity dispersion between stars and internal motion of individual stars (largely rotation and macroturbulence):
\[\sigma_{\rm galaxy}^{2}=\sigma_{\rm stars}^{2}+\sigma_{\rm broadening}^{2}\]
For galaxies with high dark matter content, the velocity dispersion dominates. In dark-matter-deficient galaxies like DF4, the velocity dispersion is so low (\(<10\,{\rm km\,s^{-1}}\) ) that the line broadening of individual stars may have a noticeable effect on the measured line width (Carney et al., 2008; Massarotti et al., 2008). Since our synthetic stellar population synthesis template does not include intrinsic broadening, we need to estimate it separately to measure the stellar velocity dispersion.
The stellar population in DF4 is old (\(\sim 9\) Gyr) and metal-poor (Buzzo et al., 2022), mostly composed of main sequence (MS) stars and red giant branch (RGB) stars. The internal motions of metal-poor stars are not well-determined in the literature. Sun-like MS stars have rotation velocities of around 3 \({\rm km\,s^{-1}}\)(Sheminova, 2019), but metal-poor MS stars have been predicted to have rotation velocities as high as 6 \({\rm km\,s^{-1}}\)(Amard and Matt, 2020). When a MS star evolves to the RGB, its rotation rate decreases (known as spin-down), but the rate of spin down depends on metallicity. Furthermore, metal-poor red giants can have high macroturbulence (6-8 \({\rm km\,s^{-1}}\), Gray, 1982), an additional source of spectral line broadening. Observationally, Carney et al. (2008) and Massarotti et al. (2008) measured internal broadening of 2-11 \({\rm km\,s^{-1}}\) in RGB stars. From the observed metallicity vs. broadening relation in Carney et al. (2008), we infer an internal broadening of \(\sigma_{\rm broadening}=5.4\pm 2.0\) \({\rm km\,s^{-1}}\) in red giants with \([Fe/H]=-1\). The \(\pm 1\sigma\) range 3.4 - 7.4 \({\rm km\,s^{-1}}\) encompasses the plausible macroturbulence in red giants, as well as the broadening due to rotation of Sun-like stars at the low end up to low metallicity MS stars at the high end.
### Velocity dispersion
After correcting for this internal broadening of \(5.4\pm 2\) \({\rm km\,s^{-1}}\), we obtain a velocity dispersion of \(\sigma_{\rm stars}=8.0^{+2.3}_{-1.9}\) \({\rm km\,s^{-1}}\). In Fig. 4 we summarize the velocity dispersion measurements for DF2 and DF4, including the new value of the stellar dispersion. The exceptional spectral resolution of KCWI (7 \({\rm km\,s^{-1}}\) using the small slicer and the BH2 grating) produces the smallest errorbar of the five values.
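For the central values, this correction is simply a subtraction in quadrature (the quoted uncertainties follow from the full posterior rather than from this single-number estimate):

\[\sigma_{\rm stars}=\sqrt{\sigma_{\rm galaxy}^{2}-\sigma_{\rm broadening}^{2}}=\sqrt{9.67^{2}-5.4^{2}}\ {\rm km\,s^{-1}}\approx\sqrt{93.5-29.2}\ {\rm km\,s^{-1}}\approx 8.0\ {\rm km\,s^{-1}}.\]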
The only previous velocity dispersion measurement of DF4 came from 7 globular clusters (van Dokkum et al., 2019). Analyzing such a small number of dynamical tracers with regular MCMC could lead to a biased dispersion (Laporte et al., 2019), and recent papers have proposed modifying MCMC priors for an unbiased estimate (Doppel et al., 2021; Toloba et al., 2023). For the velocity dispersion of DF4, van Dokkum et al. (2019) used approximate Bayesian computation, which does not rely on explicit likelihood or prior functions and does not suffer from the small sample bias. The globular cluster dispersion of \(\sigma_{\rm GC}=4.2^{+4.4}_{-2.2}\) \({\rm km\,s^{-1}}\) covers a median galactocentric radius of 4.08 kpc (\(2.6\,R_{e}\)). Our new KCWI velocity dispersion probes the inner diffuse light out to an average radius of \(\sim 0.5\,R_{e}\). These two measurements are not sufficient to constrain the radial velocity dispersion profile, so we combine them with a simple weighted average to obtain a fiducial velocity dispersion of \(\sigma_{\rm f}=6.3^{+2.5}_{-1.6}\) \({\rm km\,s^{-1}}\) for DF4.
## 5 Discussion
We have used KCWI on the Keck II telescope to measure the stellar velocity dispersion of DF4, the "second galaxy lacking dark matter" in the NGC1052 group. Combining the stellar dispersion with the previously-measured dispersion from the velocities of globular clusters, we find a fiducial value of \(\sigma_{\rm f}=6.3^{+2.5}_{-1.6}\) \({\rm km\,s^{-1}}\) for DF4.
We calculate the dynamical mass of DF4 within the half-light radius, using the Wolf et al. (2010) relation:
\[M_{1/2}=3\,G^{-1}\langle\sigma_{\rm LOS}^{2}\rangle\,R_{1/2} \tag{1}\]
Figure 4: The best-fit velocity dispersion from this study (red star) corrected for stellar rotation and compared to literature values. For DF4, the stars alone contribute \(8\sim 10\) \({\rm km\,s^{-1}}\) (light brown band) while the expectation from the stellar-mass-halo-mass relation (Zaritsky and Behroozi, 2023), assuming a standard NFW halo, is \(\sigma_{\rm SMHM}=30\pm 5\) \({\rm km\,s^{-1}}\). The stellar contribution of DF2 is shaded in grey bands.
The projected circularized half-light radius of \(R_{e}=1.6\pm 0.1\) kpc (Cohen et al., 2018; Danieli et al., 2020) translates to a 3D half-light radius \(R_{1/2}\approx 4/3R_{e}\approx 2.1\pm 0.1\) kpc. Using the fiducial dispersion we find \(M(r<R_{1/2})=\)\(\rm{8}^{+6}_{-4}\times 10^{7}M_{\odot}\) within this radius.
This mass is fully consistent with the stellar mass within the half-light radius of \(M_{*}(r<R_{1/2})=(7.5\pm 2)\times 10^{7}M_{\odot}\) (assuming \(M/L=(2.0\pm 0.5)M_{\odot}/L_{\odot}\), van Dokkum et al. 2019), and we infer that our results are consistent with dark matter-free models. We can also rule out a standard NFW halo that follows the stellar mass - halo mass relation (Behroozi et al., 2013): the expectation is \(M_{\rm NFW}(r<r_{1/2})=1.2\times 10^{9}M_{\odot}\), orders of magnitude higher than the dynamical mass of DF4. More recent works on the satellite stellar mass - halo mass relation (Nadler et al., 2020; Danieli et al., 2022) predict even higher halo masses at this stellar mass, exacerbating the discrepancy.
DF4 is an outlier not only in the stellar mass - halo mass relation, but also in the GC system - halo mass relation. Massive galaxies follow the simple linear relationship \(M_{h}=5\times 10^{9}M_{\odot}\,N_{GC}\), which can also be expressed in the GC mass instead of their number (Spitler and Forbes, 2009; Forbes et al., 2018; Burkert and Forbes, 2020). In the dwarf galaxy regime (\(10^{8}<M_{h}<10^{10}M_{\odot}\)), this relation only applies in an average sense because the predicted number of GCs is small and the nature of GC formation is stochastic. Both DF2 and DF4 are consistent with having no dark matter based on the kinematics of their stars and GC system, but they each host \(19^{+9}_{-5}\) GCs (Shen et al., 2021) which are overmassive by a factor of 4. Using the canonical relation, and multiplying by a factor of four to account for the high mass of individual clusters, we find an expected halo mass of \(M_{h}=4^{+2}_{-1}\times 10^{11}M_{\odot}\). For an NFW halo with this mass, the dark matter mass enclosed within \(R_{1/2}\) is \(\rm{5}^{+3}_{-2}\times 10^{9}M_{\odot}\), between 10 and 30 times higher than our \(3-\sigma\) upper limit on the dynamical mass of DF4.
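The central value quoted above follows directly from the canonical relation and the observed GC count:

\[M_{h}\approx 4\times\left(5\times 10^{9}\,M_{\odot}\right)\times N_{GC}=4\times 5\times 10^{9}\times 19\,M_{\odot}\approx 3.8\times 10^{11}\,M_{\odot}\sim 4\times 10^{11}\,M_{\odot}.\]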
Three factors contribute to the outlier status of DF2 and DF4: they have less dark matter than expected for their stellar mass, more GCs than expected for their stellar mass, and a higher mass of individual clusters than other galaxies. The outlier status in the GC system - halo mass relation is a different way to express the potentially exotic origin of the galaxies. The GCs in DF2 and DF4 do not follow the near-universal GC luminosity function (Shen et al., 2021), but they are remarkably consistent in color (van Dokkum et al., 2022). If these GCs did not follow the canonical GC formation or hierarchical accretion, then the GC mass - halo mass relation may not apply here.
Interestingly, the fiducial dispersion is in some tension with cored dark matter profiles, which might result from extreme tidal stripping. Carleton et al. (2019) predict a velocity dispersion of \(\sim 10\,\rm{km\,s^{-1}}\) at \(R_{e}=1.6\,\rm{kpc}\), \(1.5\sigma\) higher than the fiducial dispersion. It is clearly inconsistent with the MOND prediction of \(\geq 12.5\,\rm{km\,s^{-1}}\)(Muller et al., 2019). This is a strict lower limit in MOND, as it corresponds to the maximum external gravitational field of NGC1052.
It is difficult to improve the accuracy further. Systematic uncertainties in the contribution from intrinsic stellar broadening, as well as the stellar initial mass function, are at the same level as the random uncertainty. A more fruitful approach may be to expand the sample. In the "bullet dwarf" model there are several other candidates for dark matter-free objects in the NGC1052 group. The kinematics of these galaxies will provide further constraints on formation scenarios and the nature of dark matter-deficient galaxies.
We thank Imad Pasha and Michael Keim for their help during the Keck observations. Support from STScI grant HST-GO-15695 is gratefully acknowledged. HST(ACS), Keck(KCWI) astropy (Astropy Collaboration et al., 2013, 2018, 2022)
## Appendix A A Small Revision to the Trgb Magnitude of NGC1052-DF2
Along with the velocity dispersions of both galaxies, the line-of-sight distance between DF2 and DF4 is an important constraint on models for their formation. Here we report a small correction to the published distance. Shen et al. (2021) used 40 orbits of HST data to find the Tip-of-the-Red-Giant-Branch (TRGB) distance to NGC1052-DF2. The 2021 analysis found a TRGB magnitude of \(m_{TRGB,F814W}=27.52\) mag, corresponding to an absolute distance of 22.1 Mpc and a distance between DF2 and DF4 of \(2.1\pm 0.5\,\rm{Mpc}\).
In the original 2021 data analysis, two out of the 20 orbits in each filter were misaligned with the rest. This error occurred during the tweakreg routine of the AstroDrizzle package, which identifies stars in each raw image and finds the best shift to align them before the drizzle combination process. Due to the low surface brightness of our target galaxy, tweakreg failed to align two out of 20 orbits in each filter. Due to the offset (around 10 pixels), these two orbits were effectively rejected during the final drizzle process and did not leave artifacts in the combined image. We found this error because the weight map showed pixels preferentially rejected at the location of bright objects like globular clusters. This misalignment also affected the Dolphot photometry because it relied on the WCS header from each flc file. Since tweakreg failed for two orbits and the shift was larger than the tolerance of Dolphot, Dolphot missed the flux of all the stars in the misaligned images. Around 10% of our data were misaligned, and it should lead to around 0.1 mag underestimation of the TRGB location.
We re-analyzed the HST data by dropping the two misaligned orbits from the Dolphot input, leaving everything else the same. From a simple edge detection of the RGB luminosity function (shown in the left panel of Figure 5), we revise the observed TRGB magnitude from 27.52 mag to 27.43 mag. The 0.1 mag shift we measure can be explained by the fraction of orbits missed.
The re-run of 180,000 artificial stars shows that the remaining 18 orbits are still deep enough to reach below the TRGB, so we proceed with re-running the forward modeling analysis. Our forward modeling code starts with an intrinsic luminosity function and takes into account the photometric bias and scatter measured from the artificial stars. Compared to the 2021 value, the new TRGB magnitude is brighter, leading to lower photometric bias levels. This partially compensates for the change in the observed TRGB magnitude. From the forward modeling, we find the intrinsic TRGB magnitude of DF2 to be \(m_{\rm F814W,TRGB}=27.61\) mag, which is 0.06 mag brighter than the 2021 measurement.
The revised distance we measure from the forward modeling is \(21.7\,\rm Mpc\) (from \(22.1\,\rm Mpc\)). The revised distance between DF2 and DF4 is \(1.7\pm 0.5\,\rm Mpc\).
|
2309.09854 | Possible large CP violation in charmed $Λ_b$ decays | We propose that the cascade decay $\Lambda_b \to D(\to K^+\pi^-) N(\to
p\pi^-)$ may serve as the discovery channel for baryonic CP violation. This
decay chain is contributed by, dominantly, the amplitudes with the intermediate
$D$ state as $D^0$ or $\bar{D}^0$. The large weak phase between the two kinds
of amplitudes suggests the possibility of significant CP violation. While the
presence of undetermined strong phases may complicate the dependence of CP
asymmetry, our phenomenological analysis demonstrates that CP violation remains
prominent across a broad range of strong phases. The mechanism also applies to
similar decay modes such as $\Lambda_b \rightarrow D(\rightarrow K^+ K^-)
\Lambda$. Considering the anticipated luminosity of LHCb, we conclude that
these decay channels offer a promising opportunity to uncover CP violation in
the baryon sector. | Yin-Fa Shen, Jian-Peng Wang, Qin Qin | 2023-09-18T15:15:08Z | http://arxiv.org/abs/2309.09854v2 | # Possible large CP violation in charmed \(\Lambda_{b}\) decays
###### Abstract
We propose that the cascade decay \(\Lambda_{b}\to D(\to K^{+}\pi^{-})N(\to p\pi^{-})\) may serve as the discovery channel for baryonic CP violation. This decay chain is dominated by the amplitudes with the intermediate \(D\) state being \(D^{0}\) or \(\bar{D}^{0}\). The large weak phase between the two kinds of amplitudes suggests the possibility of significant CP violation. While the presence of undetermined strong phases may complicate the dependence of CP asymmetry, our phenomenological analysis demonstrates that CP violation remains prominent across a broad range of strong phases. Considering the anticipated luminosity of LHCb, we conclude that this decay channel offers a promising opportunity to uncover CP violation in the baryon sector.
_Introduction.--_In modern particle physics, CP violation has gained increasing significance, particularly as one of the three Sakharov criteria for explaining the baryon asymmetry in the universe [1]. While the Cabibbo-Kobayashi-Maskawa (CKM) mechanism [2; 3] in the standard model naturally incorporates CP violation, its magnitude is insufficient to fully account for the observed baryon asymmetry, which remains one of the most intriguing puzzles in contemporary physics [4; 5]. It is worth noting that CP violation has been extensively studied in mesonic systems [6; 7; 8; 9; 10; 11; 12; 13; 14], but has yet to be examined in any baryonic system. Therefore, given its closer connection to the baryon asymmetry in the universe, the investigation of CP violation in the baryon sector remains a critical task in experimental research.
Many investigations have been conducted to search for CP violation in baryon decays, as evidenced by various studies [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. These searches have introduced several intriguing and innovative techniques. For instance, the implementation of quantum entanglement in baryon-antibaryon pairs [17; 18] provides opportunities to measure CP violation observables independent of strong phases. Additionally, non-traditional observables have been systematically developed and explored. Notably, the investigation of CP violation induced by time-reversal-odd correlations [26; 27; 28; 29; 30; 31] provides well-defined and complementary observables for experimental CP violation searches.
While CP violation in the baryon sector still requires further exploration in both experimental and theoretical domains, there have been encouraging pieces of evidence of its presence in \(\Lambda_{b}\) baryon decays [19]. Moreover, the LHCb experiment has not only successfully generated and collected unprecedented data on \(b\overline{b}\)-pairs but also has plans to increase its luminosity in the near future [32; 33; 34; 35]. Therefore, there is considerable optimism for the eventual discovery of CP violation in \(b\)-baryons at the LHCb.
In this study, we propose a potential decay channel for the discovery of baryonic CP violation: \(\Lambda_{b}\to D(\to K^{+}\pi^{-})N(\to p\pi^{-})\), where \(D\) represents a superposition of \(D^{0}\) and \(\bar{D}^{0}\), and \(N\) can correspond to any excited state of the neutron. Specifically, we focus on \(N(1440)\) and \(N(1520)\) as examples in this investigation, which possess spin-1/2 and -3/2, respectively. Since the CP violation in this decay arises from the interference between \(D^{0}\)- and \(\bar{D}^{0}\)-contributing amplitudes with a significant weak phase difference, we anticipate a substantial level of CP violation. Furthermore, based on the previous measurements by LHCb [36], we estimate that approximately \(\mathcal{O}(10^{2})\) events can be collected after the RUN 1-2 phase of LHCb, with even more expected during RUN 3-4. This provides a promising opportunity for the experimental discovery of baryonic CP violation.
In the subsequent sections of this paper, we will begin by deriving the expressions for the CP asymmetry in the decay chain \(\Lambda_{b}\to D(\to K^{+}\pi^{-})N(\to p\pi^{-})\). These expressions, formulated within the framework of helicity amplitudes [37], will illustrate the dependence of CP asymmetry on various strong phases. Subsequently, we will conduct phenomenological analyses to demonstrate that CP violation remains significant over a wide range of strong phases. Based on these compelling findings, we strongly recommend that LHCb investigates the CP violation in this specific decay channel.
_Theoretical Setup.--_In this section, we investigate the CP violation of the decay chain \(\Lambda_{b}\to D(\to K^{+}\pi^{-})N(\to p\pi^{-})\), as depicted in FIG. 1. It's worth noting that the intermediate \(D\) meson can manifest as either \(D^{0}\) or \(\bar{D}^{0}\), and the baryon \(N\) denotes excited states of the neutron contributing to the channel depending on the specific \(p\pi^{-}\) invariant mass. For a particular \(N\), two decay paths contribute to the desired final state, namely, \(\Lambda_{b}\to\bar{D}^{0}(\to K^{+}\pi^{-})N\) and \(\Lambda_{b}\to D^{0}(\to K^{+}\pi^{-})N\). The mixing effect of the neutral \(D\) meson is neglected in our
analysis due to its high suppression [38; 39].
The relation between the decay amplitudes of \(\Lambda_{b}\) and \(D\) can be parameterized as \(\left\langle K^{+}\pi^{-}\middle|D^{0}\right\rangle=-r_{D}e^{-i\delta_{D}}\left\langle K ^{+}\pi^{-}\middle|\bar{D}^{0}\right\rangle\) and \(\left\langle\bar{D}^{0}N\middle|\Lambda_{b}\right\rangle=r_{B}e^{i(\delta_{B}+ \omega)}\)\(\left\langle D^{0}N\middle|\Lambda_{b}\right\rangle\), respectively. The parameters \(r_{D}\) and \(r_{B}\) are the relevant magnitude ratios, \(\delta_{D}\) and \(\delta_{B}\) are the relevant strong phases, \(\omega\) is the weak phase between the \(\Lambda_{b}\) decays, and the weak phase between the \(D\) decays is neglected. As multiple \(N\) states and various partial-wave amplitudes will contribute to \(\Lambda_{b}\to DN\) decays, a series of \(r_{B}\)'s and \(\delta_{B}\)'s will be introduced in the subsequent calculations, as listed in TABLE 1. Notably, both \(r_{B}\) and \(r_{D}\) are of the order \(\mathcal{O}(10^{-2})\), signifying that the two decay paths have comparable amplitude magnitudes and therefore result in considerable interference effects. In addition, the weak phase is substantial with \(\sin\omega\approx 0.9\), suggesting the potential for significant CP violation effect.
In practical analyses, the consideration of \(N\)-baryon resonances becomes inevitable due to the presence of the \(p\pi^{-}\) final state. Furthermore, these resonance peaks are closely spaced, making them almost indistinguishable in experiments. To address this point, we take the \(N\)-baryon as a superposition of \(N(1440)\) and \(N(1520)\) for an illustrative example to investigate the complete decay chain \(\Lambda_{b}\to D(\to K^{+}\pi^{-})N(\to p\pi^{-})\). For the spin-1/2 \(N(1440)\) resonance, the angular momentum conservation allows \(\Lambda_{b}\to DN(1440)\) to occur through both the \(S\)-wave and \(P\)-wave, characterized by amplitudes denoted as \(S_{1/2}\) and \(P_{1/2}\) in the subsequent analysis. In the case of the spin-3/2 \(N(1520)\) resonance, the \(\Lambda_{b}\to DN(1520)\) process can occur through both the \(P\)-wave and \(D\)-wave, with amplitudes represented as \(P_{3/2}\) and \(D_{3/2}\). Since the strong interaction conserves both parity (P) and charge-parity (CP), each secondary \(N\to p\pi^{-}\) decay amplitude can be encoded by one single amplitude. Therefore, they can be directly absorbed into the aforementioned \(S,P,D\) amplitudes, inducing no new parameters.
Based on the formulae of the helicity framework [37], we can derive the differential decay width of \(\mathrm{d}\Gamma/\mathrm{d}\cos\theta\) in terms of the partial-wave amplitudes, where \(\theta\) is the angle between the proton motion direction in the \(N\) rest frame and the \(N\) motion direction in the \(\Lambda_{b}\) rest frame, as illustrated in FIG. 1. With also the differential decay width \(\mathrm{d}\bar{\Gamma}/\mathrm{d}\cos\theta\) of the charge conjugate process, we define the \(\theta\)-dependent CP asymmetry as
\[A_{\mathrm{CP}}\equiv\frac{\mathrm{d}\Gamma/\mathrm{d}\cos\theta-\mathrm{d} \bar{\Gamma}/\mathrm{d}\cos\theta}{\mathrm{d}\Gamma/\mathrm{d}\cos\theta+ \mathrm{d}\bar{\Gamma}/\mathrm{d}\cos\theta}\equiv\frac{\mathcal{N}(\theta)}{ \mathcal{D}(\theta)}\;, \tag{1}\]
where different polarization states in the final state are summed and in the initial state are averaged.
The numerator, denoted as \(\mathcal{N}(\theta)\), reads
\[\mathcal{N}(\theta)= \left|S_{1/2}\right|^{2}\mathcal{N}_{d}(r_{B1},\delta_{B1})+ \left|P_{1/2}\right|^{2}\mathcal{N}_{d}(r_{B3},\delta_{B3}) \tag{2}\] \[+\left(1+3\cos^{2}\theta\right)\left|P_{3/2}\right|^{2}\mathcal{N }_{d}(r_{B2},\delta_{B2})\] \[+\left(1+3\cos^{2}\theta\right)\left|D_{3/2}\right|^{2}\mathcal{N }_{d}(r_{B4},\delta_{B4})\] \[+2\cos\theta\left|S_{1/2}P_{3/2}\right|\mathcal{N}_{i}(r_{B1},r_ {B2},\delta_{B1},\delta_{B2},\delta_{PS})\] \[+2\cos\theta\left|P_{1/2}D_{3/2}\right|\mathcal{N}_{i}(r_{B3},r_ {B4},\delta_{B3},\delta_{B4},\delta_{DP}),\]
where the \(\mathcal{N}_{d}\) and \(\mathcal{N}_{i}\) functions are defined by
\[\mathcal{N}_{d}(r_{i},\delta_{i}) =4r_{i}r_{D}\sin\omega\sin\left(\delta_{D}+\delta_{i}\right), \tag{3}\] \[\mathcal{N}_{i}(r_{1},r_{2},\delta_{1},\delta_{2},\delta_{3}) =4r_{D}\sin\omega[r_{1}\sin\left(\delta_{D}+\delta_{1}-\delta_{3}\right)\] \[\quad+r_{2}\sin\left(\delta_{D}+\delta_{2}+\delta_{3}\right)].\]
The newly introduced ratios \(r_{Bi}\) and strong phases \(\delta_{Bi}\) parameterize the relations between different partial-wave contributions to the \(\Lambda_{b}\) decay amplitudes \(\left\langle\bar{D}^{0}N\middle|\Lambda_{b}\right\rangle_{i}=r_{Bi}e^{i(\delta_{ Bi}+\omega)}\left\langle D^{0}N\middle|\Lambda_{b}\right\rangle_{i}\), with \(i=1,2,3,4\) corresponding to the \(S_{1/2}\), \(P_{3/2}\), \(P_{1/2}\) and \(D_{3/2}\), respectively. The denominator, denoted as \(\mathcal{D}(\theta)\), is defined as
\[\mathcal{D}(\theta)= \left|S_{1/2}\right|^{2}\mathcal{D}_{d}(r_{B1},\delta_{B1})+\left| P_{1/2}\right|^{2}\mathcal{D}_{d}(r_{B3},\delta_{B3}) \tag{4}\] \[+\left(3\cos^{2}\theta+1\right)\left|P_{3/2}\right|^{2}\mathcal{ D}_{d}(r_{B2},\delta_{B2})\] \[+\left(3\cos^{2}\theta+1\right)\left|D_{3/2}\right|^{2}\mathcal{ D}_{d}(r_{B4},\delta_{B4})\] \[+2\cos\theta\left|S_{1/2}P_{3/2}\right|\mathcal{D}_{i}(r_{B1},r_ {B2},\delta_{B1},\delta_{B2},\delta_{PS})\] \[+2\cos\theta\left|P_{1/2}D_{3/2}\right|\mathcal{D}_{i}(r_{B3},r_ {B4},\delta_{B3},\delta_{B4},\delta_{DP}),\]
where the \(\mathcal{D}_{d}\) and \(\mathcal{D}_{i}\) functions are defined by
\[\mathcal{D}_{d}(r_{i},\delta_{i}) =2r_{i}^{2}+2r_{D}^{2}-4r_{i}r_{D}\cos\omega\cos\left(\delta_{i}+ \delta_{D}\right), \tag{5}\] \[\mathcal{D}_{i}(r_{1},r_{2},\delta_{1},\delta_{2},\delta_{3})=4r_{ 1}r_{2}\cos\left(\delta_{1}-\delta_{2}-\delta_{3}\right)\] \[\quad+4r_{D}^{2}\cos\delta_{3}-4r_{D}\cos\omega[r_{1}\cos\left( \delta_{D}+\delta_{1}-\delta_{3}\right)\] \[\quad+r_{2}\cos\left(\delta_{D}+\delta_{2}+\delta_{3}\right)].\]
With unpolarized initial states, interferences between different partial-wave amplitudes of the same \(N\)-baryon vanish, eliminating terms proportional to \(\mathrm{Re}[S_{1/2}P_{1/2}^{\star}]\) and \(\mathrm{Re}[P_{3/2}D_{3/2}^{\star}]\), which are analogous to the Lee-Yang parameter \(\alpha\)[40]. Consequently, only interferences between partial waves associated with different \(N\)-baryons survive. To formulate these, we introduce \(\delta_{PS}\) as the strong phase between the \(P_{3/2}\) and \(S_{1/2}\) amplitudes, and \(\delta_{DP}\) as the strong phase between the \(D_{3/2}\) and \(P_{1/2}\) amplitudes, namely,
\(\delta_{PS}=\arg[\left\langle D^{0}N\big{|}\Lambda_{b}\right\rangle_{2}/\left\langle D^{0}N\big{|}\Lambda_{b}\right\rangle_{1}]\) and \(\delta_{DP}=\arg[\left\langle D^{0}N\big{|}\Lambda_{b}\right\rangle_{4}/\left\langle D^{0}N\big{|}\Lambda_{b}\right\rangle_{3}]\). It is evident that \(\mathcal{N}_{d}\)'s indicate the CP asymmetries of each independent partial wave, while \(\mathcal{N}_{i}\)'s reflect the CP violation effects induced by interferences between different partial waves.
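For concreteness, the building blocks of Eqs. (1)-(5) can be coded up directly; the sketch below uses the measured inputs of TABLE 1, while the amplitude moduli, the ratios \(r_{Bi}\) and all strong phases are free parameters whose example values would be purely illustrative.

```python
# Sketch implementing Eqs. (1)-(5); r_D, delta_D, omega follow TABLE 1.
import numpy as np

r_D, delta_D = 5.86e-2, np.deg2rad(7.2)
omega = np.deg2rad(114.4)

def N_d(r, d):
    return 4 * r * r_D * np.sin(omega) * np.sin(delta_D + d)

def D_d(r, d):
    return 2 * r**2 + 2 * r_D**2 - 4 * r * r_D * np.cos(omega) * np.cos(d + delta_D)

def N_i(r1, r2, d1, d2, d3):
    return 4 * r_D * np.sin(omega) * (r1 * np.sin(delta_D + d1 - d3)
                                      + r2 * np.sin(delta_D + d2 + d3))

def D_i(r1, r2, d1, d2, d3):
    return (4 * r1 * r2 * np.cos(d1 - d2 - d3) + 4 * r_D**2 * np.cos(d3)
            - 4 * r_D * np.cos(omega) * (r1 * np.cos(delta_D + d1 - d3)
                                         + r2 * np.cos(delta_D + d2 + d3)))

def A_CP(cos_t, amps, rB, dB, d_PS, d_DP):
    S, P32, P12, D32 = amps          # |S_1/2|, |P_3/2|, |P_1/2|, |D_3/2|
    r1, r2, r3, r4 = rB
    d1, d2, d3, d4 = dB
    ang = 1 + 3 * cos_t**2
    num = (S**2 * N_d(r1, d1) + P12**2 * N_d(r3, d3)
           + ang * (P32**2 * N_d(r2, d2) + D32**2 * N_d(r4, d4))
           + 2 * cos_t * (S * P32 * N_i(r1, r2, d1, d2, d_PS)
                          + P12 * D32 * N_i(r3, r4, d3, d4, d_DP)))
    den = (S**2 * D_d(r1, d1) + P12**2 * D_d(r3, d3)
           + ang * (P32**2 * D_d(r2, d2) + D32**2 * D_d(r4, d4))
           + 2 * cos_t * (S * P32 * D_i(r1, r2, d1, d2, d_PS)
                          + P12 * D32 * D_i(r3, r4, d3, d4, d_DP)))
    return num / den
```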
_Phenomenology.--_ In the phenomenological analysis, we introduce two observables of interest. The first observable, denoted as \(A_{1}\), is defined as
\[A_{1}\equiv\frac{\int_{-1}^{1}\mathcal{N}(\theta)\;\mathrm{d}\cos\theta}{\int _{-1}^{1}\mathcal{D}(\theta)\;\mathrm{d}\cos\theta}\equiv\frac{\mathcal{N}_ {1}}{\mathcal{D}_{1}}, \tag{6}\]
where the numerator \(\mathcal{N}_{1}\) and denominator \(\mathcal{D}_{1}\) are calculated to be
\[\mathcal{N}_{1} =\big{|}S_{1/2}\big{|}^{2}\mathcal{N}_{d}(r_{B1},\delta_{B1})+ \big{|}P_{1/2}\big{|}^{2}\mathcal{N}_{d}(r_{B3},\delta_{B3}) \tag{7}\] \[\quad+2\big{|}P_{3/2}\big{|}^{2}\mathcal{N}_{d}(r_{B2},\delta_{B 2})+2\big{|}D_{3/2}\big{|}^{2}\mathcal{N}_{d}(r_{B4},\delta_{B4}),\] \[\mathcal{D}_{1} =\big{|}S_{1/2}\big{|}^{2}\mathcal{D}_{d}(r_{B1},\delta_{B1})+ \big{|}P_{1/2}\big{|}^{2}\mathcal{D}_{d}(r_{B3},\delta_{B3})\] \[\quad+2\big{|}P_{3/2}\big{|}^{2}\mathcal{D}_{d}(r_{B2},\delta_{B 2})+2\big{|}D_{3/2}\big{|}^{2}\mathcal{D}_{d}(r_{B4},\delta_{B4}).\]
It can be noticed that \(A_{1}\) is expressed as the sum of contributions from four independent partial waves, regardless of the strong phases \(\delta_{PS}\) and \(\delta_{DP}\). This suggests that any interferences between different partial waves are entirely eliminated. The second observable, denoted as \(A_{2}\), is designed to retain the interference of distinct partial waves. It is defined by
\[A_{2}\equiv\frac{\int_{-1}^{1}\mathrm{sgn}[\cos\theta]\cdot\mathcal{N}(\theta )\;\mathrm{d}\cos\theta}{\int_{-1}^{1}\mathcal{D}(\theta)\;\mathrm{d}\cos \theta}\equiv\frac{\mathcal{N}_{2}}{\mathcal{D}_{1}}, \tag{8}\]
analogous to the CP violation induced by the forward-backward asymmetry. Here, \(\mathrm{sgn}[x]\) represents the sign function, and \(\mathcal{N}_{2}\) is given by
\[\mathcal{N}_{2} =\big{|}S_{1/2}\big{|}\,\big{|}P_{3/2}\big{|}\,\mathcal{N}_{i}(r _{B1},r_{B2},\delta_{B1},\delta_{B2},\delta_{PS}) \tag{9}\] \[\quad+\big{|}P_{1/2}\big{|}\,\big{|}D_{3/2}\big{|}\,\mathcal{N}_{i }(r_{B3},r_{B4},\delta_{B3},\delta_{B4},\delta_{DP}).\]
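Using the same building blocks, the two integrated observables reduce to the closed forms of Eqs. (7) and (9); a sketch is shown below, where the example call mimics a Case-1-like benchmark (single \(N(1440)\) resonance, \(|P_{3/2}|=|D_{3/2}|=0\)) with an illustrative common strong phase.

```python
# Sketch of A_1 and A_2 from Eqs. (6)-(9), reusing N_d, D_d, N_i defined above.
import numpy as np

def A1_A2(amps, rB, dB, d_PS, d_DP):
    S, P32, P12, D32 = amps
    r1, r2, r3, r4 = rB
    d1, d2, d3, d4 = dB
    N1 = (S**2 * N_d(r1, d1) + P12**2 * N_d(r3, d3)
          + 2 * P32**2 * N_d(r2, d2) + 2 * D32**2 * N_d(r4, d4))
    D1 = (S**2 * D_d(r1, d1) + P12**2 * D_d(r3, d3)
          + 2 * P32**2 * D_d(r2, d2) + 2 * D32**2 * D_d(r4, d4))
    N2 = (S * P32 * N_i(r1, r2, d1, d2, d_PS)
          + P12 * D32 * N_i(r3, r4, d3, d4, d_DP))
    return N1 / D1, N2 / D1

r0, delta = 2.0e-2, np.deg2rad(30.0)       # illustrative strong phase
print(A1_A2(amps=(1.0, 0.0, 1.0, 0.0),     # |P_3/2| = |D_3/2| = 0 (Case 1)
            rB=(r0, r0, r0, r0), dB=(delta,) * 4,
            d_PS=0.0, d_DP=0.0))           # A_2 vanishes for a single resonance
```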
It can be observed from the formula (2) that six unconstrained strong phases and various ratios of amplitudes are involved in the prediction for the CP asymmetries. Consequently, conducting a comprehensive analysis presents a formidable challenge. To streamline our investigation, we leave these unknown phases and ratios as free parameters, and show that, in most areas of the parameter space, large CP violation is expected. In practice, we consider three cases, in which certain relations between the ratios and phases are assumed. We initially consider **Case 1** with a single resonance of the \(N\)-baryon, and allow for the simultaneous contributions from \(N(1440)\) and \(N(1520)\) in **Case 2** and **Case 3**. More detailed setups are described below.
Table 1: The input parameters and their values.

| Parameter | Value |
| --- | --- |
| \(r_{0}\equiv\lvert V_{ub}V_{cd}^{\prime}/V_{cb}V_{ud}^{\prime}\rvert\) | \((2.0\pm 0.1)\times 10^{-2}\) [38] |
| \(\omega=\arg\left(V_{ub}V_{cd}/V_{cb}V_{ud}^{\prime}\right)\) | \((114.4\pm 1.5)^{\circ}\) [38] |
| \(r_{D}\) | \((5.86\pm 0.02)\times 10^{-2}\) [38] |
| \(\delta_{D}\) | \(\left(7.2^{+7.9}_{-9.2}\right)^{\circ}\) [39] |

**Case 1.** We initiate our investigation by examining a single \(N\)-baryon resonance, specifically when \(|S_{1/2}|=|P_{1/2}|\) and \(|P_{3/2}|=|D_{3/2}|=0\). Under these conditions, the asymmetry \(A_{2}\) remains precisely zero. In addition, we set both involved strong phases to a common value, denoted as \(\delta_{B1}=\delta_{B3}=\delta\). For the amplitude ratios, we consider three distinct benchmarks: (a) \(r_{B1}=r_{B3}=r_{0}/2\); (b) \(r_{B1}=r_{B3}=r_{0}\); and (c) \(r_{B1}=r_{B3}=2r_{0}\). The results for the CP asymmetry \(A_{1,2}\) as functions of \(\delta\) are depicted in FIG. 2.
Across all three benchmarks, the CP asymmetry \(A_{1}\) generally reaches magnitudes of \(\mathcal{O}(10\%)\), with the exception of narrow regions of the strong phases. Notably, in the third benchmark, the peak value of \(A_{1}\) can reach an impressive magnitude of approximately \(90\%\).
**Case 2.** Subsequently, we consider both \(N(1440)\) and \(N(1520)\) contributing with \(|S_{1/2}|=|P_{1/2}|=|P_{3/2}|=|D_{3/2}|\). Furthermore, we suppose that \(\delta_{B1}=\delta_{B2}=\delta_{B3}=\delta_{B4}\) and \(\delta_{PS}=\delta_{DP}\), allowing for two independent strong phases, and select three benchmarks for \(\{r_{Bi}\}\): (a) \(r_{B1}=r_{B3}=r_{0}/2,r_{B2}=r_{B4}=2r_{0}\); (b) \(r_{B1}=r_{B2}=r_{B3}=r_{B4}=r_{0}\); (c) \(r_{B1}=r_{B3}=2r_{0},r_{B2}=r_{B4}=r_{0}/2\). The dependencies of \(A_{1,2}\) on the two strong phases are displayed in FIG. 3 and FIG. 4, respectively. Notably, \(A_{1}\) remains completely unaffected by the strong phases between different partial waves, yielding results similar to those presented in FIG. 2.
The behavior of \(A_{1}\) closely resembles that observed in **Case 1**. Because of the interferences between the \(N(1440)\) and \(N(1520)\) amplitudes, \(A_{2}\) is also nonzero and significant across a majority of the parameter space defined by the strong phases. Therefore, \(A_{2}\) serves as an additional observable to complement \(A_{1}\), particularly in situations where \(A_{1}\) is small.
**Case 3.** Finally, we explore a scenario identical to **Case 2** except for setting the strong phases set as \(\delta_{B1}=\delta_{B2}=-\delta_{B3}=-\delta_{B4}\), and \(\delta_{PS}=\delta_{DP}\). The dependencies of \(A_{1,2}\) on the two strong phases are displayed in FIG. 5 and FIG. 6, respectively.
In this case, the contributions to \(A_{1}\) from two different partial waves of resonances largely cancel each other out, resulting in relatively small typical values for \(A_{1}\). This emphasizes the significance of \(A_{2}\) as a complementary observable. With the exception of the second benchmark for the amplitude ratios, the \(A_{2}\) values are generally quite substantial across the majority of the parameter space defined by the strong phases.
Despite the various patterns depicted in FIG. 2 - FIG. 6, it is evident that the CP asymmetry can easily reach magnitudes of \(\mathcal{O}(10\%)\) across a wide range of strong phases in most scenarios. Furthermore, in specific regions, the CP asymmetry can even exceed \(50\%\). In cases where the direct CP asymmetry \(A_{1}\) is small, the complementary information provided by the sizable \(A_{2}\) offers additional insight into CP violation. Taking into account the relevant measurements from LHCb [36] and the decay width of this process (as indicated in (4)), we estimate that approximately \(\mathcal{O}(10^{2})\) events will be collected after the completion of RUN 1-2 at LHCb, with even more expected in RUN 3 and 4. Therefore, this channel presents a significant opportunity to uncover CP violation in the baryon sector.
In practice, our analysis does not encompass the entire contributions of both \(N(1440)\) and \(N(1520)\) due to their wide decay widths. Instead, our study applies to the phase space with a specific value of the \(p\pi^{-}\) invariant mass and the neighbor region. Interestingly, this situation presents an advantage as different \(p\pi^{-}\) invariant masses correspond to distinct strong phases, and it is highly probable that in certain regions, these strong phases would result in substantial CP asymmetries.
_Conclusion.--_ We have systematically investigated the CP violation in the \(\Lambda_{b}\to D(\to K^{+}\pi^{-})N(\to p\pi^{-})\) decay. The large weak phase difference between the contributing amplitudes from \(D^{0}\) and \(\bar{D}^{0}\) suggests the potential for significant CP violation in this process. While the CP asymmetry is influenced by various parameters, including strong phases and amplitude ratios, our phenomenological analysis demonstrates that CP violation remains substantial across a wide range of parameter values. Considering the luminosity and reconstruction efficiency of the LHCb experiment, we estimate that the reconstruction of approximately \(\mathcal{O}(10^{2})\) events is feasible during the RUN 1-2 phase, with even more events expected in RUN 3-4. Based on these findings, we strongly recommend that the LHCb collaboration prioritizes the investigation of baryonic CP violation using the \(\Lambda_{b}\to D(\to K^{+}\pi^{-})N(\to p\pi^{-})\) decay channel.
_Acknowledgement.--_ The authors are grateful to Bo Fang, Liang Sun, and Fu-Sheng Yu for their insightful discussions pertaining to both the experimental and theoretical aspects of this study. This work is supported in part by the National Natural Science Foundation of China under Grant Nos. 12005068, 12375086, and 11975112, and National Key Research and Development Program of China under Contract No. 2020YFA0406400. |
2306.17551 | Comparative study of subset selection methods for rapid prototyping of
3D object detection algorithms | Object detection in 3D is a crucial aspect in the context of autonomous
vehicles and drones. However, prototyping detection algorithms is
time-consuming and costly in terms of energy and environmental impact. To
address these challenges, one can check the effectiveness of different models
by training on a subset of the original training set. In this paper, we present
a comparison of three algorithms for selecting such a subset - random sampling,
random per class sampling, and our proposed MONSPeC (Maximum Object Number
Sampling per Class). We provide empirical evidence for the superior
effectiveness of random per class sampling and MONSPeC over basic random
sampling. By replacing random sampling with one of the more efficient
algorithms, the results obtained on the subset are more likely to transfer to
the results on the entire dataset. The code is available at:
https://github.com/vision-agh/monspec. | Konrad Lis, Tomasz Kryjak | 2023-06-30T11:09:20Z | http://arxiv.org/abs/2306.17551v1 | Comparative study of subset selection methods for rapid prototyping of 3D object detection algorithms
###### Abstract
Object detection in 3D is a crucial aspect in the context of autonomous vehicles and drones. However, prototyping detection algorithms is time-consuming and costly in terms of energy and environmental impact. To address these challenges, one can check the effectiveness of different models by training on a subset of the original training set. In this paper, we present a comparison of three algorithms for selecting such a subset - _random sampling_, _random per class sampling_, and our proposed _MONSPeC_ (Maximum Object Number Sampling per Class). We provide empirical evidence for the superior effectiveness of random per class sampling and MONSPeC over basic random sampling. By replacing random sampling with one of the more efficient algorithms, the results obtained on the subset are more likely to transfer to the results on the entire dataset. The code is available at: _[https://github.com/vision-agh/monspec_](https://github.com/vision-agh/monspec_).
LiDAR, point cloud, object detection, PointPillars, CenterPoint, subset selection, MONSPeC, random per class sampling
## I Introduction
Advanced Driver Assistance Systems (ADAS), Autonomous Vehicles (AVs), and Unmanned Aerial Vehicles (UAVs) rely on object detection for obstacle avoidance, traffic sign recognition, and object tracking. Vision stream processing is the most common approach, but LiDAR is sometimes preferred due to its resilience to lighting conditions and accurate 3D mapping. LiDAR is used in autonomous vehicles like Waymo and Mercedes S-Class. Working with LiDAR's 3D point clouds differs from vision systems due to the data format, which uses polar coordinates and includes reflection intensity information.
Object detection systems for autonomous vehicles commonly use three datasets: KITTI, Waymo Open Dataset, and NuScenes. The KITTI Vision Benchmark Suite (2012) [1] is the most widely used dataset, containing a training set of \(7481\) images, along with the corresponding point clouds and annotated objects. KITTI also maintains a ranking of object detection methods.
The Waymo Open Dataset (2019) [2] contains \(1950\) sequences, corresponding to \(200000\) frames, but only \(1200\) sequences are annotated. However, it includes \(12.6\) million objects. Waymo holds challenges in several topics each year, including 3D object detection and motion prediction.
NuScenes [3] includes \(1000\) sequences, which comprise approximately \(1.4\) million images, \(390\) thousand LiDAR scans, and \(1.4\) million annotated objects. It also maintains a ranking of object detection methods. From 390k LiDAR scans only 40k are annotated - 28310 are used for training, 6019 for validation and 6008 for testing. Object detection methods on nuScenes are evaluated using standard mAP (mean Average Precision) metric and a metric called NDS (nuScenes detection score). It includes mAP and several error measures, e.g. orientation error or scale error.
There are two main approaches to object detection in point clouds: classical methods and those based on deep convolutional neural networks (DCNN). While DCNN-based approaches often achieve state-of-the-art results, they typically come with high computational and memory requirements.
Improving 3D LiDAR object detection involves academic and industrial research to enhance accuracy and real-time performance. This includes developing new architectures and optimizing hyperparameters using techniques like Neural Architecture Search and Hyperparameter Optimization, which require multiple learning processes. Therefore, prototyping detection algorithms, especially with large datasets and/or complex computational tasks, requires a lot of computation time. This leads to high energy consumption, costs, and environmental impact. One solution to this problem is to work on a subset of the training set instead of the entire set. This allows for computation time savings, which translates into reducing the aforementioned costs. However, careful selection of the subset is crucial to ensure comparable results to those obtained on the entire training set. This issue is the main focus of this study.
The main contributions of our work are:
* the proposal of MONSPeC and its comparison with two other algorithms for selecting a subset of the training set,
* statistical analysis of the solutions with respect to the amount of training data in the subsets and deviation from the expected distribution of the number of objects in each class,
* presentation of empirical evidence for the superior effectiveness of MONSPeC and another algorithm compared to completely random subset selection.
Experiments were conducted on the nuScenes dataset.
The remainder of this paper is organised as follows. In Section II we discuss two issues related to our work: DCNN approaches to object detection in LiDAR data and subset selection methods. Next, in Section III we elaborate on the motivation of our work, clarify the objective and present three subset selection algorithms. The results obtained are summarised in Section IV. The paper ends with a short summary with conclusions and a discussion of possible future work.
## II Related work
### _3D object detection based on LiDAR data_
LiDAR-based 3D detectors can be categorized into point-based, voxel-based, and hybrid methods based on the representations of point clouds. Point-based approaches handle the point cloud in its original form, without any structure imposed. To process the point cloud, these methods first subsample the data and then employ DNNs based on PointNet++ [4]. PointRCNN [5] is an example of such an approach. Voxel-based methods involve voxelizing point clouds and processing the resulting tensor of voxels using 2D/3D DCNNs, as seen in methods such as PointPillars [6] and VoxelNet [7]. Hybrid LiDAR-based 3D object detection methods combine elements from both voxel-based and point-based approaches. An example of such a method is PV-RCNN [8].
In this work, we perform experiments with two 3D object detection algorithms - PointPillars and CenterPoint [9] (version based on pillars).
#### II-A1 The PointPillars
The PointPillars network [6] is a voxel-based 3D object detection method that employs a pseudo-Bird's Eye View (BEV) map to replace the 3D convolutions with 2D convolutions, enabling end-to-end learning with less computational cost. The point cloud is divided into a set of "pillars", which are 3D cells in the XY plane. The first component of the network, the Pillar Feature Net (PFN), converts the point cloud into a sparse "pseudo-image". The second component is the Backbone, which is a 2D DCNN that processes the pseudo-image and extracts high-level features. The Backbone is composed of two subnets: the "top-down" subnet, which progressively reduces the dimension of the pseudo-image, and a "bottom-up" subnet that upsamples the intermediate feature maps and combines them into the final output map. Finally, the anchor-based Detection Head (Single-Shot Detector (SSD) network) performs object detection and regression of the 3D cuboids surrounding the objects on a 2D grid. The Non-Maximum Suppression (NMS) algorithm is then used to merge overlapping objects after inference.
#### II-A2 The CenterPoint
CenterPoint [9] is a 3D object detection method that uses either a voxel-based or a pillar-based representation. The input to CenterPoint is a point cloud from a LiDAR sensor. The voxel-based and pillar-based versions of CenterPoint differ in the type of backbone used. The former uses the voxel encoder and backbone from SECOND [10], the latter the pillar encoder and backbone from PointPillars. The output of the backbone is fed into the first-stage detection head. It consists of several regression maps - one represents object centres as 2D Gaussians on a heatmap, the others encode information such as sub-voxel location refinement, height above ground, 3D bounding box size and yaw rotation angle. For each detected object, the second stage extracts 5 feature vectors from the backbone feature map, which correspond to the predicted object centre and the centres of each of the 4 sides of the bounding box. The extracted feature vectors are concatenated and fed into an efficient MLP (MultiLayer Perceptron) which outputs a classification score and refines the bounding box.
### _Selecting dataset subset_
Machine learning algorithms typically require as much training data as possible to achieve the best performance. However, selecting a subset of the training set and training algorithms on it also has its applications. One of them is Active Learning (AL), where unlabeled samples are gradually selected for labeling, aiming to maximise the improvement in model performance. In the object detection task on images, there are many approaches to AL, such as [11] and [12]. They are based on an ensemble of models, collecting their results and adding a sample to the dataset if there is a significant difference between the results of the models. As for point cloud processing, existing AL approaches [13] do not focus directly on object detection tasks, but on semantic segmentation of point clouds. Other tasks where selecting a subset of the training set is useful include Neural Architecture Search (NAS) or Hyperparameter Optimisation (HO). NAS involves automatically selecting the best network architecture in terms of performance, subject to constraints usually imposed on computational complexity or related factors such as speed. HO is a broader term that also includes NAS. While NAS focuses on selecting hyperparameters directly related to network architecture, such as the number of layers or channels, HO focuses on selecting hyperparameters related to the entire training process (learning rate, weight decay, batch size and number of epochs). In these two tasks, working only on a subset of the training set can significantly speed up the search for the best model.
In the context of point cloud processing, there are few works related to NAS or HO - to our knowledge, the issue of NAS has only been addressed a few times, e.g. [14]. However, the authors do not mention anything about working on a subset of the training set. In the context of image processing, there are many more works. Most of them focus on the classification task, and a few of them use an approach based on working on a subset of the training set: [15]. The majority of algorithms for obtaining such a subset, as in AL, are based on detecting significant differences between the results of an ensemble of models. A work focusing on the object detection task is NAS-FCOS [16], in which the authors describe an efficient way to optimise the architecture for object detection. They use a "carefully chosen subset of MS-COCO", but the selection algorithm is only briefly described. Other papers, such as [17], specifically address dataset subset selection but are tailored for
image classification datasets and are not suitable for selecting samples in object detection.
## III Proposed experimental method
### _Motivation and objective_
#### III-A1 Motivation
One of the tasks we are involved in is the search for fast and efficient algorithms for 3D object detection in LiDAR sensor point clouds. Training the CenterPoint-Pillar network on the entire nuScenes took about 20h (using two Nvidia RTX 3080 GPUs). In contrast, using an oversampled dataset as in [18], the training took approximately 36h. When prototyping an algorithm by repeatedly changing the architecture manually or searching automatically with NAS methods, such a validation time for a single architecture is far too long. Longer computing times are associated with higher energy costs and higher environmental costs. Therefore, we started to look for a faster and less expensive way to compare different versions of the architectures. A fairly obvious direction is to train models on a subset of the original training set.
#### III-A2 Objective
The perfect subset of the training set for validating object detection architectures should meet two criteria:
1. Be as small as possible
2. If a model A trained on a subset performs better than model B trained on a subset, then model A trained on the whole set should also perform better.
It is challenging, if not impossible, to guarantee condition 2 in the general case. However, intuitively, the condition should be approximately fulfilled if the data distribution does not deviate too much from that of the entire dataset and there is as much training data (objects) as possible for each class in a given subset. For small subsets, the condition of maximising the number of objects is particularly important for sparse classes, so as to reduce overfitting. On the other hand, intuitively, the closer the detection performance of several architectures trained on a subset is to the performance obtained when training on the whole dataset, the better the subset reflects the training dataset and the higher the probability that condition 2 will be met. We therefore propose the following procedure for creating a subset:
1. Assume a subset size \(N\) and select \(N\) samples so as to maximise the number of objects for each class while not deviating too much from the distribution of the data across the dataset. Section III-B presents 3 algorithms to implement this step, each presenting a different trade-off between the number of objects and the distance from the original data distribution.
2. Validate a given subset by training several architectures on it and comparing their effectiveness with models trained on the whole set.
### _Subset selection algorithms_
Let us introduce the notations:
* \(N\) - the number of samples in a subset,
* \(n\) - the total number of objects in the subset,
* \(n_{k}\) - the number of objects of class \(k\) in the subset,
* \(p_{k}^{subset}\) - the frequency of occurrence of objects of class \(k\) in a subset,
* \(D\) - the number of samples in the entire dataset,
* \(d\) - the total number of objects in the entire dataset,
* \(d_{k}\) - the number of objects of class \(k\) in the entire dataset,
* \(p_{k}^{dataset}\) - the frequency of occurrence of objects of class \(k\) in the entire dataset,
* \(C\) - the number of classes.
Certainly, the following equations apply: \(n=\sum_{k=1}^{k=C}n_{k}\), \(p_{k}^{subset}=\frac{n_{k}}{n}\) and \(\sum_{k=1}^{k=C}p_{k}^{subset}=1\). Similarly: \(d=\sum_{k=1}^{k=C}d_{k}\), \(p_{k}^{dataset}=\frac{d_{k}}{d}\) and \(\sum_{k=1}^{k=C}p_{k}^{dataset}=1\).
The most basic algorithm for selecting a subset of the training set is a completely random selection of samples - _random sampling_. The expected value of the number of objects of class \(k\) in a subset is \(\bar{n}_{k}=\frac{N}{D}\times d_{k}\), and the expected value of the total number of objects is \(\bar{n}=\frac{N}{D}\times d\). Thus, the expected ratio of the number of objects of a class to the total number of objects is: \(\frac{\bar{n}_{k}}{\bar{n}}=\frac{N}{D}\times d_{k}\times\frac{D}{N}\times\frac{1}{d}=\frac{d_{k}}{d}\). Therefore, with high probability, the distribution of \(p_{k}^{subset}\) with this type of sampling will be close to the distribution of \(p_{k}^{dataset}\).
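As a reference point for the two class-aware strategies described next, a minimal sketch of this baseline is given below. It assumes (our assumption for illustration, not part of the released code) that the annotations have been pre-processed into a list in which element \(i\) is a dictionary mapping class name to the number of objects of that class in training sample \(i\).

```python
import random

def random_sampling(sample_class_counts, subset_size, seed=0):
    """Baseline: draw `subset_size` sample indices uniformly at random,
    without replacement, ignoring the class content of the samples."""
    rng = random.Random(seed)
    return rng.sample(range(len(sample_class_counts)), subset_size)
```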
Another algorithm we are considering is the random selection of \(\frac{N}{C}\) samples for each class separately - _random per class sampling_. It is inspired by _DS Sampling_ from [18], where it was used to balance the distribution of classes in the training set. The steps of the algorithm are as follows:
1. For each class, create a set \(idx_{k}\) which includes the indexes of training samples containing at least one object of that class.
2. From each set \(idx_{k}\), randomly select \(\frac{N}{C}\) samples.
It is worth noting that duplicate samples may appear in the resulting subset, as each sample may belong to several \(idx_{k}\) sets. Intuitively, the distribution of \(p_{k}^{subset}\) will deviate more from \(p_{k}^{dataset}\) than in the case of _random sampling_. However, here, on average, the number of objects will be higher. We will evaluate this assumption in Section IV-A.
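A sketch of this procedure, under the same assumed per-sample class-count representation as above (again our assumption, not the released implementation), could look as follows; the `min(...)` guard is ours, covering the corner case in which a class appears in fewer than \(N/C\) samples.

```python
import random

def random_per_class_sampling(sample_class_counts, subset_size, classes, seed=0):
    """For each class k, build idx_k (indices of samples containing at least
    one object of class k) and randomly draw subset_size // C indices from it.
    Duplicates across classes are kept, as described in the text."""
    rng = random.Random(seed)
    per_class = subset_size // len(classes)
    subset = []
    for k in classes:
        idx_k = [i for i, counts in enumerate(sample_class_counts)
                 if counts.get(k, 0) > 0]
        subset.extend(rng.sample(idx_k, min(per_class, len(idx_k))))
    return subset
```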
The last algorithm is a version of random sampling per class, which directly maximises the number of objects of a given class - _maximum object number sampling per class_ - MONSPeC. The steps of the algorithm are as follows:
1. For each class, create a set \(idx_{k}\) which includes the indexes of training samples containing at least one object of that class.
2. From each set \(idx_{k}\), select \(\frac{N}{C}\) samples that have the largest number of objects of class \(k\).
Similarly to _random sampling per class_, duplicate samples may be present in the resulting subset. Intuitively, the distribution of \(p_{k}^{subset}\) for this algorithm will deviate most from \(p_{k}^{dataset}\). However, the number of objects in this case will be maximal. These assumptions, as for random sampling per class, will be verified in Section IV-A.
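The deterministic variant differs only in how the \(N/C\) indices are picked from each \(idx_{k}\): instead of drawing them at random, the samples with the most objects of class \(k\) are kept. A sketch under the same assumed data representation:

```python
def monspec(sample_class_counts, subset_size, classes):
    """Maximum Object Number Sampling per Class (MONSPeC), deterministic.
    For each class k, keep the subset_size // C samples containing the
    largest number of objects of class k; duplicates across classes remain."""
    per_class = subset_size // len(classes)
    subset = []
    for k in classes:
        idx_k = [i for i, counts in enumerate(sample_class_counts)
                 if counts.get(k, 0) > 0]
        idx_k.sort(key=lambda i: sample_class_counts[i][k], reverse=True)
        subset.extend(idx_k[:per_class])
    return subset
```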
We decided to compare these three algorithms because of their efficiency and speed. There are several reasons why we did not consider more complex approaches. Firstly, to the best of our knowledge, there is no work that focuses on subset selection for the task of object detection based on point
clouds. The sole reference available involves an approach from OpenPCDet [19] in which models are trained on a completely random Waymo subset, representing 20% of the total dataset size. This most basic subset selection algorithm is included in our analysis. Among image-based algorithms, there are also few approaches that focus on the task of object detection. They usually employ ensemble models, incorporating a sample into the dataset when there is a significant difference in the models' results. However, these ensemble-based approaches, e.g. [11], require multiple model trainings, leading to time-consuming subset generation without necessarily improving efficiency. We initially tested the _build-up_ approach from [11], but it ended up giving slightly worse results than _random sampling per class_ at an incomparably higher computational and time cost. With a subset size of 20% of nuScenes, _random sampling per class_ took less than a millisecond, while the approach of [11] took over 40h on a 12-core AMD Ryzen 9 3900X processor with two Nvidia RTX 3080 GPUs. In contrast, the other approaches are adapted to the classification task [15] or are described only superficially, as in [16].
In Section IV we will evaluate each method - select subsets of several fixed sizes with each method, train the PointPillars and CenterPoint-Pillar networks on each, and compare the resulting detection efficiencies. In addition, we will check with the Monte Carlo method the distribution of the distance \(p_{k}^{subset}\) from \(p_{k}^{dataset}\) and the distribution of the number of objects of each class.
## IV Experiments
The first stage of the experiments is to estimate, for each subset size, the distribution of the L1 distance between \(p_{k}^{subset}\) and \(p_{k}^{dataset}\) and the distribution of the number of objects of each class. In addition, we measure the average sampling time for each considered method. In the second stage, we select, using each of the three algorithms (described in Section III-B), subsets of each of the fixed sizes, train the PointPillars and CenterPoint-Pillar networks on each of them, and compare the obtained detection efficiencies.
### _Estimating subset selection algorithms characteristics using Monte Carlo methods_
The Monte Carlo method is a well-known technique for the mathematical modelling of complex processes. We apply it to estimate the distribution of the number of objects of each class and the distribution of the L1 distance between \(p_{k}^{subset}\) and \(p_{k}^{dataset}\). We determine the number of objects normalised by the expected number of objects returned by _random sampling_, i.e. \(n_{k}^{norm}=\frac{n_{k}}{\bar{n}_{k}}=\frac{n_{k}}{\frac{N}{D}\times d_{k}}\). Thus, we make the value of the number of objects independent of the size of the subset. To further facilitate data analysis, we reduce the values \(n_{k}^{norm}\) for all classes to two quantities: \(n_{min}^{norm}=\min_{k=1,...,C}n_{k}^{norm}\) and \(n_{avg}^{norm}=\frac{1}{C}\sum_{k=1}^{k=C}n_{k}^{norm}\). The value \(n_{min}^{norm}\) indicates whether there is enough learning data for each class, while the value \(n_{avg}^{norm}\) informs about the average relative number of objects. By representing the number of objects with these two indicators, it is easy to compare the results for different subset sizes. The L1 distance in our case is given by the formula: \(L1=\sum_{k=1}^{k=C}|p_{k}^{dataset}-p_{k}^{subset}|\).
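For reference, a compact sketch of how these three quantities can be computed for a given subset, using the same assumed per-sample class-count representation as in Section III-B (illustrative, not the released code), is shown below; drawing many subsets and collecting the returned triples reproduces the Monte Carlo estimate described next.

```python
def subset_statistics(sample_class_counts, subset, classes):
    """Return (n_min^norm, n_avg^norm, L1) for one subset of sample indices."""
    D = len(sample_class_counts)
    N = len(subset)
    d_k = {k: sum(c.get(k, 0) for c in sample_class_counts) for k in classes}
    n_k = {k: sum(sample_class_counts[i].get(k, 0) for i in subset) for k in classes}
    d, n = sum(d_k.values()), sum(n_k.values())
    # Normalise by the count expected under random sampling: (N / D) * d_k.
    norm = [n_k[k] / (N / D * d_k[k]) for k in classes]
    l1 = sum(abs(d_k[k] / d - n_k[k] / n) for k in classes)
    return min(norm), sum(norm) / len(norm), l1
```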
In this experiment, we consider subsets representing 5%, 10%, 20%, 40% and 80% of the nuScenes training set, respectively. For the estimation, we draw 1’000’000 subsets for every subset size using the algorithms _random sampling_ and _random sampling per class_. The MONSPeC algorithm is deterministic, so the values of \(n_{min}^{norm}\), \(n_{avg}^{norm}\) and \(L1\) only need to be computed once for each subset size.
Figures 1, 2, and 3 show the distribution of \(n_{min}^{norm}\), \(n_{avg}^{norm}\), and \(L1\) for each subset selection method, respectively. For _random sampling_ and _random per class sampling_, the results are presented as box plots with whiskers. The bottom and top edges of the box are the first and third quartiles, respectively, the line in the middle of the box is the median, and the whiskers cover the entire range of values - from minimum to maximum.
Fig. 1: Box plot of \(n_{min}^{norm}\) versus subset size for _random sampling_, _random sampling per class_ and MONSPeC. The distribution of the \(n_{min}^{norm}\) values was estimated using the Monte Carlo method with 1’000’000 samples.
Fig. 2: Box plot of \(n_{avg}^{norm}\) versus subset size for _random sampling_, _random sampling per class_ and MONSPeC. The distribution of the \(n_{avg}^{norm}\) values was estimated using the Monte Carlo method with 1’000’000 samples.
The results presented here confirm the assumptions made in Section III-B:
* The distance \(p_{k}^{subset}\) from \(p_{k}^{dataset}\) is statistically smallest for _random sampling_, slightly larger for _random per class sampling_ and largest for MONSPeC.
* The number of objects is statistically largest for MONSPeC - the plot in Figure 2 has the \(y\) axis split into two ranges, as the values for MONSPeC differ significantly from the other two algorithms. Following MONSPeC, the algorithms _random sampling per class_ and _random sampling_ rank second and third, respectively.
In addition, in Figure 1 all the boxes of the _random sampling_ method are noticeably below the line \(n_{min}^{norm}=1\). It is therefore very likely that in the subset drawn by _random sampling_, at least for one class the condition \(n_{k}^{norm}<1\) will be satisfied. It has been verified experimentally that this probability is about 99.7%. This means that at least one class will have relatively little learning data. If such a situation occurs for a rare class, then the already small number of objects of this class will be further reduced. This, in turn, is likely to translate into poor detection performance for that class. Such a situation is not likely to occur for the _random sampling per class_ and MONSPeC methods - their boxes are above the line \(n_{min}^{norm}=1\). Therefore, one can expect that the detection efficiency on subsets selected by the _random sampling_ method will be the lowest.
For CenterPoint-Pillar, in terms of mAP, _random sampling per class_ performs slightly better, while in terms of NDS, both methods give virtually the same results.
Hence, it follows that _random sampling_ is the least efficient way to select a subset among those considered. Differences in speed between all the methods are insignificant, as was shown in Section IV-A. At the same time, the other two methods give better detection efficiency. On the other hand, the choice between _random sampling per class_ and _MONSPeC_ is not obvious. Their results are very close, sometimes one of them slightly dominates. However, we lean more towards the MONSPeC due to the deterministic nature of its results. In this way, we can eliminate the risk of the subset selected by the _random sampling per class_ significantly deviating from the mean and returning exceptionally low detection accuracy.
## V Conclusions
We presented a comparison of three algorithms for choosing a subset from the training set. We evaluated their effectiveness on the nuScenes dataset in terms of statistics and the detection performance achieved by two detection algorithms: PointPillars and CenterPoint-Pillar. We estimated their statistical properties using the Monte Carlo method.
The obtained results indicate that a completely random subset selection can be replaced by insignificantly slower and more efficient methods - _random sampling per class_ or _MONSPeC_. Out of these two algorithms, _MONSPeC_ is generally preferred due to its determinism.
Efficient selection of a training set subset allows for faster prototyping of 3D object detection algorithms by reducing computation time on the GPU, thereby resulting in reduced energy consumption and associated environmental costs.
In our future work, we primarily plan to accelerate the prototyping of 3D object detection algorithms using either random sampling per class or MONSPeC. Additionally, we aim to evaluate the efficacy of the aforementioned subset selection algorithms on the KITTI and Waymo Open Dataset with a wider range of LiDAR object detection DCNN architectures, e.g. SECOND, PointRCNN or PV-RCNN. Finally, we aim to explore alternative subset selection methods that consider attributes beyond the number of objects in each class, such as distribution of object sizes or orientations.
|
2307.16566 | The accretion/ejection link in the neutron star X-ray binary 4U 1820-30
I: A boundary layer-jet coupling? | The accretion flow / jet correlation in neutron star (NS) low-mass X-ray
binaries (LMXBs) is far less understood when compared to black hole (BH) LMXBs.
In this paper we will present the results of a dense multi-wavelength
observational campaign on the NS LMXB 4U 1820-30, including X-ray (Nicer,
NuSTAR and AstroSAT) and quasi-simultaneous radio (ATCA) observations in 2022.
4U 1820-30 shows a peculiar 170 day super-orbital accretion modulation, during
which the system evolves between "modes" of high and low X-ray flux. During our
monitoring, the source did not show any transition to a full hard state. X-ray
spectra were well described using a disc blackbody, a Comptonisation spectrum
along with a Fe K emission line at 6.6 keV. Our results show that the observed
X-ray flux modulation is almost entirely produced by changes in the size of the
region providing seed photons for the Comptonisation spectrum. This region is
large (about 15 km) in the high mode and likely coincides with the whole
boundary layer, while it shrinks significantly (<10 km) in low mode. The
electron temperature of the corona and the observed RMS variability in the hard
X-rays also exhibit a slight increase in low mode. As the source moves from
high to low mode, the radio emission due to the jet becomes about 5 times fainter.
These radio changes appear not to be strongly connected to the hard-to-soft
transitions as in BH systems, while they seem to be connected mostly to
variations observed in the boundary layer. | A. Marino, T. D. Russell, M. Del Santo, A. Beri, A. Sanna, F. Coti Zelati, N. Degenaar, D. Altamirano, E. Ambrosi, A. Anitra, F. Carotenuto, A. D'Ai, T. Di Salvo, A. Manca, S. E. Motta, C. Pinto, F. Pintore, N. Rea, J. Van den Eijnden | 2023-07-31T10:57:53Z | http://arxiv.org/abs/2307.16566v1 | The accretion/ejection link in the neutron star X-ray binary 4U 1820-30 I: A boundary layer-jet coupling?
###### Abstract
The accretion flow / jet correlation in neutron star (NS) low-mass X-ray binaries (LMXBs) is far less understood when compared to black hole (BH) LMXBs. In this paper we will present the results of a dense multi-wavelength observational campaign on the NS LMXB 4U 1820-30, including X-ray (_NICER_, _NuSTAR_ and _AstroSat_) and quasi-simultaneous radio (ATCA) observations in 2022. 4U 1820-30 shows a peculiar 170 day super-orbital accretion modulation, during which the system evolves between "modes" of high and low X-ray flux. During our monitoring, the source did not show any transition to a full hard state. X-ray spectra were well described using a disc blackbody, a Comptonisation spectrum along with a Fe K emission line at \(\sim\)6.6 keV. Our results show that the observed X-ray flux modulation is almost entirely produced by changes in the size of the region providing seed photons for the Comptonisation spectrum. This region is large (\(\sim\)15 km) in the high mode and likely coincides with the whole boundary layer, while it shrinks significantly (\(\lesssim\)10 km) in low mode. The electron temperature of the corona and the observed RMS variability in the hard X-rays also exhibit a slight increase in low mode. As the source moves from high to low mode, the radio emission due to the jet becomes \(\sim\)5 fainter. These radio changes appear not to be strongly connected to the hard-to-soft transitions as in BH systems, while they seem to be connected mostly to variations observed in the boundary layer.
keywords: accretion, accretion discs - stars:neutron - X-rays: binaries - X-rays, individuals: 4U 1820-30 - ISM: jets and outflows
## 1 Introduction
Neutron Star (NS) low-mass X-ray Binaries (LMXBs) are binary systems composed of a NS accreting matter from a low-mass companion star, i.e., typically lighter than 1 M\({}_{\odot}\). The material accreted onto the compact object is responsible for most of the electromagnetic emission of these sources from the optical to the X-rays. These systems have historically been grouped into two main classes, Z- and atoll sources, based on the particular tracks they draw in X-ray colour-colour diagrams (Hasinger & van der Klis, 1989).
Z-sources are the brighter of the two classes, being persistently active and always accreting around the Eddington limit, i.e., at a typically observed X-ray luminosity range of L\({}_{X}\sim 10^{37}-10^{38}\) erg/s (see, e.g. van der Klis, 2006). Atolls are typically fainter, i.e., at L\({}_{X}\sim 10^{36}-10^{37}\) erg/s (Munoz-Darias et al., 2014) and are thereby considered to be accreting at lower rates. They can be persistent or transient, i.e. usually found in a dormant, faint X-ray regime called quiescence with only episodic outbursts where the X-ray luminosity becomes comparable to the level of the persistent NS LMXBs. However, the mere existence of several systems showing both Z and atoll behaviours makes the distinction between these regimes somewhat blurred. This is the case for XTE J1701-462, which transitioned from an atoll to a Z-source regime in both its 2006/2007 (see, e.g., Homan et al., 2010) and 2022 (Homan et al., 2022a, b)
outbursts and the other two transient Z-sources IGR J17480-2446 (e.g. Chakraborty et al., 2011) and MAXI J0556-332 (Homan et al., 2011).
The accretion flow typically consists of an optically thick accretion disc, an optically thin cloud of hot electrons usually dubbed the corona, and a boundary layer (BL) connecting the inner edge of the disc with the NS surface. The observed X-ray spectral continuum from these sources can be broken down into several components, corresponding to these regions of the accretion flow: a multi-colour disc black body in the soft X-rays and a Comptonisation spectrum, which can extend up to hundreds of keV (Pintore et al., 2016; Ludlam et al., 2016; Gambino et al., 2019). The contribution from a boundary layer between the disc and the NS surface, or from the surface itself, can also be visible directly as an additional black body component or as the source of the seed photons for the Comptonisation spectrum, at a blackbody temperature \(kT_{\rm bb}\) of about 1-2 keV (see, e.g., Mondal et al., 2020; Marino et al., 2022).
The X-ray spectral-timing properties of these objects evolve between two main states: one dominated by the corona emission, typically referred to as the hard state, and the other dominated by the accretion disc and/or the NS/boundary layer, i.e., the soft state (for a review, Done et al., 2007).
In the radio-mid-IR region, the contribution of a compact jet, i.e., a collimated ionised outflow of relativistic particles ejected by the system, is instead dominant. The observed properties of the compact jet show a clear correlation with the accretion flow. This connection is well-established for black hole (BH) transients (see, e.g., Corbel et al., 2000; Coriat et al., 2011), where, as their X-ray spectral-timing properties display dazzling variations between hard and soft states, similar to the states observed in NS LMXBs, the jet evolves from being radio loud to a quenched state, respectively (Fender & Belloni, 2004). These radio/X-ray changes usually occur over time-scales of the order of one or a few weeks in both BH and NS LMXBs (see, e.g., Russell et al., 2014; Marino et al., 2019; Marcel et al., 2019; De Marco et al., 2021; Rhodes et al., 2022). However, this standard pattern is less clear for NS systems. Despite being observed several times (e.g. Gusinskaia et al., 2017), it is not yet established whether jet quenching in the soft state is also the norm for all NS LMXBs, with some cases where the presence of compact jets was reported even after the transition to the soft state (Migliari et al., 2004). Interestingly, jet quenching in some accreting NSs has sometimes been observed without a state transition but corresponding to an evolution in the flux (Russell et al., 2021; Panurach et al., 2023), suggesting a more complex accretion/ejection coupling with respect to BH X-ray binaries (XRBs). When observed, jets in NS LMXBs are generally \(\sim\)20 times radio-fainter than BH LMXBs (e.g., Gallo et al., 2018; van den Eijnden et al., 2021), although several systems, often harbouring X-ray millisecond pulsars (e.g., Russell et al., 2018; Gusinskaia et al., 2020; Coti Zelati et al., 2021), have been observed at radio luminosities comparable to BH systems. Different jet geometries and/or weaker couplings with the accretion flow in accreting NSs with respect to BHs have also been proposed (Marino et al., 2020). The emerging observational picture is not yet well understood and, in particular, the role (if any) played by the NS magnetic field, its emitting surface, its spin, or the presence of a boundary layer is unclear. Multi-wavelength surveys of these systems that probe the behaviour of the jets and the accretion flow over a variety of mass accretion rates/X-ray states are crucial to self-consistently explore jet launching in NSs.
### 4u 1820-303
A particularly intriguing target for this kind of study is the NS LMXB 4U 1820-30. This source is composed of a NS accreting matter from a white dwarf (WD) companion in a tight orbit. Indeed, with its orbital period of only 11.4 minutes (Stella et al., 1987), i.e., the shortest orbital period known for any XRB, 4U 1820-30 is identified as an Ultra-Compact XRB (UCXB; for a recent review see Armas Padilla et al., 2023). Located at a distance of about 7.6 kpc (Kuulkers et al., 2003), the system resides within the globular cluster NGC 6624. The source is classified as an atoll and is persistently accreting at a relatively high luminosity, displaying mostly soft X-ray spectral states (the so-called "banana" state for atolls), with occasional transitions to the hard state ("island" state) (Tarana et al., 2007). Its behaviour is indeed similar to what is observed in the so-called "bright atolls" group (e.g., D'Ai et al., 2010; Egron et al., 2013; Iaria et al., 2020).
Perhaps the most peculiar aspect of the source is the \(\sim 170\) d super-orbital accretion cycle it exhibits. The luminosity modulation is intrinsic, i.e., not due to absorption from occulting plasma (Zdziarski et al., 2007), and it likely originates from the presence of a third object orbiting the system from afar (Chou & Grindlay, 2001). According to this theory, the tidal forces exerted by this third object on 4U 1820-30 trigger fluctuations in the orbital eccentricity of the binary, which in turn translate into periodic increases in the mass-accretion rate. As a consequence, 4U 1820-30 oscillates between a low mode, at \(L_{\rm low}\sim 8\times 10^{36}\) erg s\({}^{-1}\), and a high mode, at \(L_{\rm high}\sim 6\times 10^{37}\) erg s\({}^{-1}\) (in the range 0.5-10 keV and assuming a distance D = 7.6 kpc), over a time-scale of about 170 d. These modes can be characterized by different spectral properties, most likely connected to changes in the accretion flow. In the low mode, the system can be found in the soft ("banana") and sometimes in the hard ("island") X-ray spectral states (in't Zand et al., 2012), during which it exhibits frequent type-I X-ray bursts. On the other hand, in the high mode, the system is only soft and bursts are typically absent (see, e.g., Titarchuk et al., 2013).
The presence of the Fe K line in the X-ray spectrum is debated, with detections reported only by a number of X-ray spectral studies (Cackett et al., 2008; Titarchuk et al., 2013) and not by others (e.g., Costantini et al., 2012; Mondal et al., 2016). Moreover, while 4U 1820-30 was initially identified as a non-quenching jet NS (Migliari et al., 2004), it has recently been suggested that compact jet quenching does occur (Russell et al., 2021). Indeed, the change in the spectral properties of the jet seems to occur only when, in the transition from low to high mode, the X-ray flux exceeds a certain threshold (see Fig. 2 of Russell et al., 2021). Such a result would suggest a critical connection between the jet and the mass-accretion rate rather than the spectral hardness, in a way that was never observed for BH systems and that is instead sometimes observed in accreting NSs, both in LMXBs (Panurach et al., 2023) and in XRBs hosting high-mass companion stars (HMXBs, see van den Eijnden et al., 2018). However, those results were obtained from observations taken sporadically over the past \(\sim\)17 years. To properly understand the accretion-ejection coupling in this system - and potentially in similar objects - it is fundamental to measure the accretion flow/jet properties over a single accretion cycle.
In this series of papers, we present the results of a comprehensive radio-X-ray observational campaign performed throughout 2022, aimed at following 4U 1820-30 in the X-rays and radio during a whole super-orbital cycle. In this manuscript, we present the results of a broadband X-ray spectral analysis and timing analysis using _NICER_, _NuSTAR_ and _AstroSat_ data from the 2022 April-September cycle. We also used data from the Australia Telescope Compact Array (ATCA) taken close to our broadband X-ray spectra to show the jet evolution during this cycle. A following paper (Paper II of this series, Russell et al., in preparation) will present the results of the complete radio monitoring and how they compare with the X-ray data analysis reported here.
## 2 Observations & Data Reduction
A dense X-ray campaign has been performed between April 2 and July 26 2022, with 17, 3 and 1 observations for _NICER_, _NuSTAR_ and _AstroSat_, respectively. A summary of these observations is reported in Table 1. All data were reprocessed using the HEAsoft package v. 6.30. The latest available CALDB version was used for all the instruments. More details on the data reduction for each observatory are given in the following subsections.
### NICER
A two-week cadence monitoring campaign was performed by _NICER_ (see Table 1). Data were reduced using the nicerl2 task (NICERDAS 2019-05-21 v006). We set the recommended calibration processes and standard screening, and we added the niprefilter2_coltypes=base,3c50 parameter so that the 3C50 model could be used to derive background spectra later. We extracted the cleaned event files, checking that all detectors were active during observations and excluding data from the "noisy" detectors (labelled 14 and 34), in order to reduce the detector noise. We accumulated light curves from all the observations using the xselect tool, with the aim of checking for type-I X-ray bursts, finding only one in observation 5604010304. In the following data analysis, the burst was excluded. Then we selected the GTIs using nimaketime and applied them to the data via niextract-events, selecting events with PI channel between 25 and 1200 (0.25-12.0 keV). We used the nibackgen3C50 tool to extract both the spectra of the source and the background from the cleaned event files, selecting the 2020 gain calibration.
During the writing of this paper, the updated HEAsoft version 6.31 was released, along with substantial changes to the _NICER_ mission data analysis pipeline. To check for consistency, we reanalysed three observations of the 2022 _NICER_ campaign using a different background model, i.e. the SCORPEON model. The spectra obtained with the different HEAsoft versions are basically identical and the best-fit parameters obtained are not significantly affected by the change of version and/or background model. We therefore did not reanalyse the whole data set and kept working on the spectra obtained with the previous HEAsoft version.
### NuSTAR
_NuSTAR_ observed the system three times during this campaign, for a total exposure of about 58.6 ks. We reduced the data using standard tools within the Nustardas package. A 100" radius circular region centered at the coordinates of the source was selected as source region. In order to take into account any background non-uniformity on the detector, we extracted the background spectra using four circles of \(\sim\)50" radii placed on different areas which were well far away from the source region. Finally, we used Nuproducts to extract spectra and light curves. We systematically checked for the presence of type-I X-ray bursts within the _NuSTAR_ observations, but we did not find any.
### AstroSat
As part of the multi-wavelength campaign, _AstroSat_ observed 4U 1820-30 on April 15, 2022. We have analysed data from the Soft X-ray Telescope (SXT, Singh et al., 2016) and the Large Area X-ray Proportional Counter (LAXPC; Yadav et al., 2016; Antia et al., 2017, 2021) on-board _AstroSat_ (Agrawal, 2006; Singh et al., 2014). LAXPC is one of the primary instruments on-board and consists of three co-aligned identical proportional counter detectors, viz. LAXPC10, LAXPC20 and LAXPC30. Each of these works in the energy range of 3 - 80 keV (for details see Yadav et al., 2016; Antia et al., 2017, 2021). Due to the gain instability caused by the gas leakage, LAXPC10 data were not used, while LAXPC30 was switched off during these observations1. Therefore, we have used data from LAXPC20 for our work. These data were collected in the Event mode (EA), which contains information about the time, channel number and anode ID of each event. The LAxpcSoft v3.32 software package was used to extract spectra. Background files are generated using the blank sky observations (see Antia et al., 2017, for details).
Footnote 1: LAXPC30 is switched off since 8 March 2018, refer to [http://astrosat-ssc.iucaa.in/](http://astrosat-ssc.iucaa.in/)
Footnote 2: [http://www.tifr.res.in/~astrosat_lamxpc/LaxpcSoft.html](http://www.tifr.res.in/~astrosat_lamxpc/LaxpcSoft.html)
The SXT aboard _AstroSat_ is a focusing instrument sensitive mainly in the 0.3-7 keV energy band (Singh et al., 2014, 2017) and its camera assembly uses an e2v CCD, identical to that on _XMM-Newton_/MOS and _Swift_-XRT. The SXT observations of 4U 1820-30 were carried out in the photon counting mode.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Id.\({}^{a}\) & ObsID & \multicolumn{2}{c}{Start Time} & Exposure \\ & & (UTC) & (MJD) & ks \\ \hline \multicolumn{5}{c}{_NuSTAR_} \\ \hline Nu01 & 30802009002 & 2022-04-14 & 59683 & 23.9 \\ Nu02 & 30802009004 & 2022-05-31 & 59730 & 14.6 \\ Nu03 & 30802009006 & 2022-07-10 & 59770 & 20.1 \\ \hline \multicolumn{5}{c}{_AstroSat_} \\ \hline As01 & 9000005070 & 2022-04-15 & 59684 & 22.8 \\ \multicolumn{5}{c}{_NICER_} \\ \hline Ni01 & 5604010101 & 2022-04-02 & 59671 & 8.1 \\ Ni02 & 5604010102 & 2022-04-14 & 59683 & 4.2 \\ Ni03 & 5604010103 & 2022-04-16 & 59685 & 1.1 \\ Ni04 & 5604010301 & 2022-04-30 & 59699 & 2.6 \\ Ni05 & 5604010302 & 2022-05-01 & 59700 & 6.5 \\ Ni06 & 5604010304 & 2022-05-31 & 59730 & 9.0 \\ Ni07 & 5604010305 & 2022-06-01 & 59731 & 2.0 \\ Ni08 & 5604010401 & 2022-06-12 & 59743 & 9.2 \\ Ni09 & 5604010402 & 2022-06-13 & 59743 & 1.3 \\ Ni10 & 5604010501 & 2022-06-26 & 59756 & 1.3 \\ Ni11 & 5604010502 & 2022-06-27 & 59757 & 1.3 \\ Ni12 & 5604010503 & 2022-06-27 & 59758 & 3.4 \\ Ni13 & 5604010601 & 2022-07-08 & 59768 & 6.3 \\ Ni14 & 5604010602 & 2022-07-09 & 59769 & 4.1 \\ Ni15 & 5604010701 & 2022-07-24 & 59784 & 2.9 \\ Ni16 & 5604010702 & 2022-07-25 & 59785 & 4.0 \\ Ni17 & 5604010703 & 2022-07-26 & 59786 & 2.1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: List of the _NuSTAR_, _AstroSat_ and _NICER_ observations of 4U 1820-30 used in this work. \({}^{a}\)Each observation is marked with an identification code.
We have used Level-2 data provided by the SXT payload operation center (POC) in Mumbai, India, reduced using the most recent pipeline and calibration database (version 1.4b). Events from each orbit were merged using the SXT Event Merger Tool (Julia Code3). These merged events were used to extract images and spectra using the ftool task xselect. In SXT, pile-up effects are notable at count rates higher than 40 c s\({}^{-1}\)4. In our observations, the source showed 80 c s\({}^{-1}\), suggesting a significant pile-up. To minimise the effect of the pile-up, the source spectrum is extracted from an annulus between 5 \(\arcmin\) and 15 \(\arcmin\) from the centre of the image, following the method described in Chakraborty et al. (2020). We have used the response (sxt_pc_mat_g012.rmf) and background (SkyBkg_comb_EL3p5_Cl_Rd16p_v01.pha) files provided by the SXT team. The appropriate ARF file suitable for the specific source region is generated using the command line auxiliary tool sxtARFModule.
Footnote 3: [http://www.tifr.res.in/~astrosat_sxt/dataanalysis.html](http://www.tifr.res.in/~astrosat_sxt/dataanalysis.html)
Footnote 4: [https://web.iucaa.in/~astrosat/AstroSat_handbook.pdf](https://web.iucaa.in/~astrosat/AstroSat_handbook.pdf)
## 3 Spectral Analysis
We present the three-year light curve of 4U 1820-30 as observed by the Monitor of All Sky X-ray Image (_MAXI_; Matsuoka et al., 2009) in Figure 1 (top panel), displaying its super-orbital modulation. In the bottom panel, we zoom in on the period during which our campaign was carried out, with the dates of the _NICER_, _NuSTAR_ and _AstroSat_ visits highlighted. It is apparent that our source showed a slightly more irregular behaviour during our \(\sim\)150-day observing window, with several highs and lows in rapid succession.
Figure 1: (_Top_) 4U 1820-30 daily light curve as observed by _MAXI_ from June 2020 to June 2023, displaying the super-orbital modulation typical of the system. The time interval enclosing the times of the observations used in this paper is drawn as a green rectangle. (_Bottom_) A zoom of the top light curve displaying _MAXI_ light curve from April to September 2022. The observation dates for _NICER_, _NuSTAR_ and _AstroSat_ are highlighted as vertical lines in green dashed, purple solid and red solid, respectively. Colored boxes with numbers are displayed to identify the dates of the single _NICER_, _NuSTAR_ and _AstroSat_ observations using the labels introduced in Table 1 and the same colors chosen for the vertical lines.
Figure 2: Hardness Intensity Diagram of 4U 1820-30, produced by using all available _MAXI_ observations and all corresponding _Swift_/BAT count-rates. In particular we used _MAXI_ count-rates over the 2–20 keV energy range for the intensity, while we estimate the hardness as the ratio between the BAT count-rate (15–50 keV) and the _MAXI_ count-rate in the soft band (2–4 keV) taken the same day (as in Russell et al., 2021). Superimposed to the grey data points, we highlight with green circles the position in the HID during the dates of the single _NICER_ observations, labeled with progressive numbers. For the _NICER_ observations used in broadband spectra A, B and C, we adopted purple diamonds with those letters as labels. The labels “UB” and “LB” indicate the locations of the “Upper” and “Lower” Banana branch.
In order to place the evolution of the system during this cycle in the framework of its observational history, we created a Hardness Intensity Diagram (HID) using _MAXI_ and _Swift_/BAT data (Krimm et al., 2013). In particular, we collected all the available _MAXI_ count-rates (in the 2-4 keV and 2-20 keV energy ranges) since October 2009 and the corresponding, i.e. taken the same day, BAT count-rates (15-50 keV) from the respective websites5. We then plotted the _MAXI_ total count-rates versus the ratio between the BAT and the _MAXI_ count-rates in the 2-4 keV band to build the HID (see Fig. 2). In the Figure, we highlighted the position of the source in the HID on the days of the _NICER_ and _NuSTAR_ observations, showing that the source lingered in the soft ("banana") state during the whole surveyed cycle.
Footnote 5: _MAXI_: [http://mxi.riken.jp/top/index.html](http://mxi.riken.jp/top/index.html), BAT: [https://swift.gsfc.nasa.gov/results/transients/](https://swift.gsfc.nasa.gov/results/transients/)
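As an illustration of the HID construction described above (not code used in this work), the diagram coordinates can be assembled from already-downloaded, same-day count rates as in the following sketch; the array names are ours.

```python
import numpy as np

def hid_points(maxi_2_20, maxi_2_4, bat_15_50):
    """Hardness-Intensity Diagram coordinates from same-day monitor rates.

    Inputs are assumed to be aligned 1-D arrays (one entry per day with both
    MAXI and Swift/BAT coverage).  Intensity is the 2-20 keV MAXI rate;
    hardness is the BAT 15-50 keV rate divided by the soft 2-4 keV MAXI rate.
    """
    hardness = np.asarray(bat_15_50, float) / np.asarray(maxi_2_4, float)
    intensity = np.asarray(maxi_2_20, float)
    return hardness, intensity
```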
In order to perform the spectral analysis, we grouped all _NICER_ and _NuSTAR_ spectra through optimal binning (Kaastra & Bleeker, 2016), i.e. in order to have a grouping consistent with the spectral resolution of the instrument (and avoid oversampling) in a given energy range, but at the same time keeping at least 20 counts per bin to ensure the use of the \(\chi^{2}\) statistics. The SXT and LAXPC spectra were grouped to have at least 25 counts/bin. The LAXPC spectra showed a large calibration uncertainty, with background dominating above 20 keV. Similar issues have also been observed in other observations (see e.g., Beri et al., 2021; Sharma et al., 2023). Therefore, we have used data only up to 20 keV for our spectral analysis.
We used Xspec v.12.12.1 for all the performed spectral fits. In all cases, we used the tbabs model to take into account interstellar absorption, setting the photoelectric cross-sections and the element abundances to the values provided by Verner et al. (1996) and Wilms et al. (2000), respectively. When data from different instruments were used simultaneously, a constant component was also included to account for inter-calibration uncertainties between different instruments. We consistently checked that discrepancies between the values for the constant term found for different instruments were not larger than 20%. When fitting _NICER_ and _NuSTAR_ together, a small offset between the slopes of those spectra could be appreciated, as often observed using data from different instruments (see, e.g., Ludlam et al., 2020). To take into account such an instrumental issue, we left the \(\Gamma\) parameter in the Xspec Comptonisation component untied between _NICER_ and _NuSTAR_, in order to allow some flexibility between the two data sets. However, we checked systematically that the difference in \(\Gamma\) was never above 10%.
Among the seventeen _NICER_ observations used, several have been performed in a span of a few days within each other. In order to maximize the statistics in each spectrum, we checked whether any difference in flux and/or hardness could be appreciated between close-in-time observations. If the spectra were compatible, we summed them by means of the FTOOL addspec. Otherwise, we analysed them separately. This strategy leaves us with eleven final _NICER_ observations. The three _NuSTAR_ observations caught the source during two high modes and a low mode. In all three cases, at least one _NICER_ observation was performed within \(\sim\)days from the _NuSTAR_ visit, giving us the opportunity to investigate the broadband X-ray spectral behaviour of the source at the extremes of its super-orbital oscillation. In particular we paired observations Ni02 with Nu01, Ni06 with Nu02, Ni10 with Nu03. In the following, these broadband spectra will be simply referred to as A, B and C respectively.
### Broadband spectra: the continuum
An initial fit of spectra A, B and C with a simple absorbed powerlaw model resulted in poor fits in all cases. We replaced the power-law with a more physically motivated thermal Comptonisation model, using the convolution model thcomp(Zdziarski et al., 2020) and a blackbody model bbodyrad as seed photons spectrum. The involved parameters are the power-law photon index \(\Gamma\), the electron temperature of the Comptonising medium \(kT_{e}\), the fraction of the seed source covered by the scattering cloud \(f_{\rm cov}\), the blackbody temperature \(kT_{\rm bb}\) and its normalisation \(K_{\rm bb}\), the latter being connected to the actual blackbody radius of the source through the formula: \(K_{\rm bb}=\left(R_{\rm bb}/D_{10\;{\rm kpc}}\right)^{2}\), with \(D_{10\;{\rm kpc}}\) the distance of the source in units of 10 kpc. Using a model with a high energy roll-over such as thcomp improved the fit, but apparent residuals at low energies suggested the presence of an additional spectral component to be included. We therefore added a diskbb component, characterised by a disc blackbody temperature \(kT_{\rm disc}\) and a normalisation \(K_{\rm disc}\) which, similarly to the bbodyrad normalisation, can be translated into the size of the black body emitting region by the formula \(K_{\rm disc}=\left(R_{\rm disc}/D_{10\;{\rm kpc}}\right)^{2}\cos i\) with \(i\) the inclination of the system. In the disc case, the size of the emitting region can be considered as the inner radius of the disc. Using diskbb, the fit to the continuum becomes acceptable and the residuals are significantly flattened. The model chosen is therefore:
\[{\rm Model\ 0:\ tbabs\times(thcomp\times bbbodyrad+diskbb)} \tag{1}\]
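For reference, the bbodyrad and diskbb normalisations quoted above can be turned into emitting-region sizes with the standard conversions; the short sketch below is ours (illustrative only, with hypothetical normalisation values) and ignores colour-correction factors.

```python
import math

def bbodyrad_radius_km(k_bb, distance_kpc):
    """Blackbody radius (km) from the bbodyrad normalisation K = (R_km / D_10kpc)^2."""
    return math.sqrt(k_bb) * (distance_kpc / 10.0)

def diskbb_inner_radius_km(k_disc, distance_kpc, inclination_deg):
    """Apparent inner-disc radius (km) from the diskbb normalisation
    K = (R_km / D_10kpc)^2 * cos(i)."""
    return math.sqrt(k_disc / math.cos(math.radians(inclination_deg))) * (distance_kpc / 10.0)

# Example with hypothetical normalisations, for D = 7.6 kpc and i = 45 deg as adopted in the text.
print(bbodyrad_radius_km(400.0, 7.6), diskbb_inner_radius_km(100.0, 7.6, 45.0))
```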
We tried to swap diskbb and bbodyrad, testing a scenario where the disc photons serve as Comptonisation seed photons, but we found that the resulting estimates for \(R_{\rm disc}\) are relatively small for all spectra, in particular for spectrum C with an unphysical upper limit of 8 km. We also tried to apply a second thcomp component to diskbb, driven by the expectation that the corona should also Compton scatter at least a fraction of the disc photons, but without any statistical improvement.
We also used Model 0 to describe the _AstroSat_ observation (labelled As01), obtaining a satisfactory fit and overall agreement between the best-fit parameters found and those of spectra A, B and C.
### Broadband spectra: discrete spectral features
Even with a satisfying description of the continuum, some localised residuals could still be spotted at about 6.5-7 keV, a clear sign of an unmodelled feature. The presence of residuals in this region has also been reported by other authors, and tackled with a combination of absorption edges (in particular in Mondal et al., 2016) or as a reflection Fe line (e.g., Titarchuk et al., 2013). In this work, we successfully modelled the feature with a diskline component in all three spectra. We refer to the Appendix, section A, for a comparison between different models to describe these residuals and to justify our final choice of the diskline model.
The line was relatively weak in all observations (as typically observed in UCXBs, e.g., Koliopanos et al., 2021), i.e., normalisation \(K_{\rm line}\) ranging from 2\(\times 10^{-3}\) for spectra A and B to 8\(\times 10^{-4}\) for spectrum C. The contribution to the total flux from the line was about 0.1%-0.2% in all cases, so that the lower normalisation in spectrum C can be ascribed to a global fading of the X-ray output from the source. Due to the weakness of the feature, not all line parameters could be well constrained. We then left the line energy \(E_{\rm line}\), the inner radius of the disc \(R_{\rm in}\) and \(K_{\rm line}\) as the only free parameters, fixing the inclination
to 45\({}^{\circ}\) (in accordance with the inclination range identified by Anderson et al. 1997), the emissivity index \(\epsilon\) to -2 and the outer radius of the disc \(R_{\rm out}\) to 1000 \(\rm R_{G}\). The feature is found at an energy \(E_{\rm line}\) of \(\sim\)6.6 keV in all three spectra. Only upper limits could be placed on \(R_{\rm in}\), in all cases suggesting a disc that extends relatively close to the NS.
Residuals in the hard X-ray band were apparently flattened by the simple addition of a diskline and did not show any trace of other reflection features, such as the Compton hump beyond 10 keV. However, the presence of the Iron line feature signals the existence of an underlying reflection component in the spectrum, despite being probably very weak and contributing only marginally to the continuum. A more detailed analysis of the reflection component with self-consistent and more sophisticated models will be presented in a forthcoming companion paper (Anitra et al., in preparation).
Clear absorption residuals were also present in the soft X-ray band, below 1 keV. Those features are consistent with O VIII (at \(\sim\)0.6 keV) and the Ne IX complex (at \(\sim\)0.9 keV), respectively, which are known to be present in the spectra of the source and have been ascribed to the interstellar medium (ISM) (e.g. Costantini et al. 2012). We improved the fit by multiplying the whole model by two gabs components, i.e., absorption lines with Gaussian profiles.
With these additions, our final model, hereafter Model 1, is the following:
\[\tt tbabs\times gabs\times gabs\times(thcomp\times bbodyrad+diskbb+diskline) \tag{2}\]
Furthermore, some absorption features were found in the _NICER_ spectra at energies of \(\sim\)1.8 keV, \(\sim\)2.2 keV as well as an emission feature at \(\sim\)1.7 keV. The nature of such lines is most likely instrumental, e.g. due to silicon and gold in the detector or the filters. We therefore introduced some additional gaussian features to take them into account.
For the fit to the _AstroSat_ As01 spectrum, we included the diskline component, but left out instead the two absorption gaussian component. Indeed, _AstroSat_ known calibration issues makes evaluate the presence of such features more challenging. The broadband spectra fitted with Model 1 and respective residuals are shown in Figure 3, while we refer to Table 2 for the best-fit parameters obtained.
### Results on the _NICER_ monitoring
In order to reconstruct a more detailed physical evolution of the system during this cycle, we then analysed individually each of the final 11 _NICER_ spectra (see above for the criteria followed in pairing some of the original 17 spectra to increase the statistics). We consistently used a modified version of Model 1. Indeed, without the hard X-rays coverage provided by _NuSTAR_ and/or _AstroSat_, the degeneracies between \(kT_{e}\), \(\Gamma\) and the blackbody parameters made the parameters of the fits with thcomp\(\times\)bbbodyrad completely unconstrained. We therefore decided to replace this component with the simpler nthcomp, the main difference being the lack of knowledge on the normalisation of the seed photons spectrum. For the same reason, we had to fix the seed photons temperature in nthcomp\(kT_{\rm seed}\) to 1.0 keV, compatible with the values obtained for the broadband fits. In addition, for the modelling of the discrete features, we had to fix some parameters as the fit was unable to find meaningful constraints for them. In particular, we fixed \(R_{\rm in}\) in diskline to 20 \(\rm R_{G}\) and the energies of the absorption features \(E_{\rm line,1}\) and \(E_{\rm line,2}\) to 0.68 keV and 0.87 keV respectively. The best-fit parameters obtained are shown in Tab. 2. In Fig. 2 we display how the main best-fit parameters, the 0.5-10 keV flux and the hardness ratio evolve over time during this cycle. From the Table and the plot it is apparent how some parameters, e.g., \(K_{\rm compt}\), seem to follow the super-orbital modulation traced by the flux, while others, in particular the disc temperatures \(kT_{\rm disc}\) and radius \(R_{\rm disc}\), appear stable.
## 4 Timing analysis
In order to investigate the short-term X-ray variability of the source during the mid-2022 accretion cycle, we extracted Leahy-normalised power density spectra (PDS) from _NuSTAR_, _NICER_ and _AstroSat_/LAXPC using as energy ranges 3-25 keV, 0.5-10 keV and 3-25 keV, respectively. We performed dead-time correction on each
Figure 3: Broadband _NICER_ (green) and _NuSTAR_ (blue) spectra with Model 1 and residuals. Different line styles were adopted to distinguish between the different components: dot for diskbb, dash for thcomp\(\times\)bbodyrad and dash-dot for diskline.
_NuSTAR_ PDS using the Fourier Amplitude Difference (FAD) technique (Bachetti & Huppenkothen, 2018) and then extracted PDS averaging over 150-s long segments with a bin time of 1 ms. A representative sample of the obtained PDS for _NuSTAR_ is shown in Figure 4. For _NICER_ we broke down the observations into 26-s long segments and used a bin time of 0.2 ms. We then averaged the PDS created from each segment to produce one averaged PDS per observation with a Nyquist frequency of \(\approx\) 2.5 kHz. Finally, for LAXPC, light curves created with a time resolution of 10 ms were used to create Leahy-normalized PDS with 1/8192 s time bins. Also in this case, we averaged the power spectra obtained from all the segments to obtain one resultant power spectrum. Instead of subtracting the Poisson noise contribution, we rather fitted it with a constant component.
We did not find any significant detection of discrete features like quasi-periodic oscillations in these power spectra. From each PDS, we estimated the fractional root mean square (RMS) variability, listed in Table 3 (see Table B2 for a complete breakdown of the _NICER_ sample). While the obtained values for the RMS in _NICER_ show little-to-no evolution through the 17 considered snapshots, the third _NuSTAR_ observation (C) shows a stronger RMS variability compared to the previous ones (A-B).
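The procedure just described can be summarised by the following minimal numpy sketch (our own illustration, assuming an evenly binned light curve as input; it does not reproduce the FAD correction applied to the _NuSTAR_ data, and the Poisson level, fitted as a constant in our analysis, enters here only through the `noise` parameter).

```python
import numpy as np

def leahy_pds(counts, dt, seg_bins):
    """Average Leahy-normalised PDS over segments of `seg_bins` bins each."""
    nseg = len(counts) // seg_bins
    power = np.zeros(seg_bins // 2)
    for i in range(nseg):
        seg = counts[i * seg_bins:(i + 1) * seg_bins]
        ft = np.fft.rfft(seg)[1:seg_bins // 2 + 1]      # drop the zero-frequency term
        power += 2.0 * np.abs(ft) ** 2 / seg.sum()      # Leahy norm: Poisson level ~ 2
    freq = np.fft.rfftfreq(seg_bins, dt)[1:seg_bins // 2 + 1]
    return freq, power / nseg

def fractional_rms(freq, power, rate, noise=2.0, fmin=0.1, fmax=100.0):
    """Integrate the noise-subtracted PDS over [fmin, fmax] to get the fractional RMS."""
    df = freq[1] - freq[0]
    band = (freq >= fmin) & (freq <= fmax)
    var = np.sum((power[band] - noise) * df) / rate      # (rms/mean)^2
    return np.sqrt(var) if var > 0 else 0.0
```

With a 1 ms bin time and 150-s segments this mirrors the _NuSTAR_ setup, while 0.2 ms bins and 26-s segments give the \(\approx\)2.5 kHz Nyquist frequency quoted above for _NICER_.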
## 5 Radio Observations
### ATCA data reduction
4U 1820-30 was observed by the Australia Telescope Compact Array (ATCA) 13 times throughout 2022. In the present work, we will only use the observations which were performed close to the _NuSTAR_ observations, while an analysis of the complete sample will be presented in a forthcoming paper (Russell et al., in preparation). The dates of the three observations presented here are: 2022 April 15, May 28 and July 10. During these observations, ATCA was in its more extended 6D, 1.5B, and 6B configurations, respectively. In all cases, the fixed location antenna 6 (located 6 km from the array core) was used during the analysis, providing angular resolutions of \(\sim\) a few arcseconds for all observations. Observations were recorded simultaneously at central frequencies of 5.5 and 9 GHz, with a bandwidth of 2 GHz at each central frequency comprised of 2048 1-MHz channels. We used PKS B1934\(-\)638 for bandpass and flux density calibration, and B1817\(-\)254 for phase calibration. Data were flagged, calibrated, and imaged following standard procedures in the Common Astronomy Software Application (casa version 5.1.2; CASA Team et al., 2022). Imaging used a Briggs weighting scheme with a robust parameter of
Figure 4: X-ray PDS for the _NuSTAR_ (3-25 keV) observations used in the broadband spectral analysis. A cons model at value 2.0 is displayed as a horizontal black line.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline \multicolumn{6}{c}{Broadband spectral analysis} \\ \hline & \multicolumn{2}{c}{**model:** tbabs\(\times\)gabs\(\times\)gabs\(\times\)(thComp\(\times\)bbodyrad+diskbb+diskline)} \\ \hline & & A1 & A2 & B & C \\
**Parameters** & & Ni02+Nu01 & As01 & (Ni06+Ni07)+Nu02 & (Ni13+Ni14)+Nu03 \\ \cline{2-6} \(N_{\rm H}\) & (\(\times 10^{22}\) cm\({}^{-2}\)) & 0.161\({}^{+0.003}_{-0.004}\) & 0.237\(\pm\)0.007 & 0.165\(\pm\)0.002 & 0.165\({}^{+0.002}_{-0.001}\) \\ \(\Gamma\) & & 1.84 \(\pm\) 0.03 & 2.60\(\pm\)0.03 & 1.70\({}^{+0.03}_{-0.02}\) & 2.34\({}^{+0.07}_{-0.06}\) \\ \(kT_{\rm e}\) & (keV) & 3.17 \(\pm\) 0.04 & (4.80) & 2.97\({}^{+0.03}_{-0.02}\) & 4.85\({}^{+0.15}_{-0.24}\) \\ \(kT_{\rm bb}\) & (keV) & 1.16\({}^{+0.05}_{-0.03}\) & 1.15\(\pm\)0.04 & 1.04\({}^{+0.05}_{-0.04}\) & 1.46\({}^{+0.04}_{-0.06}\) \\ \(R_{\rm bb}\) & (km) & 12.1\(\pm\)5.0 & 12.2\(\pm\)5.0 & 16.0\(\pm\)6.0 & 6.0\(\pm\)2.0 \\ \(kT_{\rm disc}\) & (keV) & 0.80\({}^{+0.03}_{-0.03}\) & 0.57\(\pm\)0.02 & 0.73\({}^{+0.03}_{-0.02}\) & 0.811\({}^{+0.010}_{-0.012}\) \\ \(R_{\rm disc}\) & (km) & 18\({}^{+7}_{-6}\) & 35\({}^{+14}_{-16}\) & 22\({}^{+6}_{-7}\) & 16\(\pm\)3.0 \\ \(E_{\rm diskline}\) & (keV) & 6.62 \(\pm\) 0.06 & (6.6) & 6.53\({}^{+0.16}_{-0.08}\) & 6.6\(\pm\)0.1 \\ \(R_{\rm in}\) & (km) & (20) & (20) & \(<\)35 & \(<\)18 \\ \(K_{\rm diskline}\) & (\(\times 10^{-3}\)) & 1.6\(\pm\)0.4 & (1.8) & 1.9\({}^{+0.4}_{-0.6}\) & 0.8\(\pm\)0.2 \\ \(F_{\rm bol}\) & (\(\times 10^{-8}\,\rm erg\,cm^{-2}\,s^{-1}\)) & 1.21\(\pm\)0.10 & 1.20\(\pm\)0.15 & 1.34\(\pm\)0.15 & 0.70\(\pm\)0.08 \\ \(\chi^{2}_{\nu}\) & (d.o.f.) & 1.02(377) & 1.13(489) & 0.75(390) & 0.93(374) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of the spectral analysis of our 4 broadband spectra (3 _NICER_ +_NuSTAR_ and 1 _AstroSat_). Quoted errors reflect 90% confidence level. The parameters that were kept frozen during the fits are reported between round parentheses. The reported flux values correspond to the 0.01–100 keV energy range. The values of \(R_{\rm disc}\) have been calculated assuming a distance of 7.6 kpc (Kuulkers et al., 2003) and an inclination of 45\({}^{\circ}\)(Anderson et al., 1997).
0, balancing sensitivity and resolution, providing angular resolutions of \(\sim\)a few arcseconds.
### Results
The flux density, \(S_{\nu}\) at frequency \(\nu\), of the point source was measured by fitting an elliptical Gaussian with full width at half maximum (FWHM) set by the synthesised beam shape. Errors on the absolute flux density scale include conservative systematic uncertainties of \(4\%\) for the 5.5/9 GHz ATCA data (e.g., Murphy et al., 2010), which were added in quadrature with the root mean square (RMS) of the image noise. The radio luminosity, \(L_{\rm R}\), was calculated as \(L_{\rm R}=4\pi S_{\nu}\nu D^{2}\), where \(\nu\) is the observing frequency and \(D\) is the distance to the source.
Footnote 8: [https://www.atnf.csiro.au/observers/memos/d96783-1.pdf](https://www.atnf.csiro.au/observers/memos/d96783-1.pdf)
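For illustration, the flux-density-to-luminosity conversion described above can be written as the short sketch below (our own; the function name and inputs are placeholders, the 4% systematic uncertainty is added in quadrature with the image RMS as stated, and the 7.6 kpc distance quoted in the caption of Table 2 is assumed).

```python
import numpy as np

KPC_CM = 3.086e21          # cm per kpc
JY_CGS = 1.0e-23           # erg s^-1 cm^-2 Hz^-1 per Jy

def radio_luminosity(s_mujy, rms_mujy, nu_ghz, d_kpc=7.6, sys_frac=0.04):
    """Return (L_R, dL_R) in erg/s from a flux density given in microJy."""
    err_mujy = np.hypot(sys_frac * s_mujy, rms_mujy)      # systematics (+) image noise
    s_cgs = s_mujy * 1e-6 * JY_CGS
    d_cm = d_kpc * KPC_CM
    lr = 4.0 * np.pi * s_cgs * (nu_ghz * 1e9) * d_cm**2   # L_R = 4 pi S_nu nu D^2
    return lr, lr * err_mujy / s_mujy
```

For the \(\sim\)500 \(\mu\)Jy measured at 5.5 GHz in epoch C this gives \(L_{\rm R}\) of order \(10^{29}\) erg s\(^{-1}\).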
According to our measurements, a significant radio enhancement can be appreciated during the low X-ray mode (compared to the high mode). In particular, the radio flux density increases from 70-110 \(\mu\)Jy (in epochs A and B) to \(\sim\)500 \(\mu\)Jy in epoch C. This is the brightest radio flux density recorded for 4U 1820-30 at these frequencies (see Migliari et al., 2004; Diaz Trigo et al., 2017; Russell et al., 2021, for previous records of the radio flux density of the source).
We then estimated the radio spectral index \(\alpha\), where \(S_{\nu}\propto\nu^{\alpha}\), in order to explore the properties of the outflow in each of the three observations. A flat/inverted radio spectrum, where \(\alpha\gtrsim 0\), is associated with persistent synchrotron emission from an optically-thick, self-absorbed compact jet, typically observed in XRBs during their hard states. On the other hand, steep radio spectra, where \(\alpha\approx-0.7\), are observed from discrete, optically-thin knots of synchrotron emitting plasma that are launched from the system, often referred to as transient jets. Transient jets are detected as the source transitions from the hard to soft state (e.g. Fender, 2001).
For 4U 1820-30 the radio spectral shape was found to be flat during epoch C (\(\alpha\sim\)-0.2), steep for epoch B (\(\alpha\sim\)-0.9), and unconstrained for epoch A, being consistent with flat, inverted or steep. This implies a dramatic evolution of the jet properties between high and low modes, as already reported by Russell et al. (2021). We will compare this intriguing trend with the accretion flow evolution tracked by the X-rays data analysis in Section 6.2.
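As a simple consistency check, a two-point estimate of \(\alpha\) between the two ATCA bands (only an approximation to the indices actually fitted and reported in Table 4) can be obtained as follows; the snippet is ours and uses the epoch C flux densities as input.

```python
import numpy as np

def spectral_index(s1, e1, s2, e2, nu1=5.5, nu2=9.0):
    """Two-point spectral index alpha (S_nu ~ nu^alpha) and its propagated error."""
    alpha = np.log(s2 / s1) / np.log(nu2 / nu1)
    err = np.hypot(e1 / s1, e2 / s2) / np.log(nu2 / nu1)
    return alpha, err

# Epoch C (2022 July 10): 500 +/- 23 uJy at 5.5 GHz, 450 +/- 25 uJy at 9 GHz
print(spectral_index(500, 23, 450, 25))   # ~ (-0.2, 0.15), close to the fitted value
```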
## 6 Discussion
4U 1820-30 provides an almost unique opportunity to observe super-orbital modulations of the accretion rate over relatively short time-scales in an X-ray binary. The spectral evolution of the source during several accretion cycles has been studied in detail by Titarchuk et al. (2013) using the Rossi X-ray Timing Explorer, at energies higher than 3 keV. In the work presented here, thanks to the excellent effective area of _NICER_ in the soft X-ray band, we are able to investigate the system's behaviour during one super-orbital modulation in the soft X-ray band for the first time. In addition, we see how the radio emission, and, therefore, the jet, evolved in response to these changes in the X-ray spectra.
### The origin of the fluctuation in the Comptonisation component and the accretion flow geometry
In this paper we showed how the X-ray spectral-timing parameters evolve as the system bounces between high and low modes. According to the traditional classification scheme based on the colour-colour diagram, 4U 1820-30 lingered in the banana state for the whole considered period, moving back and forth between the upper and the lower banana branch. Such a trend seems to reflect an evolution in the flux rather than in the hardness value, whose relative stability is consistent with the lack of a spectral transition. We found that a model composed of a disc blackbody, a Comptonisation component and a Fe K line was sufficient to describe the continuum in all the considered spectra. Moving along the super-orbital modulation, however, these two components behave in a rather distinct way. The disc parameters stay more or less constant, in terms of temperature (about 0.7-0.8 keV), inner radius (20-30 km) and normalisation. On the contrary, the Comptonisation component displays a substantial evolution mostly
Figure 5: Radio emission from 4U 1820-30 during epochs A, B and C (see Tab. 4 for more details). Best-fit models for the spectra of each epoch are shown. Radio spectral indices \(\alpha\) are reported on the plots as well. A dramatic change in radio flux can be appreciated between epoch C (in low mode) and epochs A and B (high mode). In addition, epoch B also shows a steeper spectrum.
\begin{table}
\begin{tabular}{l c l c l c} \hline \hline \multicolumn{6}{c}{Fractional RMS variability (\%)} \\ \hline \multicolumn{2}{c}{_NuSTAR_ (3-25 keV)} & \multicolumn{2}{c}{_NICER_ (0.5-10 keV)} & \multicolumn{2}{c}{_AstroSat_/LAXPC (3-25 keV)} \\ \hline Nu01 & 8.5\(\pm\)0.6 & Ni02 & 3.4\(\pm\)1.0 & As01 & 5.0\(\pm\)0.5 \\ Nu02 & 5.0\(\pm\)0.9 & Ni06 & 2.8\(\pm\)1.7 & & \\ Nu03 & 13.9\(\pm\)1.0 & Ni13 & 2.6\(\pm\)1.3 & & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Fractional RMS variability for the _NICER_, _NuSTAR_ and _AstroSat_ observations used in the broadband spectral analysis, computed for the frequency range 0.1-100 Hz. All reported errors and upper limits correspond to a confidence level of 3\(\sigma\).
in its normalisation, i.e., an almost 50% reduction in the 0.5-10 keV flux going from high to low mode, compared to the less than 10% drop in the disc component. Such a trend had already been spotted by Titarchuk et al. (2013) using RXTE data, although with a model slightly different from ours. A closer inspection of the obtained best-fit parameters reveals that such an evolution is driven mostly by the normalisation, and thereby the physical size, of the source of the seed photons. Such a source has been described with a blackbody component and associated with the innermost part of the accretion flow. In the high mode, seed photons seem to be radiated by a larger region than in the low mode, i.e., the size going from about 15 km in spectra A-B to about a third of this value in spectrum C. A change can also be observed in the electron temperature of the corona, which seems slightly anti-correlated with the flux, i.e., going from 3 keV (high mode) to about 5 keV (low mode). While a clear trend is visible in the energy spectra, the PDS show less variability. In particular, the fractional RMS is stable at about 2-3.5% in all _NICER_ observations. On the other hand, an increase in the power subtended by the PDS, and thereby in the RMS, can be spotted in the _NuSTAR_ PDS going from A-B (RMS\(\sim\)5-8%) to C (RMS\(\sim\)14%), i.e. with C being significantly more variable. Such a trend is compatible with the increase in temperature of the corona in spectrum C, as hotter coronae are expected to drive stronger X-ray fluctuations.
We can piece together all these observed spectral-timing properties into a single interpretation scheme making the following points: (i) in high mode, the mass-accretion rate increases as a consequence of the accretion cycle characterising the system. (ii) The energy fuelled by accretion is dissipated by a \(\sim\)15 km region in the BL, resulting in more photons and thereby a stronger cooling of the corona. (iii) In low mode, while \(\dot{M}\), and therefore the energy supply to the innermost regions of the system, drops, only a smaller, i.e. \(\lesssim\)10 km, region of the accretion flow remains hot enough to cool the corona by charging it with photons; this region could be a hot spot on the NS surface or a fraction of the BL. Alternatively, the shrinking of the region that provides seed photons for Comptonisation could be explained by invoking a spreading layer covering a larger fraction of the NS surface in high modes than in low modes, once again in response to a changing \(\dot{m}\) in the two regimes. (iv) With fewer photons injected, the corona cools down less and stays hotter, driving more power in the X-ray PDS.
Recently, the new X-ray spectro-polarimetry mission _Imaging X-ray Polarimetry Explorer (IXPE)_(Weisskopf et al., 2022) has opened a new avenue to study the accretion flows in XRBs. An _IXPE_ observational campaign has been performed between 2022 and 2023 on 4U 1820-30 (Di Marco et al., 2023). The source was found in high mode in all those observations. The model used by these authors to analyse the broadband X-rays spectra, consisting of a disc blackbody, a Comptonisation spectrum and a gaussian component for the Fe K line, is consistent with ours. In addition, quasi-simultaneous radio observations (with ATCA ) revealed a steep radio spectrum as in our epoch B. These results are consistent with compact jet quenching or transient ejecta during the high modes of 4U 1820-30. In addition, these authors measured a significant (\(\sim\)10%) polarisation degree beyond 7 keV along the direction perpendicular to the disc and interpreted these results as possibly due to a mildly relativistic outflow or to reflection. Further studies are necessary to investigate whether such an outflow can be connected with the ejecta responsible for the radio emission in the same state.
### A boundary layer-jet (anti-)coupling?
In the radio band, which is dominated by the jet emission, the source shows dramatic changes between our observations. In the low mode (epoch C), its radio flux density is enhanced by a factor of 5 with respect to the high mode (epochs A and B) and its spectral shape also seems to change. The mode dependency of the jet in 4U 1820-30 has already been reported by Russell et al. (2021), using new and also archival radio data (i.e., from Diaz Trigo et al., 2017; Panurach et al., 2021), and recently confirmed by Di Marco et al. (2023).
In this work, for the first time, we can compare the jet evolution with the simultaneous X-ray spectral-timing properties of the source. In Section 6.1 we have shown that the X-ray evolution during a super-orbital cycle in 4U 1820-30 is connected to fluctuations in the boundary layer emission, going from bright to faint in the high to low mode transitions. Such a behaviour produces oscillations in the X-ray light curves that, however, seem not to be accompanied by transitions between hard and soft states. Indeed, during all of our observations, the source never departs from the vertical "banana" track, as displayed in Fig. 2. Although slight increases in \(kT_{\rm e}\) and in the hard X-ray RMS seem to indicate that in epoch C a "micro-transition" towards harder states might be ongoing, the spectral and timing properties are not compatible with a full hard state either. Indeed, when 4U 1820-30 has been observed in the hard / "island" state, it showed remarkably higher \(kT_{\rm e}\) (up to 20-30 keV, e.g., Titarchuk et al., 2013) and hard X-ray RMS (up to 20% and above, e.g., Munoz-Darias et al., 2014). Since the observed jet evolution is not occurring in tandem with a soft-to-hard transition, the behaviour shown by 4U 1820-30 seems markedly distinct from what is typically observed in BH and several NS LMXBs (e.g., Migliari & Fender, 2006; Miller-Jones et al., 2010; Rhodes et al., 2022; Fijma et al., 2023). What then determines the evolution in the jet? A link with the boundary layer (which was also suggested by Russell et al., 2021, for the same source) seems plausible, as it drives the entire evolution of the accretion flow that we observe in our X-ray observations of 4U 1820-30. According to several accretion/ejection models (e.g. Blandford & Payne, 1982; Marcel et al., 2018), jet launching requires the presence of a hot, thin and
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multicolumn{5}{c}{ATCA observations} \\ \hline Start date & End date & 5.5 GHz Flux density & 9 GHz Flux density & \(\alpha\) \\ (UTC) & (UTC) & (\(\mu\)Jy) & (\(\mu\)Jy) & \\ \hline
2022-04-15T18:18:40 & 2022-04-16T00:22:00 & 76\(\pm\)16 & 74\(\pm\)16 & -0.1\(\pm\)0.7 \\
2022-05-28T13:52:50 & 2022-05-28T19:19:50 & 110\(\pm\)13 & 74\(\pm\)11 & -0.9\(\pm\)0.4 \\
2022-07-10T12:43:30 & 2022-07-10T18:01:40 & 500\(\pm\)23 & 450\(\pm\)25 & -0.22\(\pm\)0.15 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results from our ATCA radio observations. A bandwidth of 2 GHz is associated with each frequency band. Flux density errors include systematic uncertainties. Radio spectral indices, provided as \(\alpha\), i.e., where \(S_{\nu}\propto\nu^{\alpha}\), are also reported.
extended corona (however, see Mendez et al. 2022, for observational evidence of an anti-correlation between radio bright jets and hot coronae in the BH XRB GRS 1915+105). The temperature and the geometrical thickness of the corona change as a consequence of the interaction with the photons coming from the disc or, in NS systems, from the BL and the NS surface itself. During the high modes, the boundary layer is bright and extended and irradiates the corona more than during the low modes, making it colder and thicker. On the other hand, during low modes, the corona is charged with fewer photons and can therefore expand and heat up, an ideal condition to support robust matter ejections. This is also witnessed by the increment in \(kT_{\rm e}\) and RMS variability (in _NuSTAR_) going from A/B to C (see Tab. 2 and 3). Furthermore, it is noteworthy that according to the Internal Shocks Model (Malzac 2013, 2014), an increase in X-ray variability would indeed produce brighter jets. Indeed, with more X-ray power we expect larger variability in the velocity with which ejecta in the jet are launched and subsequently more energy dissipated in the shocks between shells moving at different velocities. The proposed scenario is sketched in Fig. 6. Ultimately, in this source, the BL seems to take the main role in regulating the jet properties, to such an extent that we should talk about BL-jet coupling instead of disc-jet coupling, a term commonly used in the context of LMXBs. An intriguing implication of this scenario is that jet quenching can occur beyond a certain X-ray luminosity but within the same spectral state. Disentangling jet suppression from spectral state transition could be key to explain the proposed presence of compact jets even after the transition to the soft state in a few NS LMXBs (Migliari et al. 2004, 2011). It is also noteworthy that a BL-jet coupling has already been proposed for a system very different from 4U 1820-30, i.e., the dwarf nova SS Cyg, where jet launching was found to be possibly connected with the formation of the boundary layer (Russell et al. 2016). Unfortunately, the radio-X-ray observational campaign presented in this paper is one of the very few available multi-wavelength data sets of a NS LMXB followed through different regimes and/or spectral states. New radio-X-ray observational campaigns, dense enough to investigate how accretion and ejection evolve over time, have to be performed in the future in order to confirm the existence of such a BL-jet coupling in other NS LMXBs.
Based on the results presented in this paper, we can speculate that BLs, a rather common ingredient in NS LMXBs, may play a role in regulating jet launching in several different classes, e.g., in Z-sources and/or in bright, persistent atolls. All of these sources exhibit rather soft spectra, with contributions from both the disc and the BL (see, e.g. D'Ai et al. 2010; Mazzola et al. 2021; Saavedra et al. 2023). In addition, Z-sources continuously traverse the three branches in their colour-colour diagrams, showing drastic changes in both the radio emission and the X-ray variability, but without traditional hard-to-soft transitions (Penninx et al. 1988; Migliari & Fender 2006; Soleri et al. 2009). A similar behaviour has been exhibited also by the bright atoll GX 13+1 (Homan et al. 2004). Finally, jet-related radio emission from the persistent atoll Ser X-1 has been reported while the source was in a soft state (Migliari et al. 2004).
On the other hand, we do not expect BLs to form in other classes of NS LMXBs, namely the sources where the disc is truncated very far away from the NS surface. This could be the case for faint atolls typically found in rather hard states, such as Accreting Millisecond X-ray Pulsars (AMXPs; Di Salvo & Sanna 2020). In these sources, the magnetospheric pressure could disrupt the accretion flow in proximity of the compact object, inhibiting the formation of a BL (Degenaar et al. 2017; Bult et al. 2021; Marino et al. 2022). Of note is that AMXPs can be remarkably radio bright (Russell et al. 2018), sometimes as much as BH LMXBs at the same X-ray luminosity, where BLs cannot form either. The role of the BL, an extra ingredient that may not be present in all NS LMXBs, could then be crucial to explain, e.g., why NS LMXBs are typically radio fainter than BH LMXBs, why compact jets could still survive in NS LMXBs during low luminosity soft states and what produces the large scatter in radio luminosity within the NS LMXB population (e.g., Tetarenko et al. 2016).
## 7 Conclusions
In this paper, we have presented the results of the 2022 multi-wavelength campaign on the ultra-compact X-ray binary 4U 1820-30, using X-ray (_NICER_, _NuSTAR_ and _AstroSat_) and radio (ATCA) observatories. Throughout this campaign, we were able to follow how the spectral/timing behaviour of the accretion flow and the jet properties evolve in tandem, something rarely seen in NS XRBs. During the surveyed period, the source went through its typical low and high modes, showing an X-ray flux oscillation amplitude of about 40%. Despite such a strong variability, the source lingered in the banana state and did not show dramatic spectral changes throughout this cycle. A model including a Comptonisation spectrum (with seed photons provided by a boundary layer), the disc multi-colour blackbody emission and a Fe K line worked well in describing the broadband _NICER_+_NuSTAR_ and _AstroSat_ spectra and the single _NICER_ observations as well. We showed that among these components, the Comptonisation spectrum shows the most dramatic variability, while the disc component is relatively stable. The results of our analysis suggest that modulations in the amplitude of the Comptonisation spectral component are indeed driving such variability, following the very same pattern observed in the X-ray light curve. These modulations suggest that the energy flow comes from a more extended region in the high mode, i.e., the boundary layer, and from a more confined region in the low mode, i.e., the NS surface. Furthermore, the hot corona also changes in response to this trend, becoming colder in the high mode as more radiation is pumped into it by the innermost regions of the accretion flow. A correlation between the corona temperature and the values found for the _NuSTAR_ RMS variability can also be noted. Meanwhile, the jet evolved dramatically in the three observations
Figure 6: A simplified scheme for our interpretation of the X-rays and radio data. In particular, we show here how the accretion flow and the jet change going from high to low modes.
presented here, brightening considerably during the low mode, coupled with a change in the radio spectrum. We propose that the jet is responding to the changes in the corona, becoming respectively stronger (weaker) as the corona becomes hotter (colder). Ultimately, we suggest that the jet evolution is then driven by the radiation emitted from the BL in an anti-correlation pattern where the jet switches on as the BL switches off. Such a BL-jet (anti-)coupling could be acting also in other NS LMXBs and might in part explain the complex phenomenology of matter ejection in accreting NSs. In order to confirm the above scenario and pinpoint the exact moment where the jet properties evolve, a more detailed look at the radio evolution during this cycle is required (Russell et al., in preparation).
## Acknowledgements
We thank the anonymous referee for their helpful comments. AMarino, FCZ and NR are supported by the H2020 ERC Consolidator Grant "MAGNESIA" under grant agreement No. 817661 (PI: Rea) and National Spanish grant PGC2018-095512-BI00. This work was also partially supported by the program Unidad de Excelencia Maria de Maeztu (CEX2020-001058-M, and by the PHAROS COST Action (No. CA16214). MDS and TDR acknowledge support from the INAF grant "ACE-BANANA". AB is grateful to the Royal Society, United Kingdom. She is supported by an INSPIRE Faculty grant (DST/INSPIRE/04/2018/001265) by the Department of Science and Technology, Govt. of India. A.B. also acknowledges the financial support of ISRO under the AstroSat Archival Data Utilisation Programme (No.DS-2B-13013(2)/4/2019-Sec. 2). TDS and AS acknowledge financial support from PRIN-INAF 2019 with the project "Probing the geometry of accretion: from theory to observations" (PI: Belloni). FCZ is supported by a Ramon y Cajal fellowship (grant agreement RYC2021-030888-I). EA acknowledges funding from the Italian Space Agency, contract ASI/INAF. n_I/004/11/4. FC acknowledges support from the Royal Society through the Newton International Fellowship programme (NIF/R1/211296). JvL acknowledges a Warwick Astrophysics prize post-doctoral fellowship made possible thanks to a generous philanthropic donation, and was supported by a Lee Hysan Junior Research Fellowship awarded by St. Hilda's College, Oxford, during part of this work. We thank Jamie Stevens and ATCA staff for making the radio observations possible. ATCA is part of the Australia Telescope National Facility ([https://ror.org/05qajv042](https://ror.org/05qajv042)) which is funded by the Australian Government for operation as a National Facility managed by CSIRO. We acknowledge the Wiradjuri people as the Traditional Owners of the ATCA observatory site. NICER is a 0.2-12 keV X-ray telescope operating on the International Space Station, funded by NASA. NuSTAR is a project led by the California Institute of Technology, managed by the Jet Propulsion Laboratory and funded by NASA. This publication uses the data from the AstroSat mission of the Indian Space Research Organisation (ISRO), archived at the Indian Space Science Data Centre (ISSDC).
## Data Availability
The X-rays data utilised in this article are publicly available at [https://heasarc.gsfc.nasa.gov/cgi-bin/W3browse/w3browse.pl](https://heasarc.gsfc.nasa.gov/cgi-bin/W3browse/w3browse.pl), while the analysis products and the ATCA data will be shared on reasonable request to the corresponding author.
|
2309.16545 | Bounding mean orders of sub-$k$-trees of $k$-trees | For a $k$-tree $T$, we prove that the maximum local mean order is attained in
a $k$-clique of degree $1$ and that it is not more than twice the global mean
order. We also bound the global mean order if $T$ has no $k$-cliques of degree
$2$ and prove that for large order, the $k$-star attains the minimum global
mean order. These results solve the remaining problems of Stephens and
Oellermann [J. Graph Theory 88 (2018), 61-79] concerning the mean order of
sub-$k$-trees of $k$-trees. | Stijn Cambie, Bradley McCoy, Stephan Wagner, Corrine Yap | 2023-09-28T15:56:14Z | http://arxiv.org/abs/2309.16545v1 | # Bounding mean orders of sub-\(k\)-trees of \(k\)-trees
###### Abstract
For a \(k\)-tree \(T\), we prove that the maximum local mean order is attained in a \(k\)-clique of degree \(1\) and that it is not more than twice the global mean order. We also bound the global mean order if \(T\) has no \(k\)-cliques of degree \(2\) and prove that for large order, the \(k\)-star attains the minimum global mean order. These results solve the remaining problems of Stephens and Oellermann [J. Graph Theory 88 (2018), 61-79] concerning the mean order of sub-\(k\)-trees of \(k\)-trees.
Footnote †: Keywords: trees, \(k\)-trees, average subtree order
## 1 Introduction
In [10] and [11] Jamison considered the mean number of nodes in subtrees of a given tree. He showed that for trees of order \(n\), the average number of nodes in a subtree of \(T\) is at least \((n+2)/3\), with this minimum achieved if and only if \(T\) is a path. He also showed that the average number of nodes in a subtree containing a root is at least \((n+1)/2\) and always exceeds the average over all unrooted subtrees. The mean subtree order in trees was further investigated, e.g. in [3, 7, 14, 20, 22], as well as extensions to arbitrary graphs [2, 4, 5, 6] and the mean order of the connected induced subgraphs of a graph [8, 9, 17, 18, 19].
In [16], Stephens and Oellermann extended the study to \(k\)-trees and families of sub-\(k\)-trees. A _\(k\)-tree_ is a generalization of a tree that has the following recursive construction.
**Definition 1** (\(k\)-tree).: _Let \(k\) be a fixed positive integer._
1. _The complete graph_ \(K_{k}\) _is a_ \(k\)_-tree._
2. _If_ \(T\) _is a_ \(k\)_-tree, then so is the graph obtained from_ \(T\) _by joining a new vertex to all vertices of some_ \(k\)_-clique of_ \(T\)_._
3. _There are no other_ \(k\)_-trees._
Note that for \(k=1\) we have the standard recursive construction of trees. A _sub-\(k\)-tree_ of a \(k\)-tree \(T\) is a subgraph that is itself a \(k\)-tree. Let \(S(T)\) denote the collection of all sub-\(k\)-trees of \(T\) and let \(N(T):=|S(T)|\) be the number of sub-\(k\)-trees. We denote by \(R(T)=\sum_{X\in S(T)}|X|\) the total number of vertices in all sub-\(k\)-trees (where for a graph \(G\) the notation \(|G|\) is used throughout to mean the number of vertices in \(G\)). The _global mean (sub-\(k\)-tree) order_ is
\[\mu(T)=\frac{R(T)}{N(T)}.\]
For an arbitrary \(k\)-clique \(C\) of \(T\), let \(S(T;C)\) denote the collection of sub-\(k\)-trees containing \(C\) and let \(N(T;C):=|S(T;C)|\). The _local clique number_ is \(R(T;C)=\sum_{X\in S(T;C)}|X|\) and the _local mean (sub-\(k\)-tree) order_ is
\[\mu(T;C)=\frac{R(T;C)}{N(T;C)}.\]
The _degree_ of \(C\) is the number of \((k+1)\)-cliques that contain \(C\).
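For small instances these quantities can be evaluated by brute force. The sketch below is our own illustration of the definitions (it is not taken from the literature): the graph is stored as a dictionary of adjacency sets, a vertex subset is accepted if it induces a \(k\)-tree, and \(\mu(T)\) and \(\mu(T;C)\) are then simple averages.

```python
from itertools import combinations

def is_clique(adj, verts):
    return all(v in adj[u] for u, v in combinations(verts, 2))

def is_k_tree(adj, verts, k):
    """Induced subgraph on `verts` is a k-tree: a K_k, or obtained by repeatedly
    removing a simplicial vertex of degree k (a k-leaf)."""
    verts = set(verts)
    if len(verts) < k:
        return False
    if len(verts) == k:
        return is_clique(adj, verts)
    for v in verts:
        nbrs = adj[v] & verts
        if len(nbrs) == k and is_clique(adj, nbrs):
            return is_k_tree(adj, verts - {v}, k)
    return False

def mean_orders(adj, k, C):
    """Return the global mean order mu(T) and the local mean order mu(T;C)."""
    V = list(adj)
    subs = [set(s) for r in range(k, len(V) + 1)
            for s in combinations(V, r) if is_k_tree(adj, s, k)]
    mu = sum(len(s) for s in subs) / len(subs)
    loc = [s for s in subs if set(C) <= s]
    return mu, sum(len(s) for s in loc) / len(loc)

# Path-type 2-tree of order 4 (K_4 minus the edge {0,3}):
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
print(mean_orders(adj, 2, C={1, 2}))   # (2.5, 3.0)
```

For this path-type \(2\)-tree of order \(4\) the computation returns \(\mu(T)=2.5\) and \(\mu(T;C)=3\) for the middle \(2\)-clique \(C=\{1,2\}\).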
Stephens and Oellermann concluded their study of the mean order of sub-\(k\)-trees of \(k\)-trees with several open questions. Our main contribution here is to answer three of them.
It was conjectured by Jamison in [10] and proven by Vince and Wang in [20] that for trees of order \(n\) without vertices of degree \(2\)--called _series-reduced_ trees--the global mean subtree order is between \(\frac{n}{2}\) and \(\frac{3n}{4}\). For \(k\)-trees we provide similar asymptotically sharp bounds, answering [16, Problem 6].
**Theorem 2**.: _For every \(k\)-tree \(T\) without \(k\)-cliques of degree \(2,\) the global mean sub-\(k\)-tree order satisfies_
\[\frac{n+k}{2}-o_{n}(1)<\mu(T)<\frac{3n+k-3}{4}.\]
_These bounds are asymptotically sharp. In particular, for large \(k\), \(\frac{3n}{4}\) is not an upper bound._
_For large \(n\), the \(k\)-star is the unique extremal \(k\)-tree for the lower bound._
Wagner and Wang proved in [21] that the maximum local mean subtree order occurs at a leaf or a vertex of degree \(2\). We prove an analogous result for \(k\)-trees, answering [16, Problem 4]. In contrast to the result for trees, it turns out that for \(k\geq 2\), the maximum can only occur at a \(k\)-clique of degree \(1\), unless \(T\) is a \(k\)-tree of order \(k+2\).
**Theorem 3**.: _Suppose that \(k\geq 2\). For a \(k\)-tree \(T\) of order \(n\neq k+2\), if a \(k\)-clique \(C\) maximizes \(\mu(T;C)\), then \(C\) must be a \(k\)-clique of degree \(1\). For \(n=k+2,\) every \(k\)-clique \(C\) satisfies \(\mu(T;C)=k+1\)._
Lastly, Jamison [10] conjectured that for a given tree \(T\) and any vertex \(v\), the local mean order is at most twice the global mean order of all subtrees in \(T\). Wagner and Wang [21] proved that this is true. Answering [16, Problem 2] affirmatively, we show
**Theorem 4**.: _The local mean order of the sub-\(k\)-trees containing a fixed \(k\)-clique \(C\) is less than twice the global mean order of all sub-\(k\)-trees of \(T\)._
### Related Results
A total of six questions were posed in [16]. Problems 1 and 3 were solved by Luo and Xu [13]. Regarding the first problem, Jamison [10] showed that for any tree \(T\) and any vertex \(v\) of \(T\), the local mean order of subtrees containing \(v\) is an upper bound on the global mean order of subtrees of \(T\). Stephens and Oellermann asked about a generalization to \(k\)-trees, to which Luo and Xu showed:
**Theorem 5** ([13]).: _For any \(k\)-tree \(T\) of order \(n\) with a \(k\)-clique \(C\), we have \(\mu(T;C)\geq\mu(T)\) with equality if and only if \(T\cong K_{k}\)._
For the third problem, it was shown in [10] that paths have the smallest global mean subtree order. For \(k\)-trees we have:
**Theorem 6** ([13]).: _For any \(k\)-tree \(T\) of order \(n\), we have \(\mu(T)\geq\frac{\binom{n-k+2}{3}}{\binom{n-k+1}{2}+(n-k)k+1}+k\) with equality if and only if \(T\) is a path-type \(k\)-tree._
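As a concrete check of the bound, consider the path-type \(2\)-tree of order \(4\), i.e., \(K_{4}\) minus an edge: its sub-\(2\)-trees are the five edges, the two triangles and the whole graph, so \(\mu(T)=\frac{5\cdot 2+2\cdot 3+4}{8}=2.5\), in agreement with \(\frac{\binom{4}{3}}{\binom{3}{2}+4+1}+2=\frac{4}{8}+2=2.5\), as the equality case for path-type \(k\)-trees predicts.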
Very recently, Li, Ma, Dong, and Jin [12] gave a partial proof of Theorem 3, showing that the maximum local mean order always occurs at a \(k\)-clique of degree \(1\) or \(2\), thus also solving [16, Problem 4]. They did this by combining the fact that the \(1\)-characteristic trees of adjacent \(k\)-cliques can be obtained from each other by a partial Kelmans operation [12, Lem. 4.6], an inequality between local orders of neighboring vertices after performing a partial Kelmans operation [12, Thm. 3.3], as well as [16, Lem. 11] and [21, Thm. 3.2]. Theorem 3 also solves Problem 5.4 from [12], asking whether the maximum local mean order can ever occur at a \(k\)-clique of degree \(2\), but not at a \(k\)-clique of degree \(1\).
**Outline:** In Section 2, we go over definitions and notation. Theorems 2, 3, 4 are proven in Sections 3, 4, 5 respectively. We additionally address [16, Problem 5], which is a more general question asking what one can say about the local mean order of sub-\(k\)-trees containing a fixed \(r\)-clique for \(1\leq r\leq k\). We give a possible direction and partial results in the concluding section.
## 2 Notation and Definitions
The global mean sub-\(k\)-tree order \(\mu(T)\) and the local mean sub-\(k\)-tree order \(\mu(T;C)\) are defined in the introduction. The local mean order always counts the \(k\) vertices from \(C\) and it sometimes will be more convenient to work with the average number of additional vertices in a (uniform random) sub-\(k\)-tree of \(T\) containing \(C\), in which case we use the notation \(\mu^{\bullet}(T;C)=\mu(T;C)-k\). Moreover, the number of sub-\(k\)-trees not containing \(C\) will be denoted by \(\overline{N}(T;C)=N(T)-N(T;C)\) and the total number of vertices in the sub-\(k\)-trees that do not contain \(C\) will be denoted by \(\overline{R}(T;C)=R(T)-R(T;C)\).
A _\(k\)-leaf_ or _simplicial vertex_ is a vertex belonging to exactly one \((k+1)\)-clique of \(T\), i.e., a vertex of degree \(k\). A _simplicial \(k\)-clique_ is a \(k\)-clique containing a \(k\)-leaf. Note that a \(k\)-clique of degree \(1\) is not necessarily simplicial. A _major \(k\)-clique_ is a \(k\)-clique with degree at least \(3\). Two \(k\)-cliques are _adjacent_ if they share a \((k-1)\)-clique.
The _stem_ of a \(k\)-tree \(T\) is the \(k\)-tree obtained by deleting all \(k\)-leaves from \(T\).
Subclasses of trees generalize to subclasses of \(k\)-trees. We will reference two in particular: paths generalize to _path-type \(k\)-trees_ that are either isomorphic to \(K_{k}\) or \(K_{k+1}\) or have precisely two \(k\)-leaves. Note that for every \(n\geq k+3\) and \(k\geq 3\), there are multiple non-isomorphic path-type \(k\)-trees of order \(n\).
A _\(k\)-star_ is either \(K_{k}\) or \(K_{k+1}\), or it is the unique \(k\)-tree with \(n-k\) simplicial vertices when \(n\geq k+2\).
Furthermore, the combination of a \(k\)-star and a \(k\)-path is called a \(k\)-broom: take a \(k\)-path of a certain length and add some simplicial vertices to a simplicial \(k\)-clique of the \(k\)-path.
For a given \(k\)-clique \(C\) in \(T\), it is often useful to decompose \(T\) into \(C\) and the sub-\(k\)-trees that result from deleting \(C\). Let \(B_{1},B_{2},\ldots,B_{d}\) be the \((k+1)\)-cliques that contain \(C\), and let \(v_{i}\) be the vertex of \(B_{i}\setminus C\). Moreover, let \(C_{i,1},\ldots,C_{i,k}\) be the \(k\)-subcliques of \(B_{i}\) other than \(C\). The \(k\)-tree \(T\) can be _decomposed_ into \(C\) and \(k\)-trees \(T_{1,1},\ldots,T_{d,k}\), rooted at \(C_{1,1},\ldots,C_{d,k}\) respectively, that are pairwise disjoint except for the vertices of the cliques \(C_{i,j}\).
In a \(1\)-tree, any two vertices are connected by a unique path. This fact generalizes to \(k\)-trees
Figure 1: Examples of a \(3\)-path, \(3\)-star and \(2\)-broom
through the construction of the \(1\)-characteristic tree of a \(k\)-tree \(T\)[13, 16]. For a \(k\)-clique \(C\) in \(T\), a _perfect elimination ordering of \(T\) to \(C\)_ is an ordering \(v_{1},v_{2},\ldots,v_{n-k}\) of its vertices different from \(V(C)=\{c_{1},\ldots,c_{k}\}\) such that each vertex \(v_{i}\) is simplicial in the \(k\)-tree spanned by \(C\) and \(v_{j},1\leq j\leq i\). In [16] it is shown that for any \(v\notin C\), there is a unique sequence of vertices that along with \(C\) induce a path-type \(k\)-tree \(P(C,v)\) and that form a perfect elimination ordering of \(P(C,v)\) to \(C\). It is also proven that \(T\) can be written as \(\bigcup P(C,v)\) where the union is taken over all \(k\)-leaves \(v\in V\). Each \(k\)-tree \(P(C,v)\) has an associated \(1\)-tree \(P^{\prime}(C,v)\) where the vertices consist of a single vertex representing the entire clique \(C\), along with the remaining non-\(C\) vertices of \(P(C,v)\). The edges are consecutive pairs from the perfect elimination ordering. Taking \(\bigcup P^{\prime}(C,v)\) over all \(k\)-leaves \(v\) gives us what is called the \(1\)_-characteristic tree_ of \(T\), which we will denote \(T^{\prime}_{C}\). See Figure 2 for an example.
## 3 Excluding \(k\)-cliques of degree \(2\)
In this section, we prove
**Theorem 2**.: _For every \(k\)-tree \(T\) without \(k\)-cliques of degree \(2,\) the global mean sub-\(k\)-tree order satisfies_
\[\frac{n+k}{2}-o_{n}(1)<\mu(T)<\frac{3n+k-3}{4}.\]
_These bounds are asymptotically sharp. In particular, for large \(k\), \(\frac{3n}{4}\) is not an upper bound. For large \(n\), the \(k\)-star is the unique extremal \(k\)-tree for the lower bound._
### The lower bound
Among series-reduced trees of order \(n\), for \(4\leq n\leq 8\) the star attains the maximum mean subtree order, but for \(n\geq 11\) it attains the minimum mean subtree order. It can be derived from [7, Lem. 12] and an adapted version of [7, Cor. 11] (with \(2\) replaced by any \(\varepsilon>0\), at the cost of replacing \(30\) by \(n_{\varepsilon}\)) that this is indeed the case for \(n\) sufficiently large. In [1], it is shown that for \(6\leq n\leq 10\), the series-reduced trees attaining the minimum mean subtree order are those presented in Figure 3, and for \(n\geq 11\), \(S_{n}\) is always the unique extremal graph.
Footnote 1: We thank John Haslegrave for this remark
In this subsection, we will prove that the above extremal statement generalizes to \(k\)-trees without a \(k\)-clique of degree \(2\).
Figure 3: Series-reduced trees with minimum mean subtree order for \(6\leq n\leq 10\)
Figure 2: A \(2\)-tree (left) with \(2\)-clique \(C=\{c_{1},c_{2}\}\) and the \(1\)-characteristic tree (right).
First, we prove the following lemma, which states that every \(k\)-tree contains a \((k+1)\)-clique \(C\) that plays the role of a centroid in a tree (a vertex or edge whose removal splits the tree into components of size at most \(\frac{n}{2}\)). Figure 4 is an example of a \(2\)-tree demonstrating why we must take a \((k+1)\)-clique and not a \(k\)-clique.
**Lemma 7**.: _Any \(k\)-tree \(T\) (of order \(n\geq k+1\)) has a \((k+1)\)-clique \(C\) such that the order of all components of \(T\setminus C\) is at most \(\left\lceil\frac{n-(k+1)}{2}\right\rceil\)._
Proof.: Suppose for contradiction that such a \((k+1)\)-clique does not exist. Consider the \((k+1)\)-clique \(C\) for which the largest component of \(T\setminus C\) has minimum order over all choices of \(C\). By assumption, the largest component of \(T\setminus C\) has order \(n_{0}>\left\lceil\frac{n-(k+1)}{2}\right\rceil\), and by the construction of a \(k\)-tree, there exist vertices \(u\in C\) and \(v\) in the largest component of \(T\setminus C\) such that \(v\cup(C\setminus u)\) forms a \((k+1)\)-clique \(C^{\prime}\). Now \(T\setminus C^{\prime}\) has components whose sizes are bounded by \(\max\left\{n-n_{0}-k,n_{0}-1\right\}<n_{0}\), which gives a contradiction. Hence \(C\) satisfies the stated property.
The following lemma and its proof are similar to [15, Lemma 5.1].
**Lemma 8**.: _For any \(k\)-tree \(T\) without \(k\)-cliques of degree \(2\), there is a \(k\)-clique \(C^{\prime}\) for which \(\mu(T;C^{\prime})-\mu(T)\leq\frac{n^{3}}{2^{(n-k)/4}}=o_{n}(1)\)._
Proof.: Take \(C\) as in Lemma 7. Let its vertices be \(\{u_{1},u_{2},\ldots,u_{k+1}\}\).
For every component of \(T\setminus C\), there is a unique vertex \(u_{i},1\leq i\leq k+1\), such that the component together with \(C\setminus u_{i}\) forms a \(k\)-tree. Now we consider two cases that are handled analogously.
**Case 1:** There is some \(u_{i}\) such that the union of components that form a \(k\)-tree when adding \(C^{\prime}=C\setminus u_{i}\) has order at least \(\frac{n-k}{2}\).
In this case, we consider the \(r\leq n-k\) components of \(T\setminus C^{\prime}\) and for every \(1\leq i\leq r\), we let \(T_{i}\) be the union of such a component and \(C^{\prime}\).
We then apply the following claim:
**Claim 9**.: _Let \(C^{\prime}\) be a \(k\)-clique in a \(k\)-tree \(T\) of order \(n\) with no \(k\)-cliques of degree \(2\). Then \(T\setminus C^{\prime}\) has at least \(\frac{n-k}{2}\) simplicial vertices._
Proof.: Consider the \(1\)-characteristic tree \(T^{\prime}_{C^{\prime}}\). For every \(u\in V\setminus C^{\prime}\), either all the \(k\)-cliques containing \(u\) have degree \(1\), in which case \(u\) has degree \(1\) in \(T^{\prime}_{C^{\prime}}\), or (at least) one of them has degree at least \(3\) and so does \(u\) in \(T^{\prime}_{C^{\prime}}\). Thus \(T^{\prime}_{C^{\prime}}\) is series-reduced and thus has at least \(\frac{n-k}{2}+1\) leaves. The latter implies that \(T\) has at least \(\frac{n-k}{2}\) simplicial vertices.
From the claim, there are at least \(\frac{n-k}{4}-1\) simplicial vertices not belonging to \(T_{i}\) for every \(1\leq i\leq r\). Observe that given any sub-\(k\)-tree of \(T\) containing \(C^{\prime}\), we can map it to its sub-\(k\)-tree
Figure 4: \(2\)-tree with a centroid \(3\)-clique but no centroid \(2\)-clique
intersection with \(T_{i}\). If two elements of \(S(T;C^{\prime})\) differ only in some subset of \(k\)-leaves of \(\bigcup_{j\neq i}T_{j}\), then they map to the same element of \(S(T_{i};C^{\prime})\). Thus, each element of \(S(T_{i};C^{\prime})\) is mapped to at least \(2^{\frac{n-k}{4}-1}\) times, and \(N(T;C^{\prime})\geq 2^{\frac{n-k}{4}-1}N(T_{i};C^{\prime})\).
Using the perfect elimination ordering, every sub-\(k\)-tree \(S\) of \(T_{i}\) not containing \(C^{\prime}\) can be extended in a minimal way into a sub-\(k\)-tree containing \(C^{\prime}\). Furthermore, by considering the \(1\)-characteristic tree \(T^{\prime}_{C^{\prime}}\) of \(C^{\prime}\), it is clear that there are no more than \(|T_{i}|-k\leq\left\lceil\frac{n-(k+1)}{2}\right\rceil\leq\frac{n}{2}\)\(k\)-trees that extend to the same tree. Here we have used Lemma 7. Thus, if we define a map from \(\overline{S}(T_{i};C^{\prime})\) to \(S(T_{i};C^{\prime})\) using the minimal extension, every element of \(S(T_{i};C^{\prime})\) is mapped to at most \(\frac{n}{2}\) times.
Putting the previous two observations together, we have that the number of sub-\(k\)-trees containing \(C^{\prime}\) is \(N(T;C^{\prime})\geq 2^{\frac{n-k}{4}-1}N(T_{i};C^{\prime})\geq\frac{1}{n}2^{ \frac{n-k}{4}}\overline{N}(T_{i};C^{\prime})\).
By summing over all \(i\), we obtain that
\[rN(T;C^{\prime})\geq\frac{1}{n}2^{\frac{n-k}{4}}\sum_{i=1}^{r}\overline{N}(T_{ i};C^{\prime})=\frac{1}{n}2^{\frac{n-k}{4}}\overline{N}(T;C^{\prime}).\]
Since \(r\leq n\), we have that \(N(T;C^{\prime})\geq\frac{1}{n^{2}}2^{\frac{n-k}{4}}\overline{N}(T;C^{\prime}).\) This implies that \(\frac{\mu(T)}{\mu(T;C^{\prime})}\geq 1-\frac{n^{2}}{2^{\frac{n-k}{4}}}.\) The result follows now from \(\mu(T;C^{\prime})\leq n\).
**Case 2:** For every \(u_{i}\) the union of components in \(T\setminus C\) that form a \(k\)-tree when adding \(C\setminus u_{i}\) has order smaller than \(\frac{n-k}{2}\).
Let \(T_{i}\), \(1\leq i\leq k+1\), be the \(k\)-trees obtained above when adding \(C\setminus u_{i}\). By the claim above, for each \(i\) there are at least \(\frac{n-k}{4}-1\)\(k\)-leaves not belonging to \(T_{i}\). Thus, the same computations apply and in particular we have that \(N(T;C^{\prime})\geq\frac{1}{n^{2}}2^{\frac{n-k}{4}}\overline{N}(T;C^{\prime})\) as before. Now for \(C^{\prime}=C\setminus u_{1}\), we conclude by inclusion monotonicity [13, Thm. 33] that \(\mu(T;C^{\prime})-\mu(T)\leq\mu(T;C)-\mu(T)\leq\frac{n^{3}}{2^{(n-k)/4}}\).
Proof of Theorem 2, lower bound.: Take a \(k\)-clique \(C^{\prime}\) which satisfies Lemma 8. Let \(T^{\prime}_{C^{\prime}}\) be the \(1\)-characteristic tree of \(T\) with respect to \(C^{\prime}.\) By [13, Thm. 33] and [7, Lem. 12], we conclude that \(\mu(T;C^{\prime})=\mu(T^{\prime}_{C^{\prime}};C^{\prime})+k-1\geq\frac{n+k}{2 }+\frac{i-1}{10},\) where \(i\) is the number of internal vertices in \(T^{\prime}_{C^{\prime}}.\) By Lemma 8, we conclude.
For sharpness, observe that if \(T\) is not a \(k\)-star, we have \(i\geq 2\) and the lower bound inequality is strict. When \(T\) is a \(k\)-star, it contains no \(k\)-cliques of degree \(2\) provided that \(n>k+2\). As computed in [16] (page 64), \(\mu(T)=\frac{R(T)}{N(T)}=\frac{(n+k)2^{n-k-1}}{2^{n-k}+(n-k)k}=\frac{n+k}{2}- o_{n}(1)\).
### The upper bound
In this subsection, we generalize to \(k\)-trees the statement that a series-reduced tree has average subtree order at most \(\frac{3n}{4}\) by giving a lower bound for the number of \(k\)-leaves and proving that \(k\)-leaves belong to at most half of the sub-\(k\)-trees. This idea was also used in [7]. Note that the upper bound is slightly larger than \(\frac{3n}{4}\) for larger \(k\), which intuitively can be explained by the fact that the smallest sub-\(k\)-tree already has \(k\) vertices, and more precisely the vertices in the base \(k\)-clique will all be major vertices.
Proof of Theorem 2, upper bound.: Our upper bound will come from the observation that \(\mu(T)=\sum_{v\in T}p(v)\) where \(p(v)\) is the fraction of sub-\(k\)-trees containing \(v\). We will specifically consider when \(v\) is a \(k\)-leaf and bound the corresponding terms in the summation.
We first prove that the \(1\)-characteristic tree \(T^{\prime}_{C}\) of a \(k\)-tree \(T\) without \(k\)-cliques of degree \(2\) is a series-reduced tree, for any \(k\)-clique \(C\) of \(T\). Indeed, given a \(k\)-clique \(C\), there is either exactly
one vertex adjacent to \(C\) or at least \(3\). As such, the degree of \(C\) in \(T_{C}^{\prime}\) is not \(2\). For any other vertex \(v\in T\), either it is a leaf in \(T_{C}^{\prime}\), or at least one other vertex has been added to a \(k\)-clique \(C^{\prime}\) containing \(v\) in \(T\). In the latter case, since \(T\) has no \(k\)-cliques of degree \(2\), there must be at least two vertices other than \(v\) joined to \(C^{\prime}\) and thus the degree of \(v\) in \(T_{C}^{\prime}\) is at least \(3\). Since \(T_{C}^{\prime}\) has at least \(\frac{n-k+1}{2}+1\) leaves, \(T\) contains at least this many \(k\)-leaves (here one must also observe that if \(C\) has degree \(1\) in \(T_{C}^{\prime}\), some vertex of \(C\) is simplicial in \(T\)).
Now fix a \(k\)-leaf \(v\). Since \(n\geq k+2\), there is a vertex \(u\) (different from \(v\)) such that \(N(v)\cup\{u\}\) spans a \(K_{k+1}\). Define a function \(f\) on sub-\(k\)-trees containing \(v\) such that \(f(C^{\prime})=(C^{\prime}\setminus\{v\})\cup\{u\}\) for a \(k\)-clique \(C^{\prime}\) and \(f(T^{\prime})=T^{\prime}\setminus v\) otherwise. We can check that \(f\) maps \(k\)-cliques to \(k\)-cliques and is in fact an injection from sub-\(k\)-trees containing \(v\) to sub-\(k\)-trees not containing \(v\). Indeed, because \(v\) is a \(k\)-leaf, \(C^{\prime}\cup\{u\}\) is not a \((k+1)\)-clique and thus not a sub-\(k\)-tree. Hence there does not exist a \((k+1)\)-clique \(C^{\prime\prime}\) for which \(C^{\prime\prime}\setminus\{v\}=(C^{\prime}\setminus\{v\})\cup\{u\}\).
This implies that every \(k\)-leaf belongs to at most half of the sub-\(k\)-trees in \(T\). Remembering that there are at least \(\frac{n-k+3}{2}\)\(k\)-leaves, the global mean order of \(T\) is then
\[\mu(T)=\sum_{v\text{ non-k-leaf}}p(v)+\sum_{v\text{ $k$-leaf}}p(v)\leq n- \frac{1}{2}\cdot\frac{n-k+3}{2}=\frac{3n+k-3}{4}\]
For sharpness, let \(n=2s+3-k\) for an integer \(s\). We construct \(T\) by first constructing a caterpillar \(T^{\prime}\) which consists of a path-type \(k\)-tree \(P_{s}^{k+1}\) on vertices \(v_{1},v_{2},\ldots,v_{s}\), for which \(k+1\) consecutive vertices form a clique, and adding a \(k\)-leaf connected to every \(k\) consecutive vertices. To obtain \(T\), we extend \(T^{\prime}\) by adding two \(k\)-leaves, which are connected to \(\{v_{1},\ldots,v_{k}\}\) and \(\{v_{s-k+1},\ldots,v_{s}\}\) respectively. Note that \(T\) has a "stem" of \(s\) vertices and the number of \(k\)-leaves is \(\ell+2=(s-k+1)+2=s-k+3\). For every \(1\leq i\leq s-k\), there are \(i\) sub-\(k\)-trees of the stem each of order \(s+1-i\). Each of these can be extended by adding any subset of the \(\ell+1-i\) neighboring \(k\)-leaves, or even one or two more if some of the end-vertices of the stem are involved.
Now we can compute that
\[N(T) =(k(n-k)+1-\ell)+2^{\ell+2}+2\sum_{i=2}^{\ell}2^{i}+\sum_{i=3}^{s -k+1}(i-2)\cdot 2^{\ell+1-i}\] \[\sim 9\cdot 2^{\ell}.\]
The first expression \(k(n-k)+1-\ell\) counts the number of simplicial \(k\)-cliques different from the ones consisting of \(k\) consecutive vertices in the stem. \(2^{\ell+2}\) is the number of sub-\(k\)-trees containing the whole stem. The third term counts the number of sub-\(k\)-trees containing \(v_{1}\) or \(v_{s}\) but not both and at least \(k\) vertices of the stem. The last summation counts the sub-\(k\)-trees containing at least \(k\) vertices of the stem and none of \(v_{1}\) and \(v_{s}\). We can also compute \(R(T)\) by summing the total size of the respective sub-\(k\)-trees.
\[R(T) =(k(n-k)+1-\ell)k+2^{\ell+2}\left(s+\frac{\ell+2}{2}\right)+2 \sum_{i=2}^{\ell}2^{i}\left(k+i-2+\frac{i}{2}\right)\] \[+\sum_{i=3}^{s-k+1}(i-2)\cdot 2^{\ell+1-i}\left(s+1-i+\frac{\ell+1-i }{2}\right)\] \[\sim 2^{\ell+2}\left(s+\frac{\ell+2}{2}\right)+2^{\ell+2}(k-2)+3 \cdot 2^{\ell+1}(\ell-1)+2^{\ell}\left(s+1+\frac{\ell+1}{2}\right)-15\cdot 2^{\ell-1}\] \[=\left(5s+\frac{17}{2}\ell+4k-16\right)2^{\ell}\]
\[=\left(\frac{27}{4}n+\frac{9}{4}k-\frac{111}{4}\right)2^{\ell}.\]
Finally, we conclude that \(\mu(T)=\frac{R(T)}{N(T)}\sim\frac{3}{4}n+\frac{1}{4}k-\frac{37}{12}\), which is only \(\frac{7}{3}\) away from the upper bound. These computations have also been verified in [https://github.com/StijnCambiE/AvSubOrder_ktree/blob/main/M_comb_ktree.mw](https://github.com/StijnCambiE/AvSubOrder_ktree/blob/main/M_comb_ktree.mw).
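For a quick numerical illustration, the stated sums can be evaluated directly; the sketch below (ours, in Python rather than the linked Maple worksheet) simply transcribes the expressions for \(N(T)\) and \(R(T)\) above and compares \(\mu(T)=R(T)/N(T)\) with the upper bound \(\frac{3n+k-3}{4}\) of Theorem 2.

```python
def caterpillar_mu(s, k):
    """Evaluate the closed-form sums for N(T) and R(T) of the construction above."""
    n = 2 * s + 3 - k
    ell = s - k + 1
    N = (k * (n - k) + 1 - ell) + 2 ** (ell + 2) \
        + 2 * sum(2 ** i for i in range(2, ell + 1)) \
        + sum((i - 2) * 2 ** (ell + 1 - i) for i in range(3, s - k + 2))
    R = (k * (n - k) + 1 - ell) * k \
        + 2 ** (ell + 2) * (s + (ell + 2) / 2) \
        + 2 * sum(2 ** i * (k + i - 2 + i / 2) for i in range(2, ell + 1)) \
        + sum((i - 2) * 2 ** (ell + 1 - i) * (s + 1 - i + (ell + 1 - i) / 2)
              for i in range(3, s - k + 2))
    return n, R / N, (3 * n + k - 3) / 4    # order, mu(T), upper bound

for s in (10, 20, 40):
    print(caterpillar_mu(s, k=3))
```

As \(s\) grows, the ratio \(R(T)/N(T)\) approaches \(\frac{3}{4}n+\frac{1}{4}k-\frac{37}{12}\), in line with the asymptotics computed above.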
## 4 The maximum local mean order
In this section, we prove
**Theorem 3**.: _Suppose that \(k\geq 2\). For a \(k\)-tree \(T\) of order \(n\neq k+2\), if a \(k\)-clique \(C\) maximizes \(\mu(T;C)\), then \(C\) must be a \(k\)-clique of degree \(1\). For \(n=k+2,\) every \(k\)-clique \(C\) satisfies \(\mu(T;C)=k+1.\)_
We consider the \(k\)-clique in a \(k\)-tree \(T\) for which the local mean order is greatest. Our aim is to show that its degree cannot be too large, specifically at most \(2\). To do so, we prove that if a \(k\)-clique \(C\) has degree at least \(2\), then there is a neighboring \(k\)-clique whose local mean order is not smaller, with strict inequality when \(C\) has degree at least \(3\). From this, it will follow that any \(k\)-clique attaining the maximum has degree at most \(2\) and there is always at least one \(k\)-clique with degree \(1\) in which the maximum is attained.
The proof requires some technical inequalities, which we prove first. We start with a generalization of [21, Lemma 2.1] to the \(k\)-tree case as follows.
**Lemma 10**.: _For a sub-\(k\)-tree \(T\) and a \(k\)-clique \(C\), we have_
\[R(T;C)\leq\frac{N(T;C)^{2}+(2k-1)N(T;C)}{2}.\]
_Equality holds if and only if \(T\) is a path-type \(k\)-tree and \(C\) is a simplicial \(k\)-clique._
Proof.: The statement is clearly true when \(T=C\), so assume \(|T|>k\). Observe that every sub-\(k\)-tree \(T^{\prime}\) of \(T\) containing \(C\) has a vertex \(v\) (not belonging to \(C\)) which is simplicial within
Figure 5: Sketch of \(k\)-trees with mean sub-\(k\)-tree order roughly \(\frac{n}{2}\) and \(\frac{3n}{4}\) for \(k=3\)
\(T^{\prime}\) (consider a perfect elimination order where \(C\) is taken as the base \(k\)-clique). This implies that \(T^{\prime}\backslash v\) is a sub-\(k\)-tree containing \(C\) as well.
If there exists a sub-\(k\)-tree containing \(C\) of order \(\ell>k\), then the above implies that there must also exist a sub-\(k\)-tree of order \(\ell-1\) containing \(C\). Thus, if we list the sub-\(k\)-trees in \(S(T;C)\) from smallest to largest order, we see that
\[R(T;C)\leq k+(k+1)+\cdots+(k+N(T;C)-1)=\frac{1}{2}(N(T;C)^{2}+(2k-1)N(T;C))\]
as desired.
The equality case is clear, since every sub-\(k\)-tree \(T^{\prime}\neq C\) of \(T\) must have exactly one \(k\)-leaf not belonging to \(C\), and equality is attained when \(T\) is a path-type \(k\)-tree.
Using the previous lemma, we can now bound the local mean order in terms of the number of sub-\(k\)-trees.
**Lemma 11**.: _For a \(k\)-tree \(T\) and one of its \(k\)-cliques \(C\), we have_
\[k+\frac{\log_{2}N(T;C)}{2}\leq\mu(T;C)\leq\frac{N(T;C)+(2k-1)}{2}.\]
_The minimum occurs if and only if \(T\) is a \(k\)-star and \(C\) its base \(k\)-clique. The maximum is attained exactly when \(T\) is a path-type \(k\)-tree and \(C\) is simplicial._
Proof.: The upper bound follows immediately from Lemma 10 by dividing both sides of the inequality by \(N(T;C)\). The equality cases are the same.
By [16, Thm. 12] we have \(\mu(T;C)\geq\frac{|T|+k}{2}.\) Since a sub-\(k\)-tree is determined by its vertices, we also have
\[N(T;C)\leq 2^{|T|-k}. \tag{1}\]
Combining these two inequalities, we get \(\mu(T;C)\geq\frac{\log_{2}N(T;C)}{2}+k\).
To attain the lower bound, equality must hold for (1). This is the case if and only if \(T\) is a \(k\)-star and \(C\) its base \(k\)-clique. Indeed, for a \(k\)-star of order \(n\) with base clique \(C\), we have \(N(T;C)=2^{n-k}\) and \(\mu(T;C)=\frac{n+k}{2}\). In the other direction, every vertex together with \(C\) needs to form a \(k\)-tree and thus a \((k+1)\)-clique, which is possible only for a \(k\)-star.
We will also make use of the following elementary inequality, which can be seen as a reverse form of the arithmetic mean-geometric mean (AM-GM) inequality.
**Lemma 12**.: _Let \(x_{1},x_{2},\ldots,x_{n}\in\mathbb{R}_{\geq 1}\), and let \(P=\prod_{i}x_{i}\). Then \(\sum_{i}x_{i}\leq P+(n-1).\) Furthermore, equality is attained if and only if \(x_{i}=P\) for some \(i\) and \(x_{j}=1\) for all \(j\neq i\)._
Proof.: This can be proven in multiple ways. The most elementary way is to observe that if \(x_{i},x_{j}>1\), then \(x_{i}x_{j}+1>x_{i}+x_{j}\) since \((x_{i}-1)(x_{j}-1)>0.\) Repeating this with pairs of elements which are strictly larger than \(1\) gives the result. Alternatively, one could consider the variables \(\alpha_{i}=\log(x_{i})\geq 0\). Since their sum is the fixed constant \(\log P\) and the exponential function is convex, as a corollary of Karamata's inequality \(\sum_{i}\exp(\alpha_{i})\) is maximized when all except one are equal to \(0\).
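For example, with \((x_{1},x_{2},x_{3})=(2,2,3)\) one gets \(\sum_{i}x_{i}=7\leq P+(n-1)=12+2=14\), while the rearranged tuple \((12,1,1)\) attains the bound with equality.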
We are now ready to prove Theorem 3. Recall that \(T\) can be decomposed into \(C\) and \(k\)-trees \(T_{1,1},\ldots,T_{d,k}\) rooted at \(C_{1,1},\ldots,C_{d,k}\) respectively that are pairwise disjoint except for the vertices of the cliques \(C_{i,j}\). Let \(B_{i}\) denote the \((k+1)\)-clique that contains \(C\) as well as \(C_{i,1},\ldots,C_{i,k}\), and let \(v_{i}\) be the vertex of \(B_{i}\setminus C\). Finally, set \(N_{i,j}=N(T_{i,j};C_{i,j})\) and \(\mu^{\bullet}_{i,j}=\mu^{\bullet}(T_{i,j};C_{i,j})\).
Proof of Theorem 3.: We first observe that
\[N(T;C)=\prod_{i=1}^{d}\left(1+\prod_{j=1}^{k}N_{i,j}\right). \tag{2}\]
This is because a sub-\(k\)-tree \(S\) of \(T\) containing \(C\) is specified as follows: given \(1\leq i\leq d\), choose the sub-\(k\)-tree intersection of \(S\) with \(T_{i,1},\ldots,T_{i,k}\). There are \(\prod_{j=1}^{k}N_{i,j}\) ways to do this if \(v_{i}\in S\) and one if \(v_{i}\notin S\). We do this independently for each \(i\), resulting in the product above.
Next, we would like to express the local mean order at \(C\) in terms of the quantities \(N_{i,j}\) and \(\mu^{\bullet}_{i,j}\): it is given by
\[\mu^{\bullet}(T;C)=\sum_{i=1}^{d}\frac{1+\sum_{j=1}^{k}\mu^{\bullet}_{i,j}}{1 +\prod_{j=1}^{k}N_{i,j}^{-1}}. \tag{3}\]
To see this, note that we can interpret \(\mu(T;C)\) as the expected size of a random sub-\(k\)-tree chosen from \(S(T;C)\), which can then be written as the sum of the expected sizes of the intersection with each component \(T_{i,j}\) in the decomposition. We have that \(\frac{1}{1+\prod_{j=1}^{k}N_{i,j}^{-1}}=\frac{\prod_{j=1}^{k}N_{i,j}}{1+\prod_{j=1}^{k}N_{i,j}}\) is the probability that a randomly chosen sub-\(k\)-tree of \(T\) that contains \(C\) also contains \(v_{i}\) (by the same reasoning that gave us (2)). Once \(v_{i}\) is included, it adds \(1\) to the number of vertices, and an average total of \(\sum_{j=1}^{k}\mu^{\bullet}_{i,j}\) is added from the extensions in \(T_{i,1},\ldots,T_{i,k}\).
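The decomposition formulas (2) and (3) are easy to evaluate numerically. The following short Python sketch (a minimal illustration, assuming the branch quantities \(N_{i,j}\) and \(\mu^{\bullet}_{i,j}\) are given as \(d\times k\) arrays) computes \(N(T;C)\) and \(\mu^{\bullet}(T;C)\):

```python
import numpy as np

def clique_decomposition(N, mu_star):
    """Evaluate formulas (2) and (3). N and mu_star are (d, k) arrays holding
    N_{i,j} and mu^bullet_{i,j} for the branches T_{i,j} rooted at C_{i,j}."""
    N = np.asarray(N, dtype=float)
    mu_star = np.asarray(mu_star, dtype=float)
    branch_products = N.prod(axis=1)                  # prod_j N_{i,j}
    n_of_c = float(np.prod(1.0 + branch_products))    # formula (2)
    p_vi = branch_products / (1.0 + branch_products)  # P(v_i lies in a random sub-k-tree containing C)
    mu_bullet = float(np.sum(p_vi * (1.0 + mu_star.sum(axis=1))))  # formula (3)
    return n_of_c, mu_bullet

# A 2-clique of degree 2 in a 2-tree on four vertices (all branches trivial):
print(clique_decomposition([[1, 1], [1, 1]], [[0, 0], [0, 0]]))  # (4.0, 1.0)
```

For instance, a \(2\)-clique of degree \(2\) in a \(2\)-tree on four vertices, where all branches are trivial, gives \(N(T;C)=4\) and \(\mu^{\bullet}(T;C)=1\), in agreement with a direct count.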
Without loss of generality, we can assume that \(N_{1,1}=\min_{i,j}N_{i,j}\). We want to compare \(\mu^{\bullet}(T;C)\) to \(\mu^{\bullet}(T;C_{1,1})\) and prove that \(\mu^{\bullet}(T;C_{1,1})\geq\mu^{\bullet}(T;C)\) provided that \(d\geq 2\), with strict inequality if \(d>2\). Observe that this will be enough to prove our claim: no clique with degree greater than \(2\) can attain the maximum local mean order, and starting from any clique, we may repeatedly apply the inequality above to obtain a sequence of neighboring cliques whose local mean orders are weakly increasing and the last of which is a degree-\(1\)\(k\)-clique (more precisely, for the first step replacing \(C\) with \(C^{\prime}:=C_{1,1}\) and considering the decomposition \(\{C^{\prime}_{i,j}\}\) and \(\{T^{\prime}_{i,j}\}\) with respect to \(C^{\prime}\), the clique adjacent to \(C^{\prime}\) with equal or larger \(\mu^{\bullet}\) cannot be the original clique \(C\) as \(C\) will not correspond to \(\min_{i,j}N^{\prime}_{i,j}\). So in repeatedly applying the inequality, we will obtain a sequence of distinct cliques which must terminate but can only terminate once we have reached a clique of degree \(1\).)
Let us first express \(\mu^{\bullet}(T;C_{1,1})\) in terms of the \(N_{i,j}\) and \(\mu^{\bullet}_{i,j}\) as well. First, we have
\[N(T;C_{1,1})=N_{1,1}+\prod_{j=1}^{k}N_{1,j}\prod_{i=2}^{d}\left(1+\prod_{j=1} ^{k}N_{i,j}\right).\]
The reasoning is similar to (2): there are \(N_{1,1}\) sub-\(k\)-trees that contain \(C_{1,1}\), but not the full \((k+1)\)-clique \(B_{1}\), and the remaining product counts sub-\(k\)-trees containing \(B_{1}\). We also have
\[\mu^{\bullet}(T;C_{1,1})=\frac{N_{1,1}\mu^{\bullet}_{1,1}}{N(T;C_{1,1})}+ \left(1-\frac{N_{1,1}}{N(T;C_{1,1})}\right)\left(1+\sum_{j=1}^{k}\mu^{\bullet }_{1,j}+\sum_{i=2}^{d}\frac{1+\sum_{j=1}^{k}\mu^{\bullet}_{i,j}}{1+\prod_{j=1 }^{k}N_{i,j}^{-1}}\right),\]
using the fact that \(\frac{N_{1,1}}{N(T;C_{1,1})}\) is the probability that a random sub-\(k\)-tree containing \(C_{1,1}\) does _not_ contain \(B_{1}\). Let us now take the difference \(\mu^{\bullet}(T;C_{1,1})-\mu^{\bullet}(T;C)\): we have, after some manipulations,
\[\mu^{\bullet}(T;C_{1,1})-\mu^{\bullet}(T;C)=\frac{N_{1,1}}{N(T;C_{1,1})}\left(\mu^{\bullet}_{1,1}-\mu^{\bullet}(T;C)\right)+\left(1-\frac{N_{1,1}}{N(T;C_{1,1})}\right)\left(1+\sum_{j=1}^{k}\mu^{\bullet}_{1,j}-\frac{1+\sum_{j=1}^{k}\mu^{\bullet}_{1,j}}{1+\prod_{j=1}^{k}N_{1,j}^{-1}}\right)\]
\[=\frac{N_{1,1}}{N(T;C_{1,1})}\left(\mu_{1,1}^{\bullet}-\mu^{\bullet}(T;C)\right)+\left(1-\frac{N_{1,1}}{N(T;C_{1,1})}\right)\left(\frac{1+\sum_{j=1}^{k}\mu_{1,j}^{\bullet}}{1+\prod_{j=1}^{k}N_{1,j}}\right).\]
Now let \(F=F(k):=\frac{1+\sum_{j=1}^{k}\mu_{1,j}^{\bullet}}{1+\prod_{j=1}^{k}N_{1,j}}\). We want to show that \(\mu^{\bullet}(T;C_{1,1})-\mu^{\bullet}(T;C)\geq 0\), i.e., that
\[\frac{N_{1,1}}{N(T;C_{1,1})}\mu_{1,1}^{\bullet}+F\geq\frac{N_{1,1}}{N(T;C_{1,1 })}\left(\mu^{\bullet}(T;C)+F\right).\]
Equivalently,
\[\mu_{1,1}^{\bullet}+\frac{N(T;C_{1,1})}{N_{1,1}}\cdot F\geq\mu^{\bullet}(T;C)+F. \tag{4}\]
Using the previous computations, we know that
\[\frac{N(T;C_{1,1})}{N_{1,1}}=1+\prod_{j=2}^{k}N_{1,j}\prod_{i=2}^{d}\left(1+ \prod_{j=1}^{k}N_{i,j}\right).\]
So the left-hand side of (4) is equal to
\[\mu_{1,1}^{\bullet}+F+\frac{\prod_{j=2}^{k}N_{1,j}}{1+\prod_{j=1}^{k}N_{1,j}} \left(1+\sum_{j=1}^{k}\mu_{1,j}^{\bullet}\right)\prod_{i=2}^{d}\left(1+\prod_{ j=1}^{k}N_{i,j}\right).\]
We can subtract \(F\) from both sides and use (3) to replace \(\mu^{\bullet}(T;C)\). Taking into account that \(\mu_{1,1}^{\bullet}\geq 0\), it is sufficient to prove
\[\frac{\prod_{j=2}^{k}N_{1,j}}{1+\prod_{j=1}^{k}N_{1,j}}\left(1+\sum_{j=1}^{k} \mu_{1,j}^{\bullet}\right)\prod_{i=2}^{d}\left(1+\prod_{j=1}^{k}N_{i,j}\right) \geq\sum_{i=1}^{d}\frac{1+\sum_{j=1}^{k}\mu_{i,j}^{\bullet}}{1+\prod_{j=1}^{k }N_{i,j}^{-1}}. \tag{5}\]
We first prove (5) for \(d=2\), in which case it can be rewritten as
\[\frac{\prod_{j=1}^{k}N_{1,j}}{N_{1,1}(1+\prod_{j=1}^{k}N_{1,j})}\left(1+\sum_ {j=1}^{k}\mu_{1,j}^{\bullet}\right)\left(1-N_{1,1}+\prod_{j=1}^{k}N_{2,j} \right)\geq\frac{\prod_{j=1}^{k}N_{2,j}}{1+\prod_{j=1}^{k}N_{2,j}}\left(1+\sum _{j=1}^{k}\mu_{2,j}^{\bullet}\right).\]
Write \(\prod_{j=1}^{k}N_{1,j}=N_{1,1}^{k-1}y\) and \(\prod_{j=1}^{k}N_{2,j}=N_{1,1}^{k-1}z\) where \(y,z\geq N_{1,1}.\) By Lemma 11, we have \(\mu_{1,j}^{\bullet}\geq\frac{1}{2}\log_{2}N_{1,j}\) and \(\mu_{2,j}^{\bullet}\leq\frac{1}{2}(N_{2,j}-1)\). Applying Lemma 12 to the numbers \(\frac{N_{2,j}}{N_{1,1}}\), \(1\leq j\leq k\), gives us that \(\sum_{j=1}^{k}N_{2,j}\leq(k-1)N_{1,1}+z\). Hence we find that it suffices to prove that
\[\frac{y\left(1-N_{1,1}+N_{1,1}^{k-1}z\right)}{N_{1,1}(1+N_{1,1}^{k-1}y)}\left( 1+\frac{(k-1)\log_{2}N_{1,1}+\log_{2}y}{2}\right)\geq\frac{z}{1+N_{1,1}^{k-1} z}\left(1+\frac{(k-1)N_{1,1}+z-k}{2}\right).\]
We note that the left-hand side is strictly increasing in \(y\) and the right-hand side is independent of \(y\), which implies that it is sufficient to prove the inequality when \(y=N_{1,1}.\) That is, we want to prove that for every \(z\geq y\geq 1\)
\[\frac{1-y+y^{k-1}z}{1+y^{k}}\left(1+\frac{k}{2}\log_{2}y\right)\geq\frac{z}{1 +y^{k-1}z}\left(1+\frac{(k-1)y+z-k}{2}\right). \tag{6}\]
If \(y=N_{1,1}=1\), (6) reduces to the equality \(\frac{z}{2}=\frac{z}{2}.\)
If \(y=N_{1,1}\geq 2\), \(z\geq y\) and \(k\geq 2\) imply that \(2+(k-1)y+z-k\leq kz\). Together with \(\log_{2}y\geq 1\), we conclude that it is sufficient to prove that
\[\frac{1-y+y^{k-1}z}{1+y^{k}}\frac{k+2}{2}\geq\frac{z}{1+y^{k-1}z}\frac{kz}{2}\]
which is equivalent to
\[(1-y+y^{k-1}z)(k+2)(1+y^{k-1}z)\geq(1+y^{k})kz^{2}.\]
For \(k=2\), the difference between the two sides is an increasing function in \(y\), and for \(y=2\) it reduces to \(3z^{2}-2\geq 0\), which holds.
For \(k\geq 3\), the inequality is immediate, using that \(y^{2(k-1)}z^{2}\geq(1.5y^{k}+1)z^{2}\geq y^{k}z+y^{k}z^{2}+z^{2}\) (remember that \(z\geq y\geq 2\)).
Once (5) has been verified for \(d=2\), we can apply induction to prove it for \(d\geq 3\). Let \(C=\frac{\prod_{j=2}^{k}N_{1,j}}{1+\prod_{j=1}^{k}N_{1,j}}\left(1+\sum_{j=1}^{k}\mu_{1,j}^{\bullet}\right)\), \(g_{i}=1+\prod_{j=1}^{k}N_{i,j}\) and \(f_{i}=\frac{1+\sum_{j=1}^{k}\mu_{i,j}^{\bullet}}{1+\prod_{j=1}^{k}N_{i,j}^{-1}}.\) We can then rewrite (5) as
\[C\prod_{i=2}^{d}g_{i}\geq\sum_{i=1}^{d}f_{i}\]
By the induction hypothesis, we have
\[\frac{C}{g_{m}}\prod_{i=2}^{d}g_{i}\geq\left(\sum_{i=1}^{d}f_{i}\right)-f_{m}\]
for every \(2\leq m\leq d.\) Summing over \(m\), we obtain that
\[C\prod_{i=2}^{d}g_{i}\left(\sum_{m=2}^{d}\frac{1}{g_{m}}\right)\geq f_{1}+(d-2 )\sum_{i=1}^{d}f_{i}.\]
Since we have \(g_{m}\geq 2\) for every \(m\), the conclusion now follows as
\[\sum_{m=2}^{d}\frac{1}{g_{m}}\leq\frac{(d-1)}{2}\leq d-2,\]
and since \(f_{1}>0\), inequality (5) is even strict in the case that \(d>2\).
We conclude that if \(C\) has degree \(d\geq 2\), then \(\mu^{\bullet}(T;C)\leq\mu^{\bullet}(T;C_{1,1}).\) Equality can only be attained when \(d=2\). Looking back over the proof of (5), we also see that for equality to hold, all \(N_{i,j}\) except for \(N_{2,2}\) (up to renaming) must be equal to \(1\), and due to Lemma 11, \(T_{2,2}\) has to be a path-type \(k\)-tree.
For \(n\geq k+3\), we compute that in a path-type \(k\)-tree \(T\) the maximum among the degree-\(1\) \(k\)-cliques is attained by a central one, which implies that no degree-\(2\) \(k\)-cliques can be extremal. Let \(B_{1}\) be the unique \((k+1)\)-clique containing \(C\), and assume that \(T\setminus B_{1}\) contains components of size \(a\) and \(b.\) Thus \(a+b=n-(k+1).\) Then \(\mu(T;C)=\frac{k+(a+1)(b+1)\frac{n+k}{2}}{(a+1)(b+1)+1}=\frac{n+k}{2}-\frac{n-k}{2((a+1)(b+1)+1)},\) and this is maximized if and only if \(|a-b|\leq 1.\) The latter can also be derived from considering the \(1\)-characteristic tree. This is illustrated in Figure 6 below. We emphasize that the maximizing \(k\)-clique of degree \(1\) is not simplicial.
When \(n\leq k+2\), every \(k\)-clique has the same local mean sub-\(k\)-tree order, and so the (unique) degree-\(2\) \(k\)-clique when \(n=k+2\) is the only case where equality can occur at a \(k\)-clique with degree \(2\). This concludes the proof of Theorem 3.
## 5 Bounding local mean order by global mean order
In this section, we prove
**Theorem 4**.: _The local mean order of the sub-\(k\)-trees containing a fixed \(k\)-clique \(C\) is less than twice the global mean order of all sub-\(k\)-trees of \(T\)._
As before, given a \(k\)-tree \(T\) and a \(k\)-clique \(C\) in \(T\), we utilize the decomposition of \(T\) into \(C\) and \(k\)-trees \(T_{1,1},\ldots,T_{d,k}\) rooted at \(C_{1,1},\ldots,C_{d,k}\). Set \(N_{i,j}=N(T_{i,j};C_{i,j}),\overline{N}_{i,j}=\overline{N}(T_{i,j};C_{i,j})\), \(\mu_{i,j}=\mu(T_{i,j};C_{i,j})\), \(\overline{\mu}_{i,j}=\overline{\mu}(T_{i,j};C_{i,j})\), \(\mu_{i,j}^{\bullet}=\mu^{\bullet}(T_{i,j};C_{i,j})\) and \(R_{i,j}=R(T_{i,j};C_{i,j})\), \(\overline{R}_{i,j}=\overline{R}(T_{i,j};C_{i,j})\). Since a sub-\(k\)-tree not containing \(C\) needs to be a sub-\(k\)-tree of some \(k\)-tree \(T_{i,j}\), we have
\[\overline{N}(T;C)=\sum_{i=1}^{d}\sum_{j=1}^{k}(N_{i,j}+\overline{N}_{i,j})\]
and
\[\overline{R}(T;C)=\sum_{i=1}^{d}\sum_{j=1}^{k}(R_{i,j}+\overline{R}_{i,j}).\]
Following the proof for trees, we show
**Lemma 13**.: _For any \(k\)-tree \(T\) and \(k\)-clique \(C\in T\),_
\[R(T;C)>\overline{N}(T;C).\]
Proof.: Assume to the contrary that there exists a minimum counterexample \(T\). Since the statement is true when \(T=C,\) we have \(|T|>k\) and we can consider the decomposition as before.
Note that if \(N_{i,j}=1\), we have that \(\overline{N}_{i,j}=0\), and otherwise we have \(\overline{N}_{i,j}\leq R_{i,j}=\mu_{i,j}N_{i,j}=(k+\mu_{i,j}^{\bullet})N_{i,j}\).
We can rewrite \(R(T;C)-\overline{N}(T;C)\) as
\[N(T;C)\left(k+\mu^{\bullet}(T;C)\right)-\overline{N}(T;C).\]
Expanding using (2) and (3), we note that the coefficient of \(\mu_{i,j}^{\bullet}\) is at least equal to \(N_{i,j}\). As such, it is sufficient to prove the inequality with \(\mu_{i,j}^{\bullet}=0\) and \(\overline{N}_{i,j}\) bounded by \(kN_{i,j}\).
Let \(f\) be a function on the positive integers defined by \(f(x)=\begin{cases}1&\text{ if }x=1,\\ 1+kx&\text{ if }x>1.\end{cases}\)
We now want to prove that
\[\left(k+\sum_{i=1}^{d}\frac{1}{1+\prod_{j=1}^{k}N_{i,j}^{-1}}\right)N(T;C)\geq \sum_{i=1}^{d}\sum_{j=1}^{k}f(N_{i,j}). \tag{7}\]
Figure 6: \(2\)-path for which the maximum local mean is attained in a non-simplicial clique \(C\), with its \(1\)-characteristic tree \(T_{C}^{\prime}\)
When \(d=1\), this becomes \(k+(k+1)\prod_{j=1}^{k}N_{1,j}\geq\sum_{j=1}^{k}f(N_{1,j}).\) When increasing a value \(N_{1,j}\) which is at least equal to \(2\), the left-hand side increases more than the right-hand side. As such, it is sufficient to consider the case where \(a\) of the terms \(N_{1,j}\) equal \(2\), while the other \(k-a\) terms equal \(1\). In this case, the desired inequality holds since \(k+(k+1)2^{a}>k+2ak\) for every integer \(0\leq a\leq k\).
Next, we consider the case \(d\geq 2.\) In this case, when \(N_{i,j}\) increases by \(1\), the left-hand side of (7) increases by at least \(2k\) and the right-hand side by at most \(2k.\) When all \(N_{i,j}\) are equal to \(1\), the conclusion follows from \(\left(k+\frac{d}{2}\right)2^{d}>k2^{d}>dk.\)
We now bound the local mean order by the global mean order.
Proof of Theorem 4.: Let \(T\) be a \(k\)-tree and \(C\) a \(k\)-clique in \(T\). We want to prove that
\[\mu(T;C)<2\mu(T).\]
We proceed by induction on the number of vertices in \(T\). Note first that the inequality is trivial if \(|T|\leq 2k\): since the mean is taken over sub-\(k\)-trees, which have at least \(k\) vertices each, we have \(\mu(T)\geq k\). On the other hand, we clearly have \(\mu(T;C)\leq|T|\), and both inequalities hold with equality only if \(|T|=k\).
We thus proceed to the induction step, and assume that \(|T|>2k\). We have two cases with respect to \(C\).
**Case 1:** \(C\) is simplicial.
Let \(v\) be a \(k\)-leaf in \(C\), let \(C^{\prime}\) denote the clique adjacent to \(C\), and let \(T^{\prime}=T-\{v\}\). Moreover, let \(N,\overline{N},R,\) and \(\overline{R}\) denote \(N(T^{\prime};C^{\prime}),\overline{N}(T^{\prime};C^{\prime}),R(T^{\prime};C^{\prime})\) and \(\overline{R}(T^{\prime};C^{\prime})\), respectively. We have
\[\mu(T;C)=\frac{N+R+k}{N+1}\]
and
\[\mu(T)=\frac{2R+\overline{R}+N+k^{2}}{2N+\overline{N}+k}.\]
We want to prove that
\[(2N+\overline{N}+k)(2\mu(T)-\mu(T;C))>0,\]
which is equivalent to
\[2R+2\overline{R}+2k^{2}-2k-\frac{(N+R+k)(\overline{N}+k-2)}{N+1}>0.\]
By the induction hypothesis, we have
\[2\frac{R+\overline{R}}{N+\overline{N}}=2\mu(T^{\prime})>\mu(T^{\prime};C^{ \prime})=\frac{R}{N},\]
so it is sufficient to prove that
\[\frac{R(N+\overline{N})}{N}+2k^{2}-2k-\frac{(N+R+k)(\overline{N}+k-2)}{N+1}>0.\]
Multiplying by \(\frac{N+1}{N}\), this is seen to be equivalent to
\[R-\overline{N}+2k^{2}-3k+2-\frac{(k-3)R+k\overline{N}-k^{2}}{N}+\frac{ \overline{N}R}{N^{2}}>0.\]
This can be broken up into three terms as follows:
\[\frac{(N^{2}-kN+3N+\overline{N})(R-\overline{N})}{N^{2}}+\left(\frac{\overline{N} }{N}-k\right)^{2}+\left((k-1)(k-2)+\frac{k^{2}}{N}+\frac{3\overline{N}}{N} \right)>0.\]
Note here that the second term is trivially nonnegative, and the last term trivially positive. Since \(|T^{\prime}|\geq 2k\), we have \(N>k\) (one gets at least \(k+1\) sub-\(k\)-trees containing \(C^{\prime}\) by successively adding vertices); hence \(N^{2}-kN+3N+\overline{N}>0\). Thus the first term is positive by Lemma 13, completing the induction step in this case.
**Case 2:** \(C\) is not simplicial.
By Theorem 3, we only need to consider the case where \(C\) has degree \(1\). Let \(v\) be the unique common neighbor of \(C\), and let \(C_{i}\), \(1\leq i\leq k\), be the other \(k\)-cliques in the \((k+1)\)-clique spanned by \(C\cup\{v\}\).
Let \(T_{i}\), \(1\leq i\leq k\), be the sub-\(k\)-trees rooted at \(C_{i}\) (pairwise disjoint except for the vertices of the cliques \(C_{i}\)). Let \(N_{i}=N(T_{i};C_{i})\), \(\overline{N}_{i}=\overline{N}(T_{i};C_{i})\), \(R_{i}=R(T_{i};C_{i})\), \(\overline{R}_{i}=\overline{R}(T_{i};C_{i})\), \(\mu_{i}=\mu(T_{i};C_{i})\) and \(\mu_{i}^{\bullet}=\mu^{\bullet}(T_{i};C_{i})\). We can assume without loss of generality that \(N_{1}\geq N_{2}\geq\dots\geq N_{j}>1=N_{j+1}=\dots=N_{k}\), where \(j\geq 2\) since \(C\) is not simplicial.
We can now express the local and global mean in a similar way to Case 1. Here \(N(T;C)=\prod_{i=1}^{k}N_{i}+1\), and all the sub-\(k\)-trees counted here, except for \(C\), contain \(v\). We have
\[\mu(T;C)=\frac{\prod_{i=1}^{k}N_{i}(1+\sum_{i=1}^{k}\mu_{i}^{\bullet})}{\prod _{i=1}^{k}N_{i}+1}+k,\]
\[\mu(T)=\frac{\prod_{i=1}^{k}N_{i}(1+\sum_{i=1}^{k}\mu_{i}^{\bullet}+k)+\sum_{i =1}^{k}(R_{i}+\overline{R}_{i})+k}{\prod_{i=1}^{k}N_{i}+\sum_{i=1}^{k}(N_{i}+ \overline{N}_{i})+1}.\]
In the remainder of this section, we will omit the bounds in products and sums if they are over the entire range from \(1\) to \(k\): \(\sum N_{i}\) and \(\prod N_{i}\) mean \(\sum_{i=1}^{k}N_{i}\) and \(\prod_{i=1}^{k}N_{i}\) respectively.
Then \(\left(\prod N_{i}+\sum(N_{i}+\overline{N}_{i})+1\right)(2\mu(T)-\mu(T;C))\) equals
\[(\prod N_{i})(1+\sum\mu_{i}^{\bullet}+k)+k+2\sum(R_{i}+\overline{R}_{i})-k \sum(N_{i}+\overline{N}_{i})-\frac{(\prod N_{i})(1+\sum\mu_{i}^{\bullet})\sum (N_{i}+\overline{N}_{i})}{\prod N_{i}+1}. \tag{8}\]
We want to show that this expression is positive. By induction, we know that \(2(R_{i}+\overline{R}_{i})>(k+\mu_{i}^{\bullet})(N_{i}+\overline{N}_{i})\), thus \(2(R_{i}+\overline{R}_{i})-k(N_{i}+\overline{N}_{i})>\mu_{i}^{\bullet}(N_{i}+ \overline{N}_{i})\). It follows that (8) is greater than
\[(\prod N_{i})(1+\sum\mu_{i}^{\bullet}+k)+k+\sum\mu_{i}^{\bullet}(N_{i}+ \overline{N}_{i})-\frac{(\prod N_{i})(1+\sum\mu_{i}^{\bullet})\sum(N_{i}+ \overline{N}_{i})}{\prod N_{i}+1}.\]
Multiplying by \(\frac{\prod N_{i}+1}{\prod N_{i}}\) and observing that this factor is greater than \(1\), we find that (8) is indeed positive if we can prove that
\[(\prod N_{i}+1)(1+\sum\mu_{i}^{\bullet}+k)+k+\sum\mu_{i}^{\bullet}(N_{i}+ \overline{N}_{i})\geq(1+\sum\mu_{i}^{\bullet})\sum(N_{i}+\overline{N}_{i}). \tag{9}\]
In particular, a potential counterexample would have to satisfy
\[\frac{(1+\sum\mu_{i}^{\bullet})\sum(N_{i}+\overline{N}_{i})}{\prod N_{i}+1}>1+ \sum\mu_{i}^{\bullet}+k. \tag{10}\]
To simplify proving Eq. (9), we first note that it is sufficient to consider the case where \(k=j\). Once \(N_{i},\mu_{i}^{\bullet}\) and \(\overline{N}_{i}\) are fixed for \(1\leq i\leq j\), the terms that are dependent on \(k\) are \((\prod_{i\leq j}N_{i}+1)k+k\) on the left, and \((1+\sum_{i\leq j}\mu_{i}^{\bullet})(k-j)\) on the right. The latter holds since, if \(N_{i}=1\), then \(T_{i}\) only consists of \(C_{i}\), and thus \(\overline{N}_{i}=\mu_{i}^{\bullet}=0\). Now since \(\prod_{i\leq j}N_{i}+1\geq 1+\sum_{i\leq j}N_{i}>1+\sum_{i\leq j}\mu_{i}^{\bullet}\), the increase of the left side is larger than the increase on the right side. Here \(\prod_{i\leq j}N_{i}\geq\sum_{i\leq j}N_{i}\) is true since the product of \(j\geq 2\) numbers, each greater than or equal to \(2\), is at least equal to the sum of the same \(j\) numbers. Moreover, \(N_{i}>\mu_{i}^{\bullet}\) follows from Lemma 11.
So from now on, we can assume that \(j=k\) and all \(N_{i}\) are at least equal to \(2\).
By Lemma 13, \(\overline{N}_{i}\leq R_{i}=\mu_{i}N_{i}=(k+\mu_{i}^{\bullet})N_{i}\). Since (9) is a linear inequality in each \(\overline{N}_{i}\) and the coefficient on the right-hand side is always greater than the coefficient on the left-hand side, we can reduce Eq. (9) to a sufficient inequality that is only dependent on \(k\), \(N_{i}\) and \(\mu_{i}^{\bullet}\) for \(1\leq i\leq k\) by taking \(\overline{N}_{i}=(k+\mu_{i}^{\bullet})N_{i}\). This will be assumed in the following.
If now all the parameters in (9) are fixed except for one \(\mu_{i}^{\bullet}\), we have a linear inequality in \(\mu_{i}^{\bullet}\): the quadratic terms stemming from \(\mu_{i}^{\bullet}\overline{N}_{i}\) are equal on both sides and cancel. As such, it is sufficient to prove the inequality for the extremal values of \(\mu_{i}^{\bullet}\). Here we use the trivial inequality \(\mu_{i}^{\bullet}\geq 0\) as well as the upper bound \(\mu_{i}^{\bullet}\leq\frac{N_{i}-1}{2}\), which is taken from Lemma 11. So if (9) can be proven in the case where \(\overline{N}_{i}=(k+\mu_{i}^{\bullet})N_{i}\) and \(\mu_{i}^{\bullet}\in\{0,\frac{N_{i}-1}{2}\}\) for all \(i\) with \(1\leq i\leq k\), we are done.
For \(2\leq k\leq 5\), this is achieved by exhaustively checking all \(2^{k}\) cases that result (using symmetry, there are actually only \(k+1\) cases to consider). See the detailed verifications in [https://github.com/StijnCambie/AvSubOrder_ktree](https://github.com/StijnCambie/AvSubOrder_ktree). So for the rest of the proof, we assume that \(k\geq 6\), and we will use the slightly weaker bound \(\mu_{i}^{\bullet}\leq\frac{N_{i}}{2}\) instead of \(\mu_{i}^{\bullet}\leq\frac{N_{i}-1}{2}\) for \(1\leq i\leq k\) in a few cases.
We distinguish two further cases, depending on the value of \(\mu_{1}^{\bullet}\). In these cases, we will use the following two inequalities.
**Claim 14**.: _Let \(k\geq 6\) and \(N_{1}\geq N_{2}\geq\ldots\geq N_{k}\geq 2\). Then_
\[\prod_{i=2}^{k}N_{i} \geq 3\sum_{2\leq i\leq k}N_{i}, \tag{11}\] \[\frac{5}{3}(\prod_{i=1}^{k}N_{i}+1) \geq\left(\sum_{2\leq i\leq k}N_{i}-2\right)\sum_{1\leq i\leq k}N _{i}. \tag{12}\]
Proof.: The first inequality, Eq. (11), is true if all the \(N_{i}\) are equal to \(2\), since \(2^{k-1}>6(k-1)\) for every \(k\geq 6\). Increasing some \(N_{i}\) by \(1\) increases the product by at least \(2^{k-2}\), while the sum increases by only \(3\). So the inequality holds by a straightforward inductive argument.
Next, we prove Eq. (12). When all \(N_{i}\) are equal to \(2\), it becomes \(\frac{5}{3}(2^{k}+1)\geq 4(k-2)k\). This is easily checked for \(k\in\{6,7\}\), and for \(k\geq 8\), the stronger inequality \(2^{k}\geq 4k^{2}\) can be shown by induction.
Now observe that the difference between the left- and right-hand sides is increasing with respect to \(N_{1}\), since \(\prod_{i\geq 2}N_{i}>\sum_{i\geq 2}N_{i}\) by the first inequality. It is also increasing in the other \(N_{i}\)'s; for example, we can see this is true for \(N_{2}\) since
\[\frac{5}{3}\prod_{i\neq 2}N_{i}\geq 5\sum_{i\neq 2}N_{i}>2\sum_{1\leq i\leq k}N _{i}>\left(\sum_{2\leq i\leq k}N_{i}-2\right)+\sum_{1\leq i\leq k}N_{i}.\]
In the first step, we have applied Eq. (11) but replacing \(N_{1}\) with \(N_{2}\). Again, we may conclude using induction.
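The two inequalities are also easy to spot-check numerically. The following Python sketch (a coarse check over a small grid, offered only as a complement to the inductive proof above) verifies (11) and (12) for \(k=6\) and \(N_{i}\in\{2,\ldots,7\}\):

```python
from itertools import combinations_with_replacement
from math import prod

k, ok = 6, True
for Ns in combinations_with_replacement(range(2, 8), k):
    Ns = sorted(Ns, reverse=True)          # N_1 >= N_2 >= ... >= N_k >= 2
    prod_all, prod_tail = prod(Ns), prod(Ns[1:])
    s_all, s_tail = sum(Ns), sum(Ns[1:])
    ok &= prod_tail >= 3 * s_tail                          # Eq. (11)
    ok &= 5 * (prod_all + 1) >= 3 * (s_tail - 2) * s_all   # Eq. (12)
print(ok)  # expected: True
```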
**Claim 15**.: _Given \(\mu_{1}^{\bullet}=\frac{N_{1}}{2}\), it is sufficient to consider the case where \(\mu_{i}^{\bullet}=\frac{N_{i}}{2}\) for all \(1\leq i\leq k\)._
Proof.: Starting from any counterexample to (9), we can iteratively change \(\mu_{i}^{\bullet}\) (considered as variables) for \(1\leq i\leq k\) based on the worst case of the linearization (to 0 if the coefficient on the left-hand side is greater and to \(\frac{N_{i}}{2}\) if the coefficient on the right-hand side is greater) to obtain further counterexamples.
To show that it suffices to consider \(\mu_{i}^{\bullet}=\frac{N_{i}}{2}\) for every \(1\leq i\leq k\), we prove that, given \(\mu_{1}^{\bullet}=\frac{N_{1}}{2}\), the coefficient of \(\mu_{2}^{\bullet}\) on the left-hand side in Eq. (9) is not greater than the coefficient on the right-hand side. This then implies that \(\mu_{2}^{\bullet}=\frac{N_{2}}{2}\) is indeed the worst case. For \(3\leq i\leq k\), we can argue in the same fashion.
Assume for sake of contradiction that the coefficient on the left-hand side is greater. Recall that \(\overline{N}_{2}=(k+\mu_{2}^{\bullet})N_{2}.\) After subtracting \(\mu_{2}^{\bullet}(N_{2}+\overline{N}_{2})\) from both sides, the coefficient of \(\mu_{2}^{\bullet}\) on the left is \(\prod N_{i}+1\), while on the right it is \(\sum_{i\neq 2}(N_{i}+\overline{N}_{i})+(1+\sum_{i\neq 2}\mu_{i}^{\bullet})N_{2}\). Thus we must have
\[\prod N_{i}+1>\sum_{i\neq 2}(N_{i}+\overline{N}_{i})+(1+\sum_{i\neq 2}\mu_{i}^{ \bullet})N_{2}\geq\sum_{i\neq 2}(N_{i}+\overline{N}_{i})+(1+\mu_{1}^{ \bullet})N_{2}.\]
Using that
\[(1+\mu_{1}^{\bullet})N_{2}=\left(1+\frac{N_{1}}{2}\right)N_{2}\geq\left(1+ \frac{N_{2}}{2}\right)N_{2}\geq(1+\mu_{2}^{\bullet})N_{2}=N_{2}+\overline{N}_{ 2}-kN_{2}\]
(recall here that we are assuming without loss of generality that \(N_{1}\geq N_{2}\geq\cdots\geq N_{k}\)), this implies that \(\prod N_{i}+1\geq\sum(N_{i}+\overline{N}_{i})-kN_{2}.\) Adding \(kN_{2}\) to both sides and multiplying both sides by \(1+\sum\mu_{i}^{\bullet}\) results in
\[\left(\prod N_{i}+1+kN_{2}\right)(1+\sum\mu_{i}^{\bullet})\geq(1+\sum\mu_{i}^ {\bullet})\sum(N_{i}+\overline{N}_{i})>(1+\sum\mu_{i}^{\bullet}+k)(\prod N_{ i}+1).\]
In the second inequality, we applied (10) which we may do since we began with the assumption that we have a counterexample to (9). After simplification, we get that \(N_{2}(1+\sum\mu_{i}^{\bullet})>\prod N_{i}\), and thus
\[\sum_{i\neq 2}N_{i}>1+\frac{1}{2}\sum N_{i}\geq 1+\sum\mu_{i}^{\bullet}>\prod_{ i\neq 2}N_{i}.\]
Since \(k\geq 6\), this is a clear contradiction to Eq. (11).
Having proven Claim 15, we are left with two cases to consider: \(\mu_{i}^{\bullet}=\frac{N_{i}}{2}\) for all \(1\leq i\leq k\), or \(\mu_{1}^{\bullet}=0\). It is easy to conclude in the former case, except when \(k=6\) and at least \(5\) values \(N_{i}\) are equal to \(2\), which has to be handled separately. See [https://github.com/StijnCambie/AvSubOrder_ktree/blob/main/2M-mu_j_large_case1.mw](https://github.com/StijnCambie/AvSubOrder_ktree/blob/main/2M-mu_j_large_case1.mw) for details.
The final remaining case is when \(\mu_{1}^{\bullet}=0.\) We obtain two new inequalities by multiplying Eq. (12) with \(\frac{k+1}{2}\) and Eq. (11) with \((1+\sum\mu_{i}^{\bullet})\frac{N_{1}}{6}\), and use that \(N_{1}=\max\{N_{i}\}\) and \(\mu_{i}^{\bullet}\leq\frac{N_{i}-1}{2}\).
\[\frac{5(k+1)}{6}(\prod N_{i}+1) \geq\frac{k+1}{2}\left(\sum_{2\leq i\leq k}N_{i}-2\right)\sum_{1 \leq i\leq k}N_{i}\geq(1+\sum\mu_{i}^{\bullet})\sum_{1\leq i\leq k}(1+k)N_{i}, \tag{13}\] \[\frac{\prod N_{i}}{6}(1+\sum\mu_{i}^{\bullet}) \geq(1+\sum\mu_{i}^{\bullet})\frac{N_{1}}{2}\sum_{2\leq i\leq k}N_{ i}\geq(1+\sum\mu_{i}^{\bullet})\sum_{2\leq i\leq k}N_{i}\mu_{i}^{\bullet}. \tag{14}\]
Summing these two inequalities, and noting that the sum of their left-hand sides is at most \((\prod N_{i}+1)(1+\sum\mu_{i}^{\bullet}+k)\), we conclude that Eq. (9) holds as a corollary of
\[(\prod N_{i}+1)(1+\sum\mu_{i}^{\bullet}+k)\geq(1+\sum\mu_{i}^{\bullet})\sum \left(N_{i}(1+k+\mu_{i}^{\bullet})\right).\]
We remark that by considering a suitable \(k\)-broom, one can show that Theorem 4 is sharp, as was also the case for trees.
## 6 Conclusion
This paper together with [13, Thm. 18 & 20] answers all of the open questions from [16] except for one, which was stated as rather general and open-ended:
**Problem 1**.: _For a given \(r\)-clique \(R\), \(1\leq r<k\), what is the (local) mean order of all sub-\(k\)-trees containing \(R\)?_
One natural version of this question is to consider the local mean sub-\(k\)-tree order over sub-\(k\)-trees that contain a fixed vertex. In this direction, we prove the following result, which can be considered as another monotonicity result related to [13, Thm. 23].
**Theorem 16**.: _Let \(T\) be a \(k\)-tree, \(k\geq 2\), \(C\) a \(k\)-clique of \(T\), and \(v\) a vertex in \(C\). Then \(\mu(T;v)<\mu(T;C).\)_
Proof.: The statement is trivially true if \(|T|\leq k+1.\) So assume that \(T\) with \(|T|\geq k+2\) is a minimum counterexample to the statement. Recalling the decomposition into trees \(T_{i,j}\) used earlier, note that all sub-\(k\)-trees containing \(v\) and not \(C\) are part of \(T_{i,j}\) for some \(i,j\). Without loss of generality, we can assume that \(\mu(T_{1,1};v)=\max_{i,j}\mu(T_{i,j};v).\) It suffices to prove that \(\mu(T_{1,1};v)<\mu(T;C)\). Taking into account (3), it is sufficient to consider the case where \(C\) is a simplicial \(k\)-clique of \(T\). Let \(u\) be the simplicial vertex of \(B_{1}\). Let \(T^{\prime}=T\setminus\{u\}\) and \(C^{\prime}=B_{1}\setminus\{u\}\).
**Claim 17**.: _We have \(\mu(T^{\prime};C^{\prime})<\mu(T;C)\)._
Proof.: Let \(R=R(T^{\prime};C^{\prime})\) and \(N=N(T^{\prime};C^{\prime}).\) We now need to prove that \(\mu(T;C)=\frac{2R+N+k}{2N+1}>\frac{R}{N}=\mu(T^{\prime};C^{\prime})\), which is equivalent to \(N+k>\frac{R}{N}.\) The latter is immediate since \(N\geq|T^{\prime}|-(k-1)\) and \(\frac{R}{N}=\mu(T^{\prime};C^{\prime})\leq|T^{\prime}|\). \(\diamondsuit\)
Since the sub-\(k\)-trees containing \(v\) are exactly those that contain \(C\), or sub-\(k\)-trees of \(T^{\prime}\) containing \(C^{\prime}\), or \(k\)-cliques within \(B_{1}\) different from \(C\), we conclude that \(\mu(T;v)<\mu(T;C)\).
Luo and Xu [13, Ques. 35] also asked if for a given order, a \(k\)-tree attaining the largest global mean sub-\(k\)-tree order is necessarily a caterpillar-type \(k\)-tree. In contrast with the questions in [16], this question is still open for trees. We prove that the local version (proven in [3, Thm. 3]), which states that for fixed order, the maximum is attained by a broom, is also true for the generalization to \(k\)-trees. This is almost immediate by observations from [16].
**Proposition 18**.: _If a \(k\)-tree \(T\) of order \(n\) and \(k\)-clique \(C\) of \(T\) attain the maximum possible value of \(\mu(T;C),\) then \(T\) has to be a \(k\)-broom with \(C\) being one of its simplicial \(k\)-cliques._
Proof.: Let \(T^{\prime}_{C}\) be the characteristic \(1\)-tree of \(T\) with respect to \(C.\) Then by [16, Lem. 11], \(\mu(T;C)=\mu(T^{\prime}_{C};C)+k-1\). Since \(T^{\prime}_{C}\) is a tree on \(n-(k-1)\) vertices, by [3, Thm. 3], the maximum local mean subtree order is attained by a broom \(B\). Since there is a \(k\)-broom \(T\) with \(C\) being a simplicial \(k\)-clique for which \(T^{\prime}_{C}\cong B\) with \(C\) as root, this maximum can be attained. Conversely, if \(\mu(T^{\prime}_{C};C)\) is maximized, then \(T^{\prime}_{C}\) is a broom where \(C\) is its root (and thus simplicial).
As such, we conclude that the \(k\)-tree variants of many results on the average subtree order for trees are now also proven. The analogue of [10, Ques. (7.5)] can be considered as the only question among them where the answer is slightly different for \(k\)-trees: in contrast with the case of trees (\(k=1\)), the maximum local mean sub-\(k\)-tree order cannot occur in a \(k\)-clique with degree \(2\) when \(k\geq 2\) (with one small exception).
## Acknowledgments
The authors would like to express their gratitude to the American Mathematical Society for making a research visit possible related to the Mathematics Research Community workshop "Trees in Many Contexts". This was supported by the National Science Foundation under Grant Number DMS 1916439. The first author has been supported by internal Funds of KU Leuven (PDM fellowship PDMT1/22/005). The third author is supported by the Swedish research council (VR), grant 2022-04030. The fourth author is supported by the National Institutes of Health (R01GM126554).
Open access statement. For the purpose of open access, a CC BY public copyright license is applied to any Author Accepted Manuscript (AAM) arising from this submission.
|
2309.14352 | Oil Spill Risk Analysis For The NEOM Shoreline | A risk analysis is conducted considering several release sources located
around the NEOM shoreline. The sources are selected close to the coast and in
neighboring regions of high marine traffic. The evolution of oil spills
released by these sources is simulated using the MOHID model, driven by
validated, high-resolution met-ocean fields of the Red Sea. For each source,
simulations are conducted over a 4-week period, starting from first, tenth and
twentieth days of each month, covering five consecutive years. A total of 48
simulations are thus conducted for each source location, adequately reflecting
the variability of met-ocean conditions in the region. The risk associated with
each source is described in terms of amount of oil beached, and by the elapsed
time required for the spilled oil to reach the NEOM coast, extending from the
Gulf of Aqaba in the North to Duba in the South. A finer analysis is performed
by segmenting the NEOM shoreline, based on important coastal development and
installation sites. For each subregion, source and release event considered, a
histogram of the amount of volume beached is generated, also classifying
individual events in terms of the corresponding arrival times. In addition, for
each subregion considered, an inverse analysis is conducted to identify regions
of dependence of the cumulative risk, estimated using the collection of all
sources and events considered. The transport of oil around the NEOM shorelines
is promoted by chaotic circulations and northwest winds in summer, and a
dominant cyclonic eddy in winter. Hence, spills originating from release
sources located close to the NEOM shorelines are characterized by large monthly
variations in arrival times, ranging from less than a week to more than two
weeks. Large variations in the volume fraction of beached oil, ranging from
less than 50\% to more than 80\% are reported. | HVR Mittal, Mohamad Abed El Rahman Hammoud, Ana K. Carrasco, Ibrahim Hoteit, Omar Knio | 2023-09-21T06:41:53Z | http://arxiv.org/abs/2309.14352v1 |
###### Abstract
A risk analysis is conducted considering an array of release sources located around the NEOM shoreline. The sources are selected close to the coast and in neighboring regions of high marine traffic. The evolution of oil spills released by these sources is simulated using the MOHID model, driven by validated, high-resolution met-ocean fields of the Red Sea. For each source, simulations are conducted over a 4-week period, starting from first, tenth and twentieth days of each month, covering five consecutive years. A total of 48 simulations are thus conducted for each source location, adequately reflecting the variability of met-ocean conditions in the region. The risk associated with each source is described in terms of amount of oil beached, and by the elapsed time required for the spilled oil to reach the NEOM coast, extending from the Gulf of Aqaba in the North to Duba in the South. To further characterize the impact of individual sources, a finer analysis is performed by segmenting the NEOM shoreline, based on important coastal development and installation sites. For each subregion, source and release event considered, a histogram of the amount of volume beached is generated, also classifying individual events in terms of the corresponding arrival times. In addition, for each subregion considered, an inverse analysis is conducted to identify regions of dependence of the cumulative risk, estimated using the collection of all sources and events considered. The transport of oil around the NEOM shorelines is promoted by chaotic circulations and northwest winds in summer, and a dominant cyclonic eddy in winter. Hence, spills originating from release sources located close to the NEOM shorelines are characterized by large monthly variations in arrival times, ranging from less than a week to more than two weeks. Similarly, large variations in the volume fraction of beached oil, ranging from less then 50% to more than 80% are reported. The results of this study provide key information regarding the location of dominant oil spill risk sources, the severity of the potential release events, as well as the time frames within which mitigation actions may need to deployed.
## Introduction
NEOM is a smart city being developed in the Tabuk province [1], Kingdom of Saudi Arabia. It is situated in the northwestern part of the Kingdom, with miles of Red Sea coastline. At its northernmost point, it is just 50 kilometers from the Jordanian port of Aqaba. NEOM development plans include establishing modern manufacturing facilities and industrial research and development, in addition to a hydrogen plant, a desalination plant and an international airport (see Fig. 1). Tourism facilities are also being developed along its coastal environment, which hosts diverse marine wildlife and coral reserves [1].
With an estimated 6.2 million barrels per day of crude oil and refined petroleum products transported through its main shipping lanes in 2018 [2], the Red Sea is one of the most active waterways in the world [3]. This poses a risk of accidental oil spills that may contribute to marine pollution, disrupt desalination operations, and consequently cause severe economic losses and irreversible damage to the environment [3, 4, 5, 6]. Therefore, a comprehensive analysis of the risk from accidental oil spill releases on coastal Red Sea regions is of paramount importance, particularly to minimize the potential impact on both the environment and industrial activities, and to plan emergency response and mitigation efforts in case of an accident.
Several studies have assessed the risk of oil spill accidents for different regions around the world. These encompassed the Mediterranean Sea [7, 8, 9, 10, 11], the southern Adriatic and northern Ionian (SANI) seas [12], Canadian waters [13], the Caribbean Sea [14], the coasts of Sicily [15] and the Bay of Bengal [16]. A few studies have investigated the risk of oil spills in specific regions of the Red Sea, namely the Egyptian coastlines [17], the Bashayer shorelines [18] and the Saudi Arabian-Yemeni coastlines [6]. Perianez [19] presented a Lagrangian model for the whole Red Sea. Mittal et al. [3] provided a broad assessment of oil spill hazards for the whole Red Sea, stemming from its main shipping lane along the longitudinal axis. A study pertinent to the risk analysis of oil spills for the NEOM shoreline is, however, still lacking; existing studies focusing on NEOM only address atmospheric conditions and air quality [20], geological assessment [21, 22] and wind energy assessment [23].
This study is part of an effort aimed at developing a fundamental understanding of the risk posed by possible oil release sources to the NEOM coastline, and consequently at establishing a knowledge base that can assist in the design of efficient strategies to safeguard its coastal environment from accidental oil spills. Specifically, a hazard analysis is conducted considering an array of 37 potential release sources located around the NEOM coastline in regions of high marine traffic (see Fig. 1). The risk associated with each source is described by the amount of oil beached following the initial release, and by the elapsed time required for the spilled oil to reach the NEOM coast. The evolution of the oil spill is simulated using the MOHID oil spill model [24, 25, 26, 27]. The model enables realistic, three-dimensional simulations of oil trajectories, accounting for weathering phenomena such as evaporation, dispersion, sedimentation, dissolution, and emulsification. Extensively-validated, high-resolution met-ocean fields [28] of the Red Sea are used to drive the oil spill model. For each release source, simulations are conducted over a 28-day period, starting from the first, tenth and twentieth days of each month, covering five consecutive years ranging from 2013 to 2017. A total of 180 simulations are thus conducted for each source, adequately reflecting the variability of met-ocean conditions in the region. In addition to characterizing the impact of individual sources, the simulation results are analyzed by segmenting the NEOM shoreline, extending from the Gulf of Aqaba in the North to Duba in the South, based on important coastal developments and installations. For each subregion, an inverse analysis is finally conducted to identify regions of dependence of the cumulative risk estimated using the collection of sources considered.
## Methods and Data
### Red Sea Met-Ocean Reanalysis
Met-ocean data are extracted from an extensively-validated reanalysis of the circulation in the Red Sea [28]. The simulated fields have been shown to suitably describe the general oceanic and atmospheric circulations of the Red Sea at the highest available resolution [28, 29, 30, 31]. The zonal and meridional winds were fetched from a 5 km regional atmospheric reanalysis generated using the Weather Research Forecasting (WRF) model assimilating all available regional observations [30, 31]. WRF initial and boundary conditions were acquired from the European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis Interim data [32] (ERA-I). The wave conditions [33] in the Red Sea were reconstructed using the WAVEWATCH III (WWIII) model forced with the aforementioned high-resolution WRF reanalysis winds [34] on a uniform grid of 1 km resolution.
The MIT general circulation model (MITgcm [35]) was implemented to simulate the 3D ocean currents on a grid with 1-km resolution in horizontal planes and 50 vertical layers. The model was forced using the aforementioned high-resolution WRF reanalysis fields and the Copernicus Marine Service Environment Monitoring Service (CMEMS) global ocean reanalysis fields [36] across the open-boundary in the Gulf of Aden at a 6 hourly and 24 hourly temporal frequency, respectively. The resulting MITgcm outputs for the Red Sea have been extensively employed to analyze the general and over-turning circulations [37, 38], internal/baroclinic tides [39], mesoscale eddies characteristics [40], deep-water formation events [41], temperature and salinity budgets [42] as well as the chlorophyll variability [43]. We refer readers to [28] for a more detailed description of the met-ocean conditions.
### Northern Red Sea Circulation
Mesoscale eddies [44, 45] play a dominant role in pollutant transport in the northern Red Sea region. A typical cyclonic eddy dominates the circulation during the winter season, and is characterized by a rotational velocity that is generally larger than that of the background flow [3]. These eddies tend to become more energetic during winter months following the development of intense baroclinic instabilities [44, 46], and they represent the dominant structures except for some strong semi-permanent wind-driven gyres that occur in summer [47].
The high mountain ranges on both sides of the Red Sea force the wind to blow along its axis [48]. During the summer season, from April till October, a northwest (NW) wind blows along the whole length of the Red Sea, with speeds close to 10 ms\({}^{-1}\), frequently exceeding 15 ms\({}^{-1}\)[34]. During winter, the same northerly wind dominates over the northern part of the basin. The narrower valleys along the eastern coasts of the Red Sea also create westward-blowing jets in the northern part, which generally last for 2-3 days with maximum speeds up to 15 ms\({}^{-1}\). The wave variability in the Red Sea is naturally associated with the dominant regional wind regimes [48]. Despite the moderate winds, their prolonged duration and the long fetch along the whole basin may generate waves as high as 3.5 m. The northwesterly winds, which prevail over the whole Red Sea during the summer months and over its northern part throughout the year, generate mean wave heights of 1 m-1.5 m in the north [48, 49].
### Oil Spill Model
The MOHID oil spill model was adopted to simulate the instantaneous release of oil and its evolution from fixed sources in the northern Red Sea. It relies on a Lagrangian formalism that considers oil as a collection of Lagrangian particles, associating with each particle a location and a set of oil properties [50, 51]. The Lagrangian particles are transported using the met-ocean conditions, and their properties are updated by solving empirical models describing physico-chemical transformations of the oil. Typically, these weathering processes result in changes in the oil's physical properties and also impact the oil slick's geometry. In the present study, dissolution and sedimentation processes were not considered, thus eliminating their effect on the oil mass balance.
However, evaporation, dispersion and emulsification were accounted for. Specifically, evaporation processes are described by the algorithms of Stiver and Mackay [52], whereas dispersion and emulsification processes are represented using the algorithms by Mackay et al. [53]. Finally, the influence of surface winds on the motion and deformation of the oil slick was incorporated using a wind coefficient of 3% [54].
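For illustration, the transport step of such a Lagrangian formulation can be sketched as follows. This is a highly simplified surface-drift step (forward Euler, 3% windage, no diffusion or weathering), written only to illustrate the description above, and should not be read as the actual MOHID implementation:

```python
import numpy as np

def advect_surface_particles(lon, lat, u_cur, v_cur, u_wind, v_wind,
                             dt=3600.0, wind_coeff=0.03):
    """One forward-Euler drift step for surface oil particles.

    Particles move with the surface current plus a fixed fraction
    (here 3%) of the wind, as in the windage coefficient quoted above.
    Diffusion, vertical motion and all weathering are ignored; this is
    only an illustrative sketch, not the MOHID implementation.
    """
    R = 6.371e6                       # Earth radius [m]
    deg = 180.0 / np.pi
    u = u_cur + wind_coeff * u_wind   # zonal particle velocity [m/s]
    v = v_cur + wind_coeff * v_wind   # meridional particle velocity [m/s]
    lat_new = lat + (v * dt / R) * deg
    lon_new = lon + (u * dt / (R * np.cos(np.radians(lat)))) * deg
    return lon_new, lat_new
```

A step of dt = 3600 s in this sketch would correspond to the transport time step used in the experimental setup described next.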
### Experimental Setup
As briefly discussed below, the present study adapts the setup presented in [3, 55] to the region surrounding NEOM. The computational domain covers the northern Red Sea region, extending across the longitudes 32\({}^{\circ}\) to 37\({}^{\circ}\) and latitudes 25\({}^{\circ}\) to 30\({}^{\circ}\) and up to a depth of approximately 2746 m. The domain is discretized using a computational mesh that is uniform in horizontal planes and non-uniform in the vertical direction. It uses 500 equally-spaced nodes along the longitudinal axis, 500 equally-spaced nodes along the latitudinal axis, and 50 layers in the vertical direction. The horizontal grid resolution is approximately 1 km.
From the met-ocean fields outlined above, the 3D ocean currents, surface winds, wave height and wave period for the years 2013 to 2017 were extracted and used as inputs to drive MOHID. The Lagrangian particle transport model and weathering processes were solved using time steps of size 3600 s and 60 s, respectively.
### Risk Quantification
The risks of individual oil spill sources are quantified in terms of the arrival times of oil particles, and the volume fractions of oil beached on the NEOM shorelines. The arrival times represent the minimum traveling time of oil particles from each release source to the NEOM shorelines. For each source, the volume fractions reflect the ratio of oil volume beached to the volume initially released. The arrival times are divided into four classes, namely \(<\) 7 days, 7-14 days, 14-30 days, and \(>\) 30 days (as a surrogate for no arrival during the simulation period). Similarly, the volumes beached are divided into four classes, namely \(>\) 50% of the initial release, \(25-50\%\), \(<\) 25%, and 0% (when no oil is beached). The results are illustrated using pie charts that depict the frequencies of the classes considered. When generated for individual months of the year, the charts represent the outcome of fifteen experiments, as three simulations per month are performed for the five consecutive years investigated.
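The classification described above can be encoded directly. The short Python sketch below is illustrative only, with the assignment at the exact class edges (7, 14 and 30 days; 25% and 50%) chosen as an assumption; it maps an arrival time in days and a beached volume fraction to the four classes used in the pie charts:

```python
# Illustrative mapping of an event's arrival time (days) and beached volume
# fraction to the four classes used in the pie charts. The assignment at the
# exact class boundaries is an assumption, not specified in the text.
def arrival_class(t_days):
    if t_days < 7:
        return "< 7 days"
    if t_days < 14:
        return "7-14 days"
    if t_days <= 30:
        return "14-30 days"
    return "> 30 days (no arrival)"

def volume_class(frac):
    if frac == 0:
        return "0%"
    if frac < 0.25:
        return "< 25%"
    if frac <= 0.50:
        return "25-50%"
    return "> 50%"
```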
A finer analysis is also conducted where, instead of considering the entire NEOM coastline, smaller segments (approximately 25-km wide) are considered around specific sites, namely The Line, Duba, Sharma, Gayal and Magna. For each site, a histogram of the volume fraction is generated showing, for each source and release event considered, the amount of volume beached classified (using colors) in terms of the corresponding arrival time class. The histograms provide key information regarding the severity of the potential release event, and the time frame within which mitigation actions need to be deployed to minimize the impact on coastal areas.
Finally, an aggregate probability of volume beached along a given shoreline (\(p_{i}\)) is computed as:
\[p_{i}=\frac{\sum_{j=1}^{15}\mathcal{V}_{i,j}}{\sum_{k=1}^{37}\sum_{j=1}^{15} \mathcal{V}_{k,j}}, \tag{1}\]
where \(\mathcal{V}_{i,j}\) is the fraction of volume beached from release location \(i\) for event \(j\), the index \(j\) enumerating the release times. The aggregate probability of volume beached measures the contribution of a given release source relative to all the release sources. This metric allows contrasting sources by ranking release sources based on their likely impact on the NEOM shoreline.
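Equation (1) amounts to normalizing the row sums of the matrix of beached volume fractions. A minimal Python sketch, assuming \(\mathcal{V}\) is stored as an array of shape (number of sources, number of events), is:

```python
import numpy as np

def aggregate_beaching_probability(V):
    """Eq. (1): V[i, j] holds the beached volume fraction for release
    source i and release event j (here 37 sources x 15 events); the
    result p[i] is source i's share of the total beached fraction."""
    V = np.asarray(V, dtype=float)
    return V.sum(axis=1) / V.sum()
```

Seasonal or monthly variants of the same metric (as in Figure 6) can presumably be obtained by restricting the event index to the corresponding release dates.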
## Results and Discussion
### Risk analysis for the NEOM shoreline
Figure 2 and Supplementary Figures S1-S2 illustrate pie charts representing the impact of fifteen release events occurring during the months from January to December. The pie charts depict, for each release source, the travel time needed by the oil particles to reach the NEOM shoreline as well as the volume fraction of oil beached at the end of the simulation period. Figure 3 and Supplementary Figure S3 depict the region of the NEOM shoreline affected by beached oil particles, 7, 14 and 21 days following the release. Particles originating from all release sources are used to generate these contours, thus illustrating the aggregate risk. Release events originating during the months of January, June, and October are used for this purpose.
Figure 2 and Supplementary Figure S1 indicate that spills originating from sources \(S_{35}-S_{37}\), which are located in the narrow Gulf of Aqaba and thus close to the shorelines, are characterized by short arrival times. Within one week from the onset of the spill, entire segments of NEOM shoreline adjoining the Gulf of Aqaba are generally impacted; this occurs for all scenarios except for a few releases occurring during the summer months. In the summer months, the prevailing southwards currents in the Gulf of Aqaba tend to push the oil slicks towards the Tiran and Sanfir islands. Therefore, some segments of shorelines, located north of Magna city, may be shielded. Within one week from the time of the spill, over 50% of the volume
of oil released by sources \((S_{35}-S_{37})\) may generally beach on the NEOM shore. This occurs over the whole year except for the month of June. In June, the volume fraction of oil released from source \(S_{35}\) that beaches on the NEOM shore is less than 25% by the end of the first week, but may rise to around 50% by the end of the third week following the onset of the spill.
The arrival times of oil particles originating from most of the sources in \(S_{4}-S_{8}\) are less than one week during the whole year except during the months of June (except \(S_{7}\)), September and October. The volume fractions of oil beached originating from sources \(S_{4}-S_{8}\) are less than 25% by the end of the first week, but may rise to greater than 50% within two weeks after the onset of the spill, during Jan-May, July and August. These volume fractions are seen to exceed 50% by the end of the first week of the onset during the months of November and December. This transport of spilled oil towards the NEOM shorelines is promoted by a cyclonic eddy that dominates the circulation in the Northern Red Sea region during the winter seasons.[3]
For the majority of release sources \(S_{19}-S_{29}\), located in the open waters and close to the Egyptian coast, the arrival times fall in the interval of two to three weeks from the onset, for the months of November-March and July. By the end of third week after the onset of the spill, the volume fractions of oil originating from these sources remain below 25%. During the remaining months, only a few of the sources \(S_{19}-S_{29}\) could impact the NEOM shorelines. Furthermore, the volume fraction of oil beached is less than 25%, with relatively longer arrival times of around four weeks or no beaching in some scenarios.
For sources \(S_{32}-S_{34}\), which are located in the Gulf of Suez, a measurable impact on the NEOM shoreline is only observed during the months of January-May and July. Beaching of oil originating from \(S_{32}\) is recorded after week one during February, within one-two weeks in March and in May, two-three weeks in January and July. Oil released from \(S_{33}\) impacts the NEOM shorelines within two-four weeks in May and from January-March. For \(S_{33}\), the arrival times fall within two-three weeks in January and three-four weeks in April and May. The volume fraction of oil released by sources \(S_{32}-S_{34}\) and beached on the NEOM shore remains less than 25% by the end of the fourth week, following the onset of the spill.
Figure 3 shows that the NEOM shoreline extending from The Line in the north to Duba in the South is impacted in its entirety during January to May, but during June to December some segments are not significantly impacted. Specifically, by the end of the third week after the onset of the spill, beaching on the shoreline between Sharma and The Line is not predicted during June and from September to October. Additionally, beaching on the shoreline between The Port of NEOM and The Line is not observed from May to September. The energetic meso- and submeso-scale circulations and northwesterly winds in the northern Red Sea region tend to split the oil slicks into different fragments.[3] These fragments are then transported in the opposite directions, towards both the Egyptian and Saudi Arabian shorelines, thereby sparing some segments between The Line and Sharma from beached oil during the months from June-December.
Figure 4 and Supplementary Figure S4 isolate the contributions of release sources \(S_{4}-S_{8}\), which lie inside the NEOM boundary and are closest to its coastline. For these sources, beaching on the shorelines adjoining the Gulf of Aqaba is not observed in June and from August to October. For the remaining months, a measurable impact is observed on the shorelines adjoining the Gulf of Aqaba, from oil particles originating from \(S_{4}\) (January-March and May), \(S_{5}-S_{6}\) (February), \(S_{7}\) (February-May, July and October-December) and \(S_{8}\) (February, March and May). A substantial impact on the NEOM shoreline extending from The Line to Sharma is observed from the oil particles originating from \(S_{4}\) (November), \(S_{5}\) (October-November) and \(S_{7}\) (October and December). Additionally, beaching of oil on the segment extending from the airport to Duba is not observed for \(S_{8}\) (January-May, November and December) and for \(S_{5}\) (January-August). Overall, the results indicate that individual sources near the coast may have severe impacts away from their location, as measured by the volume fraction of oil beached, and their impact may strongly depend on the seasonal variations of met-ocean conditions.
### Risk analysis for specific sites
The risk associated with the individual release sources is now analyzed for specific sites along the NEOM coast, namely The Line, Duba, Sharma, Gayal and Magna. Figures 5, 7 and Supplementary Figures S5-S9 plot the histograms of volume fractions for each source and release event considered, showing the amount of volume beached and the corresponding arrival time class (classified using colors), during the months from January to December. Figures 6, 8 and Supplementary Figures S10-S12 depict the (inverse) risk probabilities for each of the specific sites considered. These probabilities characterize the region of dependence of the spill risk, as estimated using Eq. 1.
### The Line
Figure 5 and Supplementary Figure S5 plot histograms of the volume fractions beached at the shorelines of The Line, where predictions from all the release sources and events are classified in terms of the corresponding arrival times. The histograms present a uni-modal distribution of the volume fractions with tails varying from approximately 10% to 80%. The spills originating primarily from sources \(S_{35}-S_{37}\) are characterized by the highest severity (low arrival times and high volume fractions) amongst other sources. During the months of April and from September-December, the volume fraction of oil released from source \(S_{35}\) and beached around The Line may rise to 85% by the end of the first week. The volume fraction of oil released from source \(S_{36}\) and beached around The Line is greater than 60% over the whole year except during April, June and October (greater than 90%). The volume fraction of oil released from source \(S_{37}\) is greater than 50% throughout the year, except
during the months of June and August (around 20%), by the end of first week. The prevailing northwards currents[3] towards The Gulf of Aqaba tend to quickly push oil released from \(S_{8}\) towards The Line; in March, the volume fraction may rise to more than 90%. However, the volume fractions remain less than 50% for the whole year except for March, June and September. The segments around The Line may be weakly affected by oil originating from \(S_{8}\) in June and September. Additional events having early arrival times are associated with \(S_{18}\) and \(S_{24}\), which are located close to the northern tip of The Red Sea between The Gulf of Aqaba and The Gulf of Suez (near Sharm El-Sheikh). Here, the transport of spilled oil towards The Line is promoted by the prevailing coastal currents, which dominate the circulation during the months from December to May. The arrival times fall within one-two weeks during these months. Specifically, the arrival time is less than one week during December, February, and April for \(S_{18}\), and during January and April for \(S_{24}\). Events with short arrival time (less than one week) are also associated with \(S_{5}\) (in March and October) with volume fractions of around 40%. However, very few sources among \(S_{9}-S_{34}\) are characterized by moderate arrival times (two to three weeks), and generally have low severity in terms of amount of beached oil (volume fractions less than 10%).
Figure 6 depicts the seasonal distribution of risk probabilities, estimated using Eq. 1 for oil beached around The Line. Sources \(S_{35}-S_{37}\), located in The Gulf of Aqaba, are responsible for the largest risk. The risk associated with \(S_{36}\) is the highest amongst \(S_{35}-S_{37}\) in spring, summer and autumn seasons, whereas the risk associated with \(S_{35}\) is highest in winter. The risk associated with the remaining sources is appreciably smaller than that observed for \(S_{35}-S_{37}\). In addition, the associated probabilities are very small, except possibly for sources \(S_{5}-S_{8}\) for which appreciable values may occur. Overall, the results of Figures 5 and 6 indicate that for The Line, the risk is primarily dominated by sources located in the Gulf of Aqaba, followed to a lower extent by sources located close to its shoreline.
Spills originating from sources located in the Gulf of Aqaba generally lead to severe events, with a large fraction of the oil released beaching within a short period (< 7 days) from the time of the release. Consistent with the histograms in Figure 5, sources located in the Red Sea and close to the NEOM shoreline may result in severe impact on The Line, but these events have low probability of occurrence, leading to small risk values reported in Figure 6.
### Duba
Figure 7 and Supplementary Figure S6 plot histograms of the volume fractions beached at the shorelines of Duba. In contrast to those corresponding to The Line, the results indicate that the shoreline surrounding Duba is vulnerable to sources located in the entire region facing its coast. This is reflected by the fact that multiple events with severe impacts are observed for sources \(S_{4}-S_{21}\), which are located in the open waters facing the NEOM coast. As expected, sources \(S_{4}-S_{8}\), which lie closest to the NEOM coastline are characterized by higher impacts and shorter arrival times than \(S_{9}-S_{21}\). Overall, sources \(S_{4}-S_{21}\) lead to events of various severity, and the histogram accordingly exhibits a large scatter over the corresponding segment. The Duba region appears to be less susceptible to sources lying in the Gulf of Suez, which are far away from the Duba region, and in the Gulf of Aqaba, except for \(S_{35}\) located at the tip of the Gulf which may result with low probability in a large fraction of oil beached near Duba.
Figure 8 illustrates the seasonal distribution of the aggregate probability of volume beached corresponding to oil spills that affect the Duba shoreline. As opposed to the Line's shoreline, which is primarily affected by the release sources in The Gulf of Aqaba, sources \(S_{4}-S_{12}\) and \(S_{14}-S_{15}\) are characterized by the highest aggregate probabilities of volume beached at the Duba shoreline, throughout the year. The aggregate probability of \(S_{4}\) is the highest in autumn season. Few of the sources located in The Gulf of Aqaba are characterized by insignificant probabilities (< 0.01) in the spring (\(S_{36}\)) and autumn (\(S_{36}-S_{37}\)) seasons. The majority of the sources (\(S_{26}-S_{34}\)) located farther from the Saudi coastline and closer to Egyptian coast or in the Gulf of Suez are characterized by the lowest probabilities throughout the year.
### Magna, Sharma and Gayal
For Magna, Sharma and Gayal, histograms of the volume fractions of oil beached and of risk distributions are shown in Supplementary Figures S7-S8 (Magna), S9-S10 (Sharma) and S11-S12 (Gayal). For the sake of brevity, the main takeaway findings are provided in this section.
The plots for Magna indicate similarities to those obtained for The Line, where Magna's shoreline is seen to be predominantly at risk from the release sources in the Gulf of Aqaba. These sources tend to be associated with the highest impact, with short arrival times and large volumes of oil beached. Furthermore, the results corresponding to Sharma and Gayal exhibit key similarities with those obtained for Duba. Specifically, the Sharma and Gayal shorelines are primarily vulnerable to the release sources nearest to the Saudi coast, with decreasing risk from the release sources located far from the Saudi coastline. The Gayal shoreline is generally protected from oil spills, which may be attributed to the nearby islands and the shape of its bay. In contrast, Sharma's coastline is more exposed to oil spills, where more moderate and high severity events are reported from the release sources lying in the first two rows facing the NEOM shoreline.
## Conclusion
We conducted a risk assessment associated with accidental oil spills from fixed sources on the NEOM shoreline, focusing in particular on key sites and installations. For each potential release site, oil spill simulations were conducted over a 28-day period, starting from the first, tenth and twentieth days of each month, over five consecutive years ranging from 2013 to 2017. The simulations were carried out using the MOHID's oil spill model, driven with validated high-resolution met-ocean fields of the Red Sea. The risk associated with each release event was characterized by the minimum travel time for an oil particle to reach the coast, and by the percentage of the total volume of oil released that was beached on the NEOM coast at the end of the simulation period.
The results indicate that spills originating in the Gulf of Aqaba are characterized by short arrival times and high volume fractions, making them the most hazardous to the NEOM shoreline. This occurs throughout the year except for the summer months, when the prevailing southwards currents in the Gulf of Aqaba tend to push the oil slicks towards the Tiran and Sanfir islands, which does not minimize their potential impact because these islands are key sites for tourism. Release sources located in the open water closest to the Saudi Arabian shoreline are generally associated with short arrival times, except during the months of September and October. These release sources impact NEOM's islands and the region connecting Sharma to Duba throughout the year. On the other hand, these release sources have weak impact on the NEOM shoreline lying in the Gulf of Aqaba, between June and December. Release sources located in the Gulf of Suez have a slight impact on the NEOM shoreline during the months of January, February and March. Finally, spills originating from release sources located in the open waters close to the Egyptian coast are characterised by moderate arrival times and low volume fractions, throughout the year.
The shorelines of Magna and The Line are subject to a similar response to the oil spill scenarios considered, where both were vulnerable to the release sources located in the Gulf of Aqaba. Moreover, release events south of Tiran and near Sanafir islands may have a significant impact on The Line's shore, particularly during winter and more so in spring. Duba, Sharma and Gayal's shorelines exhibit similar behavior in response to accidental oil spills from the sources considered. Specifically, release sources lying closest to the Saudi Arabian shoreline have the biggest impact on the shorelines of these sites. The releases are characterized by short arrival times and large fractions of volume beached. The adjacent release sources also exhibit a considerable impact, that is weaker during the Autumn months. These release events are typically associated with medium severity arrival times and fractions of volume beached. Finally, Duba, Sharma and Gayal's shorelines appear to be at low risk from accidental oil spill scenarios originating from release sources near the African shoreline during the summer and autumn seasons.
|
2310.20194 | Evolution of topological charge through chiral anomaly transport | Built upon the state-of-the-art model a multiphase transport (AMPT), we
develop a new module of chiral anomaly transport (CAT), which can trace the
evolution of the initial topological charge of gauge field created through
sphaleron transition at finite temperature and external magnetic field in heavy
ion collisions. The eventual experimental signals of chiral magnetic
effect(CME) can be measured. The CAT explicitly shows the generation and
evolution of the charge separation, and the signals of CME through the CAT are
quantitatively in agreement with the experimental measurements in Au+Au
collision at $\sqrt{s}=200 {\rm GeV}$, and the centrality dependence of the CME
fraction follows that of the fireball temperature. | Zilin Yuan, Anping Huang, Wen-Hao Zhou, Guo-Liang Ma, Mei Huang | 2023-10-31T05:39:38Z | http://arxiv.org/abs/2310.20194v1 | # Evolution of topological charge through chiral anomaly transport
###### Abstract
Built upon the state-of-the-art model a multiphase transport (AMPT), we develop a new module of chiral anomaly transport (CAT), which can trace the evolution of the initial topological charge of gauge field created through sphaleron transition at finite temperature and external magnetic field in heavy ion collisions. The eventual experimental signals of chiral magnetic effect(CME) can be measured. The CAT explicitly shows the generation and evolution of the charge separation, and the signals of CME through the CAT are quantitatively in agreement with the experimental measurements in Au+Au collision at \(\sqrt{s}=200\)GeV, and the centrality dependence of the CME fraction follows that of the fireball temperature.
_Introduction:_ The matter-antimatter asymmetry, or baryon number asymmetry of our universe (BAU), is tightly related to the topological \(\theta\) vacuum configurations of gauge fields. Electroweak baryogenesis through C- and CP-violating sphaleron transitions [1; 2; 3] is not enough to explain the measured asymmetry, which calls for additional sources of CP violation.
The \(\theta\) vacuum of the quantum chromodynamics (QCD) gauge field is characterized by an integer Chern-Simons number [4]; the tunneling transition through instantons or sphalerons across the energy barriers leads to non-conservation of the axial current
\[\partial^{\mu}j_{\mu}^{5}=2\sum_{f}m_{f}\langle\bar{\psi}i\gamma_{5}\psi\rangle _{A}-\frac{N_{f}g^{2}}{16\pi^{2}}F^{a}_{\mu\nu}\tilde{F}^{\mu\nu}_{a}, \tag{1}\]
and thus a chirality imbalance between right-handed and left-handed quarks is generated, \(N_{5}=N_{R}-N_{L}=\int d^{4}x\,\partial_{\mu}j_{5}^{\mu}\) [5; 6; 7; 8]. It has been proposed that local P- and CP-odd domains can be formed in heavy-ion collisions or in the early universe [9; 10; 11; 12]. In non-central heavy-ion collisions, a strong magnetic field with a maximum strength of about \(10^{14}\) T can be produced; for a system with a net chirality imbalance characterized by \(\mu_{5}\) for massless quarks with charge \(Q_{e}\), a charge current \(\mathbf{J}\) is produced along the direction of the magnetic field \(\mathbf{B}\), which is called the Chiral Magnetic Effect (CME) [13]. The CME charge current induces a separation of positively and negatively charged particles perpendicular to the reaction plane. Therefore, the chiral anomaly and the CME in the early stage turn into an observable effect of charge separation in the final freeze-out state [14]. An observation of the CME would verify this fundamental property of QCD, which would provide a natural solution to the baryon number asymmetry of our universe.
The main observables of CME is the charge azimuthal two-particle correlation:
\[\gamma=\left\langle\cos(\phi_{\alpha}+\phi_{\beta}-2\Psi_{RP})\right\rangle, \tag{2}\]
where \(\phi_{\alpha},\phi_{\beta},\Psi_{RP}\) denote the azimuthal angles of produced charged particles and the reaction plane, respectively, and \(\alpha\) and \(\beta\) represent either the positive or negative charges. A positive opposite-sign (OS) correlator and a negative same-sign (SS) correlator have been expected to occur for the CME. This feature of the CME has been observed by measurements of the correlator \(\gamma\) by the STAR Collaboration for Au + Au collisions at \(\sqrt{S_{NN}}=200\) GeV [15; 16] and by the ALICE Collaboration for Pb + Pb collisions at \(\sqrt{S_{NN}}=2.76\) TeV [17].
Difficulty in detecting CME signals is due, on the one hand, to the complexity of the backgrounds, including elliptic flow [18], resonance decays [19], and local charge conservation [20; 21]. Most of the sign-independent background can be effectively eliminated by calculating the difference between opposite-sign(OS) and same-sign(SS) \(\gamma\):
\[\Delta\gamma=\gamma_{os}-\gamma_{ss}. \tag{3}\]
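As an illustration, the correlators in Eqs. (2) and (3) can be evaluated directly from the final-state particle list. The following is a minimal, unoptimized pair-loop sketch in Python; the array names and the choice \(\Psi_{RP}=0\) (a fixed reaction plane, as in a model calculation) are illustrative assumptions and do not correspond to the actual CAT or experimental analysis code.

```python
import numpy as np
from itertools import combinations

def gamma_correlators(phi, charge, psi_rp=0.0):
    """Pair-averaged correlators of Eq. (2) for a single event.

    phi    : array of azimuthal angles of the final charged particles
    charge : array of the same length with entries +1 or -1
    psi_rp : reaction-plane angle (0 for a model with a fixed reaction plane)
    """
    os_terms, ss_terms = [], []
    for i, j in combinations(range(len(phi)), 2):
        c = np.cos(phi[i] + phi[j] - 2.0 * psi_rp)
        (os_terms if charge[i] * charge[j] < 0 else ss_terms).append(c)
    gamma_os = np.mean(os_terms) if os_terms else 0.0
    gamma_ss = np.mean(ss_terms) if ss_terms else 0.0
    return gamma_os, gamma_ss

# Eq. (3): average gamma_os and gamma_ss over events first, then take the
# difference to remove the charge-independent backgrounds.
```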
On the other hand, since the CME signal is mostly generated in the early stage of heavy-ion collisions, the final state interactions attenuate the initial CME signal [22]. Therefore, considerable experimental effort has been devoted to detecting or even extracting the CME signal [23; 24]. For example, recent measurements using the two-plane method [25; 26] have shown that the CME signal contributes no more than 10% to the total observable [27].
On the theoretical simulation side, the CME results from UrQMD and HIJING were inconsistent with the STAR experimental measurement [28], and the
anomalous-viscous fluid dynamics (AVFD) model [29; 30; 31] and the AMPT model [22; 32; 33] have been developed to interpret the data. The AMPT model has the advantage of simulating parton evolution and has been widely used in heavy-ion collisions. The anomalous-viscous fluid dynamics (AVFD) [30; 31] implements the dynamical CME transport in the realistic environment of a relativistically expanding viscous fluid and provides a quantitative link between the CME transport and the experimental signal. However, the AVFD is based on hydrodynamics and does not incorporate the signal from the non-equilibrium stage, during which the magnetic field is at its maximum. It has been summarized in [34] that the theoretical uncertainties originate from the initial distribution of axial charges, the evolution of the magnetic field, the pre-equilibrium dynamics, the hadronic phase and the freeze-out conditions. In this work, we provide a theoretical framework called chiral anomaly transport (CAT) to reduce these uncertainties.
_The Chiral Anomaly Transport (CAT) module:_ Built upon the state-of-the-art model AMPT [35], the CAT module can provide a quantitative connection between the initial topological charge and the eventual experimental signal by simulating the chiral anomalous transport in heavy ion collisions. The structure and flow chart of CAT is illustrated in Fig. 1.
AMPT consists of four parts: the initial distribution of quarks is sampled by the HIJING module from mini-jets and excited strings; the quark distribution is subsequently imported into the ZPC module to simulate quark evolution; hadrons are then produced from freeze-out quarks in the Quark Coalescence module; finally, the hadrons enter the hadron rescattering stage, which is modeled in the ART module and ultimately gives rise to the observable signals. For more details, see Refs. [35; 36].
As shown in Fig. 1, the CAT module, which incorporates the chiral kinetic equation, is developed to replace the ZPC module in the AMPT model. This modification allows for the simulation of the dynamical evolution of chiral quarks in the presence of a strong magnetic field within the expanding fireball. In this model, the program begins with a specified initial condition for the quark distribution and chirality imbalance. The initial quark distribution is generated by the HIJING module. Chiral quarks undergo dynamical evolution within a strong magnetic field, giving rise to a Chiral Magnetic Effect (CME) current. This current, in turn, results in charge separation within the Quark-Gluon Plasma (QGP). Eventually, after undergoing the Quark Coalescence and hadron rescattering stages, this charge separation manifests as a dipole term within the azimuthal angle distribution of positively and negatively charged hadrons. Such a dipole signal can be measured by the difference between same-sign (SS) and opposite-sign (OS) charged hadron pair correlations \(\Delta\gamma=\gamma_{os}-\gamma_{ss}\).
_Chiral kinetic equation:_ The core of CAT is to solve the chiral kinetic equation; for massless particles in a magnetic field it takes the following form [37; 38; 39; 40; 41]:
\[\left[\partial_{t}+\dot{\mathbf{x}}\cdot\nabla_{x}+\dot{\mathbf{p}}\cdot \nabla_{p}\right]f_{i}(\mathbf{x},\mathbf{p},t)=C[f_{i}]\,, \tag{4}\]
\[\sqrt{G}\dot{\mathbf{x}}=\mathbf{v}+q_{i}\mathbf{B}(\mathbf{v}\cdot\mathbf{b}),\ \sqrt{G}\dot{\mathbf{p}}=q_{i}\mathbf{v}\times\mathbf{B}. \tag{5}\]
where \(\mathbf{x}\) and \(\mathbf{p}\) represent the position and momentum vectors respectively, \(q_{i}\) is the quark charge, \(\mathbf{b}=h\frac{\mathbf{p}}{2|\mathbf{p}|^{3}}\) denotes the Berry curvature, where \(h=\pm 1\) represents the helicity, and \(\mathbf{B}\) stands for the magnetic field. The factor \(\sqrt{G}=1+q_{i}\,\mathbf{B}\cdot\mathbf{b}\). In the above, \(f_{i}\) represents the distribution function of a specific quark species denoted by \(i=(\mathrm{flavor},h)\), where the flavor can be \(u\), \(d\) or \(s\). In the CAT module, the distribution function \(f_{i}\) is discretized as a sum over \(N_{i}\) point particles (partons), i.e. \(f_{i}(\mathbf{x},\mathbf{p},t)\approx\sum_{j=1}^{N_{i}}\delta(\mathbf{x}- \mathbf{x}_{j}(t))\delta(\mathbf{p}-\mathbf{p}_{j}(t))\). In the classical method, the properties of a point particle are expressed by its phase space coordinates \((\mathbf{x}_{j},\mathbf{p}_{j})\). However, for chiral fermions, the phase space should be extended to include the helicity or chirality parameter, \((\mathbf{x}_{j},\mathbf{p}_{j},h_{j})\). As done in the CAT module, we employ this particle simulation method to numerically solve the kinetic equation and model the evolution of chiral quarks under a strong magnetic field.
CAT solves the kinetic equation, Eq. (4), step by step from initialization. In each time interval, CAT firstly selects the eligible partons that have formed but have not yet frozen out. Secondly, CAT updates the position
Figure 1: _(Color online) Illustration of the structure of the chiral anomaly transport model based on the AMPT model. The initial nucleons and partons come from HIJING. The main part of CAT corresponds to the parton cascade, which includes the magnetic field, the axial charge and the chiral kinetic theory. All freeze-out partons combine into hadrons, and the hadron rescattering is based on the ART model. The CME signal is calculated from the final particles._
and momentum of partons based on the equations of motion in Eq. (5). These equations are valid only when \(|p|^{2}\gg eB\). To address this, we set a momentum threshold; when the square of the momentum is smaller than twice the magnetic field strength (i.e. \(|p|^{2}<2eB\)), the quantum correction terms in Eq. (5) are omitted, and the partons behave as classical particles subjected to the Lorentz force. Thirdly, the momentum of partons is updated once again due to collision effects. In CAT, we model binary elastic collisions with a cross-section of \(\sigma=3\) mb. A collision between two partons occurs when their distance is smaller than the radius corresponding to this cross section, i.e. \(d<\sqrt{\sigma/\pi}\), and their momenta are updated after the collision. At the end of each time interval, eligible partons will freeze out, while the remaining partons proceed to the next time interval. This loop continues until it reaches the largest freeze-out time. Afterward, all the partons will be sent to the quark coalescence process.
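To make the update step concrete, the following minimal sketch advances a single parton by one Euler step according to Eq. (5), including the momentum threshold described above. Variable names, the simple Euler integrator and the form \(\sqrt{G}=1+q_{i}\mathbf{B}\cdot\mathbf{b}\) are illustrative assumptions; this is not the actual CAT implementation, which also handles formation times, collisions and freeze-out.

```python
import numpy as np

def push_parton(x, p, q, h, eB, dt):
    """One Euler step of the equations of motion (5) for a massless parton.

    x, p : position and momentum 3-vectors (numpy arrays)
    q    : electric charge of the parton (in units of e)
    h    : helicity, +1 or -1
    eB   : magnetic-field 3-vector eB(t, x)
    """
    pmag = np.linalg.norm(p)
    v = p / pmag                          # massless: v = p / |p|
    if pmag ** 2 < 2.0 * np.linalg.norm(eB):
        # below the threshold |p|^2 < 2eB: drop the quantum corrections,
        # keep only the classical Lorentz force
        xdot = v
        pdot = q * np.cross(v, eB)
    else:
        b = h * p / (2.0 * pmag ** 3)     # Berry curvature
        sqrtG = 1.0 + q * np.dot(eB, b)   # assumed form of the factor sqrt(G)
        xdot = (v + q * eB * np.dot(v, b)) / sqrtG
        pdot = q * np.cross(v, eB) / sqrtG
    return x + xdot * dt, p + pdot * dt
```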
In this kinetic transport process, the charged chiral particles tend to move along the magnetic field, with the direction of motion determined by their charge and helicity. We therefore expect charge separation to occur in the presence of the chiral anomaly and, for a given charge, the helicities to split in space automatically. Note, however, that a quark and its anti-quark with the same helicity have opposite chirality and opposite charges, leading to opposite splittings. The possibility of a localized chemical potential and chirality depends on the quark distribution.
_Initial and evolution of magnetic field:_ Following Ref [42], the initial electromagnetic field at \(t_{0}=0\) fm is calculated from spectator protons in HIJING as below:
\[\begin{split} e\mathbf{E}_{0}(r)&=\alpha_{EM}\sum _{n}Z_{n}\frac{R_{n}(1-\mathbf{v}_{n}^{2})}{(\mathbf{R}_{n}^{2}-[\mathbf{v}_{ n}\times\mathbf{R}_{n}]^{2})^{3/2}},\\ e\mathbf{B}_{0}(r)&=\alpha_{EM}\sum_{n}Z_{n}\frac {\mathbf{v}_{n}\times\mathbf{R}_{n}(1-\mathbf{v}_{n}^{2})}{(\mathbf{R}_{n}^{2 }-[\mathbf{v}_{n}\times\mathbf{R}_{n}]^{2})^{3/2}},\end{split} \tag{6}\]
where \(\mathbf{R}_{n}=\mathbf{r}-\mathbf{r}_{n}^{\prime}\) is the relative position vector from a field point \(\mathbf{r}\) to a source point \(\mathbf{r}_{n}^{\prime}\) at the initial time \(t_{0}\), and \(\alpha_{EM}\) is the EM fine-structure constant, defined as \(\alpha_{EM}=e^{2}/4\pi\approx 1/137\). It should be noticed that the direction of the magnetic field in CAT is opposite to the y-axis, due to the initial condition of the target and projectile nucleons.
The magnetic field in the parton cascade stage evolves with time [43; 44] as follows,
\[e\mathbf{B}(t,\mathbf{x})=\frac{e\mathbf{B}(0,\mathbf{x})}{1+(\frac{t}{\tau_ {B}})^{2}}, \tag{7}\]
where \(\tau_{B}=0.4\) fm is an effective lifetime of the magnetic field.
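A minimal sketch of Eqs. (6) and (7) is given below: the initial field is accumulated over the spectator protons supplied by HIJING and then damped in time with \(\tau_{B}=0.4\) fm. Function and variable names are illustrative assumptions, and no unit conversion is included.

```python
import numpy as np

ALPHA_EM = 1.0 / 137.0

def eB_initial(r, r_spec, v_spec, Z_spec):
    """Initial magnetic field eB(0, r) of Eq. (6) summed over spectator protons.

    r      : field point (3-vector)
    r_spec : (N, 3) array of spectator positions at t0 = 0 fm
    v_spec : (N, 3) array of spectator velocities (in units of c)
    Z_spec : length-N array of charges (1 for protons)
    """
    eB = np.zeros(3)
    for rp, v, Z in zip(r_spec, v_spec, Z_spec):
        R = r - rp
        vxR = np.cross(v, R)
        denom = (np.dot(R, R) - np.dot(vxR, vxR)) ** 1.5
        eB += ALPHA_EM * Z * vxR * (1.0 - np.dot(v, v)) / denom
    return eB

def eB_at_time(eB0, t, tau_B=0.4):
    """Time dependence of Eq. (7) with the effective lifetime tau_B = 0.4 fm."""
    return eB0 / (1.0 + (t / tau_B) ** 2)
```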
_Chirality, helicity and initial axial charge:_ It should be noted that the helicity \(h=\pm\) and chirality \(\chi=R/L\) are the same for quarks but opposite for anti-quarks. Their densities are denoted as follows: \(n_{R}=n_{+},n_{L}=n_{-},\bar{n}_{R}=\bar{n}_{-},\bar{n}_{L}=\bar{n}_{+}\). The net particle density \(n=\left\langle\bar{\psi}\gamma^{0}\psi\right\rangle=(n_{R}-\bar{n}_{R})+(n_{L}-\bar{n}_{L})\) can be expressed as \(n=(n_{+}-\bar{n}_{-})+(n_{-}-\bar{n}_{+})\). The net chirality density \(n_{5}=\left\langle\bar{\psi}\gamma^{0}\gamma^{5}\psi\right\rangle=(n_{R}-\bar{n}_{R})-(n_{L}-\bar{n}_{L})\) can be expressed as \(n_{5}=(n_{+}+\bar{n}_{+})-(n_{-}+\bar{n}_{-})=\left\langle h\right\rangle\). For convenience, we only track the helicity of quarks and anti-quarks in the program. At the initial stage, \(N_{+}=(N_{\rm total}+N_{5})/2\) particles are randomly selected as positive helicity, and the remaining \(N_{-}=(N_{\rm total}-N_{5})/2\) particles are assigned as negative helicity, where \(N_{5}=\int d^{3}x\,n_{5}\) represents the total chirality.
The initial local chirality density \(n_{5}=\mu_{5}^{3}/(3\pi^{2})+(\mu_{5}T^{2})/3\), with \(T\) the local temperature and chiral chemical potential \(\mu_{5}\), can be either positive or negative with equal chance from event to event. \(\mu_{5}\) can be calculated through the sphaleron transition rate, \(\mu_{5}=\sqrt{3\pi}\sqrt{\left(320N_{f}^{2}\Gamma_{ss}/T^{2}-T^{2}/3\right)}\) [45]. At finite temperature and under a magnetic field, the sphaleron transition rate has the form \(\Gamma_{ss}(B,T)=\frac{(g_{s}^{2}N_{c})^{2}}{384\sqrt{3}\pi^{5}}\left(eBT^{2}+15.9T^{4}\right)\) in Ref. [46], and the chiral chemical potential can be roughly estimated as \(\mu_{5}\sim\left(aT+b\sqrt{eB}\right)\). It should be emphasized that \(\mu_{5}\) can be positive or negative in local domains created event-by-event in heavy-ion collisions; one should understand \(\mu_{5}\) here as the average variance \(\sqrt{\left\langle\mu_{5}^{2}\right\rangle_{\rm event}}\). Similarly, the chiral charge number \(N_{5}\) in the CAT module is equivalent to the average variance of the chiral charge number over all events defined in [31], specifically \(N_{5}=\sqrt{\left\langle N_{5}^{2}\right\rangle_{\rm event}}\), which remains positive and constant for each event.
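The two ingredients above can be sketched as follows: the chiral chemical potential is evaluated from the sphaleron rate exactly as quoted in the text, and the total chirality \(N_{5}\) is then imposed by randomly assigning helicities. The coupling \(g_{s}^{2}\) and the flavor and color numbers used as defaults are illustrative input values assumed for this sketch, not values fixed by the CAT module.

```python
import numpy as np

def mu5_from_sphaleron(T, eB, Nf=3, Nc=3, gs2=4.0):
    """Chiral chemical potential from the sphaleron rate, as quoted in the text.

    T and eB are taken in GeV and GeV^2; Nf, Nc and gs2 = g_s^2 are
    illustrative inputs.
    """
    Gamma_ss = (gs2 * Nc) ** 2 / (384.0 * np.sqrt(3.0) * np.pi ** 5) \
               * (eB * T ** 2 + 15.9 * T ** 4)
    return np.sqrt(3.0 * np.pi) * np.sqrt(320.0 * Nf ** 2 * Gamma_ss / T ** 2
                                          - T ** 2 / 3.0)

def assign_helicities(n_partons, N5, rng=None):
    """Randomly mark N_+ = (N_total + N_5)/2 partons with helicity +1, the rest -1."""
    rng = rng or np.random.default_rng()
    n_plus = int(round((n_partons + N5) / 2.0))
    h = -np.ones(n_partons, dtype=int)
    h[rng.choice(n_partons, size=n_plus, replace=False)] = 1
    return h
```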
_Results:_ The background of the CME is known to be proportional to the ratio of elliptic flow to multiplicity, i.e. \(\Delta\gamma\sim-v_{2}/N\). In order to reproduce a reliable background, we first calculate the elliptic flow \(v_{2}\{2\}\) of charged particles using the two-particle correlation method [47]. Fig. 2 shows the centrality dependence of the elliptic flow of charged particles within the kinematic cut \(0.15<p_{T}<2\) GeV/c and \(|\eta|<1\), where our definition of centrality is the same as in experiments, using the reference multiplicity distribution. The CAT result is in good agreement with the STAR data. On the other hand, the \(p_{T}\) distributions of multiplicity are calculated within the pseudorapidity window of \(|\eta|<0.5\). Fig. 3 shows the CAT results on the \(p_{T}\) distributions for different centrality bins, in comparison with the STAR data. We find that our results can describe the experimental data well. Therefore, we believe that our CAT model provides a reliable description of the CME background. We also check that if we switch off the magnetic field or \(\mu_{5}\), the elliptic flow and the multiplicity distributions are almost unchanged.
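For reference, the two-particle elliptic flow used here can be computed with the standard \(Q\)-vector (two-particle cumulant) method. The sketch below is a minimal, unit-weight version that ignores non-flow subtraction and track weights; the input is a hypothetical list of per-event azimuthal-angle arrays within the quoted kinematic cuts.

```python
import numpy as np

def v2_two_particle(events_phi):
    """Two-particle elliptic flow v2{2} from the Q-vector method.

    events_phi : list of 1D arrays of azimuthal angles (one array per event),
                 already restricted to 0.15 < pT < 2 GeV/c and |eta| < 1.
    """
    num, den = 0.0, 0.0
    for phi in events_phi:
        M = len(phi)
        if M < 2:
            continue
        Q2 = np.sum(np.exp(2j * np.asarray(phi)))
        num += np.abs(Q2) ** 2 - M    # sum over pairs of cos 2(phi_i - phi_j)
        den += M * (M - 1)
    c2 = num / den                    # two-particle cumulant <<cos 2(phi_i - phi_j)>>
    return np.sqrt(c2)
```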
The chiral anomaly and the magnetic field can induce a charge current through the chiral magnetic effect. In Fig. 4, we present the results from the CAT simulation with both the magnetic field and the initial net chirality, which shows the time evolution of the net-charge density distribution for the charged quarks within the range \(|\eta|<1\) in the transverse plane. This simulation
demonstrates the generation of the CME-induced charge separation along the direction of the magnetic field due to chiral anomaly transport. Initially, the net-charge density increases and then slowly decreases with the decay of the magnetic field and the expansion of the QGP. We have verified that this charge separation disappears when the chiral magnetic effect cannot occur, which happens when the chiral chemical potential equals zero.
The generation of the chiral magnetic effect contributes to the total two-particle correlation \(\gamma_{total}\). On the other hand, the chiral magnetic effect does not occur in the absence of chiral anomaly. Therefore, only the background \(\gamma_{bkg}\) can be obtained without the chiral anomaly, which is the total background including the flow background, the non-flow background, and the effect of the Lorentz force. Fig. 5 shows the centrality dependences of the opposite-sign correlation \(\gamma_{os}\), the same-sign \(\gamma_{ss}\) correlation and the correlation difference \(\Delta\gamma=\gamma_{os}-\gamma_{ss}\) in Au+Au collisions at 200 GeV, compared with the AMPT model result [22; 32] and STAR data [28]. We observe that the CAT model with the chiral anomaly outperforms the other cases in describing the STAR data. In this optimal case, we take
\[\mu_{5}=(2.1T+\sqrt{eB})\,\mathrm{GeV}, \tag{8}\]
and its magnitude is consistent with the effective chiral chemical potential induced by axial-vector interactions [48; 49]. The chiral anomaly influences both the same-sign and opposite-sign correlations, with different significance for each centrality. For instance, in the \(20-50\%\) centrality range, \(\gamma_{ss}\) increases with \(n_{5}\), while \(\gamma_{os}\) remains almost unchanged. We notice that the correlations are more sensitive to the axial anomaly in the middle centrality range \(20-50\%\), while in the \(60-70\%\) centrality the difference is negligible within the error. Compared to the previous
Figure 4: _(Color online) The CAT results on the distributions of net-charge density in the transverse plane at t= 0.3 fm, 0.9 fm, 1.5 fm, 3.0 fm, respectively, during the partonic evolution at b= 8 fm._
Figure 3: _(Color online) The CAT results (curves) on \(p_{T}\) distributions of \(\frac{1}{2\pi p_{T}}\frac{d^{2}N}{dp_{T}dq}\) for multiplicity of charged hadron of \(|\eta|<0.5\) for different centralities in Au+Au collisions at \(200\,\mathrm{GeV}\), compared with the STAR data (circles)._
Figure 2: _(Color online) The CAT result (curve) on the centrality dependence of two-particle \(v_{2}\) for charged hadrons in Au+Au collisions at 200 GeV, compared with the STAR data (circles)._
AMPT study, we observe that the CME leads to a nearly zero change of \(\gamma_{ss}\) in the 0-50% centrality range and a rapid increase of \(\gamma_{os}\) in 50-70%, which is more consistent with the STAR data. In general, we observe a difference in the \(10-70\%\) centrality range with and without the chiral anomaly, indicating the presence of the CME signal.
In order to extract the contribution of the CME signal to the total \(\gamma\) correlation, the CME signal fraction is defined as
\[f_{CME}=\frac{\Delta\gamma_{CME}}{\Delta\gamma_{total}}=\frac{\Delta\gamma_{ total}-\Delta\gamma_{bkg}}{\Delta\gamma_{total}}. \tag{9}\]
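Numerically, Eq. (9) is evaluated from the two simulation runs described above (with and without the chiral chemical potential). A trivial helper might look as follows, where the argument names are illustrative.

```python
def cme_fraction(delta_gamma_total, delta_gamma_bkg):
    """CME signal fraction of Eq. (9): the part of the total Delta-gamma that
    vanishes when the chiral chemical potential is set to zero."""
    return (delta_gamma_total - delta_gamma_bkg) / delta_gamma_total
```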
Fig. 6 shows the CAT result on the centrality dependence of the CME fraction \(f_{CME}\), compared with the STAR data [27]. The CME fraction decreases from central to peripheral collisions, in good agreement with the recent STAR measurement. We observe an increase in the CME signal from central to peripheral collisions, but a decrease in the CME signal fraction, indicating a greater background in peripheral collisions. We also notice that the centrality dependence of the CME fraction \(f_{CME}\) follows that of the temperature.
_Conclusion:_ Based on the AMPT model, we develop a new CAT module to solve the chiral kinetic equation in order to trace the evolution of the initial topological charge under a magnetic field and the separation of charged particles induced by the CME, thus providing a quantitative connection between the initial topological charge and the eventual experimental signal of the CME in heavy-ion collisions.
For Au+Au collisions at 200 GeV, the magnetic field is calculated from the HIJING module and the initial chirality imbalance is given as \(\mu_{5}=(2.1T+\sqrt{eB})\,\mathrm{GeV}\); the CAT module then solves the evolution of the chiral anomaly with the decay of the magnetic field. The CAT simulation shows the development of the CME-induced charge separation along the direction of the magnetic field due to chiral anomaly transport, and the results for the OS correlator \(\gamma_{os}\) and the SS correlator \(\gamma_{ss}\), as well as their difference \(\Delta\gamma\), are quantitatively consistent with the STAR measurement. The CME background \(\gamma_{bkg}\) is taken as the result of the case without the initial chiral anomaly, and the CME signal fraction \(f_{CME}\) decreases from central to peripheral collisions, in good agreement with the recent STAR measurement. It is observed that the OS/SS correlators of the CME signal increase from central to peripheral collisions while the CME signal fraction decreases, and the centrality dependence of the CME fraction \(f_{CME}\) follows that of the temperature.
As a next step, we will investigate the evolution of the chiral anomaly in isobar collisions and in Au+Au collisions at low energies.
###### Acknowledgements.
This work is supported in part by the National Natural Science Foundation of China (NSFC) Grant Nos: 12235016, 12221005, 12147150, 12375121, 12205309, 12325507, 12147101 and the Strategic Priority Research Program of Chinese Academy of Sciences under Grant No XDB34030000, the start-up funding from University of Chinese Academy of Sciences (UCAS), the Fundamental Research Funds for the Central Universities, and Natural Science Basic Research Program of Shaanxi (Program No. 2023-JC-QN-0267).
Figure 5: (Color online) The CAT results on the centrality dependences of the correlations of opposite-sign \(\gamma_{os}\), same-sign \(\gamma_{ss}\) and the difference \(\Delta\gamma=\gamma_{os}-\gamma_{ss}\) in Au+Au collisions at 200 GeV, compared with the STAR data (stars) and the AMPT result (dotted curves), where the different colors represent different correlations. The CAT results are calculated with and without the chiral chemical potential \(\mu_{5}\), shown by solid and dashed curves, respectively.
Figure 6: The CAT result (circles) on the centrality dependence of the CME fraction \(f_{CME}\), compared with the STAR data (stars) [27] and the centrality dependence of the average temperature of the fireball (line). |
2301.00263 | On Bochner's Almost-periodicity criterion | We give an extension of Bochner's criterion for the almost periodic
functions. By using our main result, we extend two results of A. Haraux. The
first is a generalization of Bochner's criterion which is useful for periodic
dynamical systems. The second is a characterization of periodic functions in
term of Bochner's criterion. | Philippe Cieutat | 2022-12-31T18:38:25Z | http://arxiv.org/abs/2301.00263v1 | # On Bochner's Almost-Periodicity Criterion
###### Abstract.
We give an extension of Bochner's criterion for the almost periodic functions. By using our main result, we extend two results of A. Haraux. The first is a generalization of Bochner's criterion which is useful for periodic dynamical systems. The second is a characterization of periodic functions in term of Bochner's criterion.
Universite Paris-Saclay, UVSQ, CNRS, Laboratoire de mathematiques de Versailles, 78000, Versailles, France. E-mail address: philippe.cieutat@uvsq.fr
**2020 Mathematic Subject Classification:** 35B10, 35B40, 42A75, 47H20.
**Keywords:** Bochner almost periodicity, periodic function, almost periodic function, asymptotically almost periodic function, nonlinear semigroup, periodic dynamical system.
## 1. Introduction
The almost periodic functions in the sense of Bohr have been characterized by Bochner by means of a compactness criterion in the space of the bounded and continuous functions [2, 3]. The Bochner's criterion plays an essential role in the theory and in applications. We give a new almost-periodicity criterion for functions with values in a given complete metric space which is useful to study the almost periodicity of solutions of dynamical systems governed by a family of operators with a positive parameter. This criterion is an extension of Bochner's criterion. Then Haraux gave a generalization of Bochner's criterion [9, Theorem 1], called _a simple almost-periodicity criterion_ which is useful for periodic dynamical systems. From our result, we deduce an extension of this criterion. We also obtain an extension of an other result of Haraux which characterizes the periodic functions in terms of the Bochner's criterion [8]. In the same spirit, we treat the asymptotically almost periodic case.
We give a description of this article; the precise definitions will be given in Section 2. Throughout this section \((X,d)\) is a complete metric space. An almost periodic function \(u:\mathbb{R}\to X\) in the sense of Bohr is characterized by the Bochner's criterion, which is the following: \(u\)_is bounded and continuous, and from any sequence of real numbers \((\tau_{n})_{n}\), there exists a subsequence \((\tau_{\phi(n)})_{n}\) such that the sequence of functions \((u(t+\tau_{\phi(n)}))_{n}\) is uniformly convergent on \(\mathbb{R}\)_. In Section 3, we give two extensions of Bochner's criterion. First, \(u:\mathbb{R}\to X\) is almost periodic if and only if in the Bochner's criterion, we impose that the terms of the sequence of real numbers \((\tau_{n})_{n}\) are all _positive_. Second, \(u:\mathbb{R}\to X\) is almost periodic if and only if in the Bochner's criterion, the convergence of the subsequence of functions \((u(t+\tau_{\phi(n)}))_{n}\) is uniform only on \([0,+\infty)\). These improvements are useful to study the almost periodicity of solutions of an evolution equation governed by a family of operators with a positive parameter, in particular for a \(C_{0}\)-semigroup of linear operators or, more generally, for an autonomous dynamical system (nonlinear semigroup).
From our extension of Bochner's criterion, we give new proofs which are direct and simpler on known results on the almost periodicity of solutions of autonomous dynamic systems.
Haraux gave a generalization of Bochner's criterion called _a simple almost-periodicity criterion_ [9, Theorem 1]. This criterion makes it possible to choose, in the Bochner's criterion, the sequence of real numbers \((\tau_{n})_{n}\) in a set of the type \(\omega\mathbb{Z}\), which is very useful for periodic dynamical systems. From our extension of Bochner's criterion, in Section 4, we deduce an improvement of this result. An asymptotically almost periodic function \(u:\mathbb{R}^{+}\to X\) is a perturbation of an almost periodic one. Such a function is characterized by a property of the type of the Bochner's criterion. In the same spirit, we extend this characterization of asymptotically almost periodic functions. Then we apply these results to study the almost periodicity of solutions of periodic dynamical systems.
Bochner's criterion can also be expressed in terms of the relative compactness of the set \(\{u(\cdot+\tau);\tau\in\mathbb{R}\}\) in a suitable set of continuous functions. A periodic function is a special case of an almost periodic function. A direct consequence of [8, Proposition 2] given by Haraux characterizes a periodic function in terms of the Bochner's criterion. This characterization is the following: a continuous function \(u:\mathbb{R}\to X\) is periodic if and only if the set \(\{u(\cdot+\tau);\tau\in\mathbb{R}\}\) is compact. In Section 5, by using our improvement of Bochner's criterion, we give an extension of the Haraux's characterization of periodic functions. We will also give a result on asymptotically periodic functions of the type of the Haraux result described above. Then we apply these results to study the periodicity of solutions of autonomous dynamical systems.
## 2. Notation
Let us now give some notations, definitions and properties which will be used.
Throughout this section \((X,d)\) is a complete metric space. \(\mathbb{R}\), \(\mathbb{Z}\) and \(\mathbb{N}\) stand respectively for the real numbers, the integers and the natural integers. We denote by \(\mathbb{R}^{+}:=\{t\in\mathbb{R};t\geq 0\}\). Let \(E\) be a topological space. We denote by \(C(E,X)\) the space of all continuous functions from \(E\) into \(X\). When \(J=\mathbb{R}\) or \(J=\mathbb{R}^{+}\), we denote by \(BC(J,X)\) the space of all bounded and continuous functions from \(J\) into \(X\) equipped with the sup-distance, denoted by \(d_{\infty}(u,v):=\sup\limits_{t\in\mathbb{R}}d(u(t),v(t))\) when \(J=\mathbb{R}\) and \(d_{\infty,+}(u,v):=\sup\limits_{t\geq 0}d(u(t),v(t))\) when \(J=\mathbb{R}^{+}\) for \(u\), \(v\in BC(J,X)\). The metric spaces \((BC(\mathbb{R},X),d_{\infty})\) and \((BC(\mathbb{R}^{+},X),d_{\infty,+})\) are complete.
We now give some definitions and properties on almost periodic, asymptotically almost periodic functions with values in a given complete metric space.
A subset \(D\) of \(\mathbb{R}\) (respectively of \(\mathbb{R}^{+}\)) is said to be _relatively dense_ if there exists \(\ell>0\) such that \(D\cap[\alpha,\alpha+\ell]\neq\emptyset\) for all \(\alpha\in\mathbb{R}\) (respectively \(\alpha\geq 0\)). A continuous function \(u:\mathbb{R}\to X\) is said to be _almost periodic (in the sense of Bohr)_ if for each \(\varepsilon>0\), the set of \(\varepsilon\)-almost periods: \(\mathcal{P}(u,\varepsilon)=\left\{\tau\in\mathbb{R}\,;\,\sup\limits_{t\in \mathbb{R}}d(u(t+\tau),u(t))\leq\varepsilon\right\}\) is relatively dense in \(\mathbb{R}\). An almost periodic function \(u\) has its range \(u(\mathbb{R})\) relatively compact, that is its closure denoted by \(\operatorname{cl}\left(\operatorname{u}(\mathbb{R})\right)\) is a compact set of \((X,d)\). We denote the space of all such functions by \(AP(\mathbb{R},X)\). It is a closed metric subspace of \((BC(\mathbb{R},X),d_{\infty})\). An almost periodic function \(u\) is _uniformly recurrent_, that is there exists a sequence of real
numbers \((\tau_{n})_{n}\) such that \(\lim_{n\to+\infty}\sup_{t\in\mathbb{R}}d(u(t+\tau_{n}),u(t))=0\) and \(\lim_{n\to+\infty}\tau_{n}=+\infty\). To see this, consider the Bohr's definition of \(u\in AP(\mathbb{R},X)\); then the set of \(\frac{1}{n}\)-almost periods satisfies \(\mathcal{P}(u,\frac{1}{n})\cap[n,+\infty)\neq\emptyset\) for each integer \(n>0\). A useful characterization of almost periodic functions was given by Bochner. The Bochner's criterion may be found in [12, Bochner's theorem, p. 4] in the context of metric spaces. Before citing this criterion, we need to introduce the translation mapping of a function of \(BC(\mathbb{R},X)\). For \(\tau\in\mathbb{R}\) and \(u\in BC(\mathbb{R},X)\), we define _the translation mapping_ \(T_{\tau}u\in BC(\mathbb{R},X)\) by \(T_{\tau}u(t)=u(t+\tau)\) for \(t\in\mathbb{R}\).
**Theorem 2.1** (Bochner's criterion).: _For \(u\in BC(\mathbb{R},X)\), the following statements are equivalent._
**i)**_\(u\in AP(\mathbb{R},X)\)._
**ii)** _The set \(\{T_{\tau}u;\tau\in\mathbb{R}\}\) is relatively compact in \((BC(\mathbb{R},X),d_{\infty})\)._
Haraux gave a generalization of Bochner's criterion, called _a simple almost-periodicity criterion_ [9, Theorem 1], which is useful for periodic dynamical systems.
**Theorem 2.2** (Haraux's criterion).: _Let \(D\) be a relatively dense subset of \(\mathbb{R}\). The following statements are equivalent for \(u\in BC(\mathbb{R},X)\)._
**i)**_\(u\in AP(\mathbb{R},X)\)._
**ii)** _The set \(\{T_{\tau}u;\tau\in D\}\) is relatively compact in \((BC(\mathbb{R},X),d_{\infty})\)._
Periodic functions, which are a special case of almost periodic functions, are also characterized in terms of Bochner's criterion. This criterion is a direct consequence of a result of Haraux.
**Theorem 2.3**.: _[_8_, Consequence of Proposition 2]_ _The following statements are equivalent for \(u\in BC(\mathbb{R},X)\)._
**i)**_\(u\) is periodic._
**ii)** _The set \(\{T_{\tau}u;\tau\in\mathbb{R}\}\) is a compact set of \((BC(\mathbb{R},X),d_{\infty})\)._
For some preliminary results on almost periodic functions with values in a given complete metric space, we refer to the book of Levitan-Zhikov [12] and in the special case of Banach spaces to the book of Amerio-Prouse [1].
The notion of asymptotic almost periodicity was first introduced by Frechet [6] in 1941 in the case where \(X=\mathbb{C}\). A continuous function \(u:\mathbb{R}^{+}\to X\) is said to be _asymptotically almost periodic_ if there exists \(v\in AP(\mathbb{R},X)\) such that \(\lim_{t\to\infty}d(u(t),v(t))=0\). An asymptotically almost periodic function \(u\) has its range \(u(\mathbb{R}^{+})\) relatively compact. We denote the space of all such functions by \(AAP(\mathbb{R}^{+},X)\). It is a closed metric subspace of \((BC(\mathbb{R}^{+},X),d_{\infty,+})\). An asymptotically almost periodic function \(u:\mathbb{R}^{+}\to X\) is characterized as follows: \(u\in AAP(\mathbb{R}^{+},X)\)_if and only if \(u\in C(\mathbb{R}^{+},X)\) and for each \(\varepsilon>0\), there exists \(M\geq 0\) such that the set \(\left\{\tau\geq 0\,;\,\sup_{t\geq M}d(u(t+\tau),u(t))\leq\varepsilon\right\}\) is relatively dense in \(\mathbb{R}^{+}\)_ [15, Theorem 1.3]. In the context of metric spaces, Ruess and Summers give a characterization of asymptotically almost periodic functions in the spirit of Bochner's criterion.
To prove this characterization, Ruess and Summers use results from the paper [16] by the same authors. For \(\tau\geq 0\) and \(u\in BC(\mathbb{R}^{+},X)\), we define _the translation mapping_\(T_{\tau}^{+}u\in BC(\mathbb{R}^{+},X)\) by \(T_{\tau}^{+}u(t)=u(t+\tau)\) for \(t\geq 0\).
**Theorem 2.4**.: _[_15_, a part of Theorems 1.2 & 1.3]_ _Let \((X,d)\) be a complete metric space. For \(u\in BC(\mathbb{R}^{+},X)\), the following statements are equivalent._
**i)**_\(u\in AAP(\mathbb{R}^{+},X)\)._
**ii)**_The set \(\{T_{\tau}^{+}u;\tau\geq 0\}\) is relatively compact in \((BC(\mathbb{R}^{+},X),d_{\infty,+})\)._
For some preliminary results on asymptotically almost periodic functions, we refer to the book of Yoshizawa [17] in the case where \(X\) is a finite dimensional space, to the book of Zaidman [18] where \(X\) is a Banach space and to Ruess and Summers [14, 15, 16] in the general case: \(X\) is a complete metric space.
## 3. An improvement of Bochner's criterion
An almost periodic function is characterized by the Bochner's criterion, recalled in Section 2. Our main result is an extension of Bochner's criterion. Then we deduce new proofs which are direct and simpler on known results on the solutions of autonomous dynamic systems. Before to state our extension of Bochner's criterion, we need to introduce the restriction operator \(R:BC(\mathbb{R},X)\to BC(\mathbb{R}^{+},X)\) defined by \(R(u)(t):=u(t)\) for \(t\geq 0\) and \(u\in BC(\mathbb{R},X)\).
**Theorem 3.1**.: _Let \((X,d)\) be a complete metric space. For \(u\in BC(\mathbb{R},X)\) the following statements are equivalent._
**i)**_\(u\in AP(\mathbb{R},X)\)._
**ii)**_The set \(\{T_{\tau}u;\tau\geq 0\}\) is relatively compact in \((BC(\mathbb{R},X),d_{\infty})\)._
**iii)**_The set \(\{R(T_{\tau}u);\tau\in\mathbb{R}\}\) is relatively compact in \((BC(\mathbb{R}^{+},X),d_{\infty,+})\)._
In our results, the compactness and the relative compactness of a set often intervene. To prove them, we will often use the following result whose proof is obvious. Recall that a set \(A\) of a metric space \((E,d)\) is relatively compact if its closure denoted by \(\operatorname{cl}\left(\text{A}\right)\) is a compact set of \((E,d)\).
**Lemma 3.2**.: _Let \(E\) be a set, \((G_{1},d_{1})\) and \((G_{2},d_{2})\) be two metric spaces. Let \(u:E\to G_{1}\) and \(v:E\to G_{2}\) be two functions. Assume there exists \(M>0\) such that_
\[\forall x_{1},x_{2}\in E,\quad d_{1}(u(x_{1}),u(x_{2}))\leq Md_{2}(v(x_{1}),v( x_{2})).\]
_Then the following statements hold._
**i)** _If the metric space \((G_{1},d_{1})\) is complete and \(v(E)\) is relatively compact in \((G_{2},d_{2})\), then \(u(E)\) is relatively compact in \((G_{1},d_{1})\)._
**ii)** _If \(v(E)\) is a compact set of \((G_{2},d_{2})\), then \(u(E)\) is a compact set of \((G_{1},d_{1})\)._
Proof of Theorem 3.1.: **i)**\(\Longrightarrow\)**iii)**. It is obvious by using the Bochner's criterion and the continuity of the restriction operator \(R\).
**iii)**\(\Longrightarrow\)**ii)**. The set \(u(\mathbb{R})=\{R(T_{\tau}u)(0);\tau\in\mathbb{R}\}\) is relatively compact in \(X\) as the range of \(\{R(T_{\tau}u);\tau\in\mathbb{R}\}\) by the continuous evaluation map at \(0\) from \(BC(\mathbb{R}^{+},X)\) into \(X\). By assumption, \(\mathcal{H}:=\operatorname{cl}\left(\{\text{R}(\text{T}_{t}\text{u})\,;\, \text{t}\in\mathbb{R}\}\right)\) is a compact set of \((BC(\mathbb{R}^{+},X),d_{\infty,+})\).
For all \(\tau\geq 0\), we define \(\phi_{\tau}:\mathcal{H}\to X\) by \(\phi_{\tau}(h)=h(\tau)\). The functions \(\phi_{\tau}\) are \(1\)-Lipschitz continuous and for each \(t\in\mathbb{R}\), the set \(\{\phi_{\tau}(R(T_{t}u))=u(\tau+t)\,;\,\tau\geq 0\}\) is included in the relatively compact set \(u(\mathbb{R})\). By density of \(\{R(T_{t}u)\,;\,t\in\mathbb{R}\}\) in \(\mathcal{H}\) and the continuity of \(\phi_{\tau}\), it follows that \(\{\phi_{\tau}(h)\,;\,\tau\geq 0\}\) is relatively compact in \(X\) for each \(h\in\mathcal{H}\). According to Arzela-Ascoli's theorem [11, Theorem 3.1, p. 57], the set \(\{\phi_{\tau}\,;\,\tau\geq 0\}\) is relatively compact in \(C(\mathcal{H},X)\) equipped with the sup-norm denoted by \(d_{C}\). From the density of \(\{R(T_{t}u)\,;\,t\in\mathbb{R}\}\) in \(\mathcal{H}\) and the continuity of \(\phi_{\tau}\), we deduce that for \(\tau_{1}\) and \(\tau_{2}\geq 0\), \(\sup\limits_{h\in\mathcal{H}}d(\phi_{\tau_{1}}(h),\phi_{\tau_{2}}(h))= \sup\limits_{t\in\mathbb{R}}d\left(\phi_{\tau_{1}}(R(T_{t}u)),\phi_{\tau_{2}} (R(T_{t}u))\right)=\sup\limits_{t\in\mathbb{R}}d\left(u(\tau_{1}+t),u(\tau_{2} +t)\right)=\sup\limits_{t\in\mathbb{R}}d\left(T_{\tau_{1}}u(t),T_{\tau_{2}}u(t)\right)\), then \(d_{C}(\phi_{\tau_{1}},\phi_{\tau_{2}})=d_{\infty}\left(T_{\tau_{1}}u,T_{\tau_ {2}}u\right)\). From Lemma 3.2, it follows that \(\{T_{\tau}u;\tau\geq 0\}\) is relatively compact in the complete metric space \((BC(\mathbb{R},X),d_{\infty})\) since \(\{\phi_{\tau}\,;\,\tau\geq 0\}\) is also one in \((C(\mathcal{H},X),d_{C})\).
**ii) \(\Longrightarrow\) i)**. For \(\tau_{1}\), \(\tau_{2}\geq 0\), \(d_{\infty}(T_{\tau_{1}}u,T_{\tau_{2}}u):=\sup\limits_{t\in\mathbb{R}}d(u(\tau_ {1}+t),u(\tau_{2}+t))\). Replacing \(t\) by \(t-\tau_{1}-\tau_{2}\) in the upper bound, we get \(d_{\infty}(T_{\tau_{1}}u,T_{\tau_{2}}u)=d_{\infty}(T_{-\tau_{1}}u,T_{-\tau_{2 }}u)\). Then the set \(\{T_{\tau}u;\tau\leq 0\}=\{T_{-\tau}u;\tau\geq 0\}\) is relatively compact in \(BC(\mathbb{R},X)\) since \(\{T_{\tau}u;\tau\geq 0\}\) is also one. Therefore the set \(\{T_{\tau}u;\tau\in\mathbb{R}\}\) is relatively compact in \(BC(\mathbb{R},X)\) as the union of two relatively compact sets in \(BC(\mathbb{R},X)\). According to Bochner's criterion, \(u\in AP(\mathbb{R},X)\).
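For clarity, the substitution used at the beginning of this step can be written out explicitly: with \(s=t+\tau_{1}+\tau_{2}\),
\[d_{\infty}(T_{\tau_{1}}u,T_{\tau_{2}}u)=\sup_{t\in\mathbb{R}}d\big(u(t+\tau_{1}),u(t+\tau_{2})\big)=\sup_{s\in\mathbb{R}}d\big(u(s-\tau_{2}),u(s-\tau_{1})\big)=d_{\infty}(T_{-\tau_{1}}u,T_{-\tau_{2}}u),\]
where the last equality uses the symmetry of the distance \(d\).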
The connection between the almost periodicity of a solution of a dynamical system and its stability is well known (see the monograph by Nemytskii & Stepanov [13, Ch. 5]). This weakened form of Bochner's criterion, Theorem 3.1, makes it possible to obtain direct and simpler proofs on these questions. Let us start by recalling some definitions on dynamical systems.
A _dynamical system_ or _nonlinear semigroup_ on a complete metric space \((X,d)\) is a one parameter family \(\left(S(t)\right)_{t\geq 0}\) of maps from \(X\) into itself such that **i)**\(S(t)\in C(X,X)\) for all \(t\geq 0\), **ii)**\(S(0)x=x\) for all \(x\in X\), **iii)**\(S(t+s)=S(t)\circ S(s)\) for all \(s,t\geq 0\) and **iv)** the mapping \(S(\cdot)x\in C([0,+\infty),X)\) for all \(x\in X\).
For each \(x\in X\), _the positive trajectory of \(x\)_ is the map \(S(\cdot)x:\mathbb{R}^{+}\to X\). A function \(u:\mathbb{R}\to X\) is called _a complete trajectory_ if we have \(u(t+\tau)=S(\tau)u(t)\), for all \(t\in\mathbb{R}\) and \(\tau\geq 0\).
We will need a notion of Lagrange-type stability to ensure that a solution with a relatively compact range is almost periodic. Recall that \(\left(S(t)\right)_{t\geq 0}\) is equicontinuous on a compact set \(K\) of \(X\) if for all \(\varepsilon>0\), there exists \(\delta>0\) such that
\[\forall x_{1},x_{2}\in K,\quad d(x_{1},x_{2})\leq\delta\implies\sup\limits_{t\geq 0}d(S(t)x_{1},S(t)x_{2})\leq\varepsilon.\]
Using Theorem 3.1, we give a new proof which is direct and simpler of the following result which can be found in [10, Theorem 4.3.2, p. 51] or partly in [12, Markov's theorem, p. 10].
**Corollary 3.3**.: _Let \(\left(S(t)\right)_{t\geq 0}\) be a dynamical system on a complete metric space \((X,d)\) and \(u\) be a complete trajectory such that \(u(\mathbb{R})\) is relatively compact. Then \(u\) is almost periodic if and only if \(\left(S(t)\right)_{t\geq 0}\) is equicontinuous on \(cl\left(u(\mathbb{R})\right)\) the closure of \(u(\mathbb{R})\)._
Proof.: Let us denote the compact set \(K:=\operatorname{cl}\left(u(\mathbb{R})\right)\). It follows by density of \(u(\mathbb{R})\) in \(K\) and the continuity of \(S(t)\), that \(\{S(t)x;t\geq 0\}\subset K\) for each \(x\in K\). According
to Arzela-Ascoli's theorem, \(\left(S(t)\right)_{t\geq 0}\) is equicontinuous on \(K\) if and only if \(\left(S(t)\right)_{t\geq 0}\) is relatively compact in \(C(K,X)\). From Theorem 3.1, we have \(u\in AP(\mathbb{R},X)\) if and only if \(\left\{T_{\tau}u;\tau\geq 0\right\}\) is relatively compact in \(BC(\mathbb{R},X)\). Then it remains to prove that \(\left(S(t)\right)_{t\geq 0}\) is relatively compact in \(C(K,X)\) equipped with the sup-norm if and only if \(\left\{T_{\tau}u;\tau\geq 0\right\}\) is relatively compact in \(\left(BC(\mathbb{R},X),d_{\infty}\right)\). This results from the following equalities, for \(\tau_{1}\) and \(\tau_{2}\geq 0\), \(\sup\limits_{t\in\mathbb{R}}d\left(T_{\tau_{1}}u(t),T_{\tau_{2}}u(t)\right)= \sup\limits_{t\in\mathbb{R}}d\left(S(\tau_{1})u(t),S(\tau_{2})u(t)\right)= \sup\limits_{x\in K}d\left(S(\tau_{1})x,S(\tau_{2})x\right)\) and Lemma 3.2.
_Remark 3.4_.: **a)** The condition of equicontinuity required by Corollary 3.3 is satisfied by a bounded dynamical system: \(d\left(S(t)x_{1},S(t)x_{2}\right)\leq Md\left(x_{1},x_{2}\right)\) for some \(M\geq 1\), and in particular for a \(C_{0}\) semigroup of contractions. In this case, the almost periodicity of a complete trajectory \(u\) having a relatively compact range results from Corollary 3.3. We can also obtain this result with the implication iii) \(\Longrightarrow\) i) of Theorem 3.1 and the inequality \(\sup\limits_{t\geq 0}d(R(T_{\tau_{1}}u)(t),R(T_{\tau_{2}}u)(t))=\sup\limits_{t\geq 0}d\left(S(t)u(\tau_{1}),S(t)u(\tau_{2})\right)\leq Md(u(\tau_{1}),u(\tau_{2}))\) for \(\tau_{1}\), \(\tau_{2}\in\mathbb{R}\).
**b)** For a bounded \(C_{0}\)-semigroup \(\left(S(t)\right)_{t\geq 0}\), the main result of Zaidman [19] asserts that a positive trajectory \(u\) with relatively compact range satisfies a condition called _the generalized normality property in Bochner's sense_, without concluding that \(u\) is almost periodic. This condition is nothing but hypothesis iii) of Theorem 3.1, so \(u\) is almost periodic.
Using Theorem 2.4, we give a new proof which is direct and simpler of the following result which can be found in [15, Theorem 2.2, p. 149].
**Corollary 3.5**.: _Let \(\left(S(t)\right)_{t\geq 0}\) be a dynamical system on a complete metric space \(\left(X,d\right)\) and \(u\) be a positive trajectory such that \(u(\mathbb{R}^{+})\) is relatively compact. Then \(u\) is asymptotically almost periodic if and only if \(\left(S(t)\right)_{t\geq 0}\) is equicontinuous on \(cl\left(u(\mathbb{R}^{+})\right)\)._
Proof.: The proof is analogous to that of Corollary 3.3, using Theorem 2.4 instead of Theorem 3.1 and replacing \(\mathbb{R}\) by \(\mathbb{R}^{+}\) and \(AP(\mathbb{R},X)\) by \(AAP(\mathbb{R}^{+},X)\).
## 4. An improvement of Haraux's criterion
Haraux gave a generalization of Bochner's criterion [9, Theorem 1], called _a simple almost-periodicity criterion_, which is useful for periodic dynamical systems. From our main result, Theorem 3.1, we deduce an extension of the Haraux's criterion, recalled in Section 2. In the same spirit, we extend the well-known characterization of asymptotically almost periodic functions. To end this section, we give an example of application to a periodic dynamical system.
We give an extension of the Haraux's criterion (see Theorem 2.2). Recall that we denote by \(R\) the restriction operator \(R:BC(\mathbb{R},X)\to BC(\mathbb{R}^{+},X)\) defined by \(R(u)(t):=u(t)\) for \(t\geq 0\) and \(u\in BC(\mathbb{R},X)\).
**Corollary 4.1**.: _Let \(\left(X,d\right)\) be a complete metric space. For \(u\in BC(\mathbb{R},X)\) the following statements are equivalent._
**i)**_\(u\in AP(\mathbb{R},X)\)._
**ii)** _The set \(\left\{T_{\tau}u;\tau\in D\right\}\) is relatively compact in \(\left(BC(\mathbb{R},X),d_{\infty}\right)\) where \(D\) be a relatively dense subset of \(\mathbb{R}^{+}\)._
**iii)** _The set \(\{R(T_{\tau}u);\tau\in D\}\) is relatively compact in \((BC(\mathbb{R}^{+},X),d_{\infty,+})\) where \(D\) be a relatively dense subset of \(\mathbb{R}\)._
_Remark 4.2_.: Our main result, Theorem 3.1 is obviously a particular case of Corollary 4.1. But to present our results, it was easier to start with Theorem 3.1. To prove Corollary 4.1, we use Haraux's criterion and Theorem 3.1.
Proof of Corollary 4.1. **i) \(\Longrightarrow\) iii)**. It is a consequence of Theorem 3.1.
**iii) \(\Longrightarrow\) ii)**. To establish this implication, using Theorem 3.1, it suffices to show that assertion iii) implies that \(\{R(T_{\tau}u);\tau\in\mathbb{R}\}\) is relatively compact in \(BC(\mathbb{R}^{+},X)\). The proof of this last implication is a slight adaptation of that of the Haraux's criterion given in [9, Theorem 1]. A similar proof will be detailed in the following result as there will be technical issues. To demonstrate that \(\{R(T_{\tau}u);\tau\in\mathbb{R}\}\) is relatively compact, it suffices in the proof of ii) \(\Longrightarrow\) i) of Corollary 4.3 to take \(\ell=0\) and replace \(\{T_{\tau}^{+}u;\tau\in D\}\) by \(\{R(T_{\tau}u);\tau\in D\}\).
**ii) \(\Longrightarrow\) i)**. For \(\tau_{1}\), \(\tau_{2}\geq 0\), \(d_{\infty}(T_{\tau_{1}}u,T_{\tau_{2}}u)=\sup\limits_{t\in\mathbb{R}}d(u(\tau_{ 1}+t),u(\tau_{2}+t))\). Replacing \(t\) by \(t-\tau_{1}-\tau_{2}\) in the supremum, we get \(d_{\infty}(T_{\tau_{1}}u,T_{\tau_{2}}u)=d_{\infty}(T_{-\tau_{1}}u,T_{-\tau_{2} }u)\). Then the set \(\{T_{\tau}u;\tau\in-D\}=\{T_{-\tau}u;\tau\in D\}\) is relatively compact in \(BC(\mathbb{R},X)\), since \(\{T_{\tau}u;\tau\in D\}\) is. Therefore the set \(\{T_{\tau}u;\tau\in D\cup(-D)\}\) is relatively compact in \(BC(\mathbb{R},X)\). Moreover \(D\cup(-D)\) is a relatively dense subset of \(\mathbb{R}\). According to Haraux's criterion, we have \(u\in AP(\mathbb{R},X)\).
We extend Theorem 2.4, the well-known characterization of asymptotically almost periodic functions. For \(\tau\in\mathbb{R}^{+}\) and \(u\in BC(\mathbb{R}^{+},X)\), we define _the translation mapping_\(T_{\tau}^{+}u\in BC(\mathbb{R}^{+},X)\) by \(T_{\tau}^{+}u(t)=u(t+\tau)\) for \(t\geq 0\).
**Corollary 4.3**.: _Let \((X,d)\) be a complete metric space and let \(D\) be a relatively dense subset of \(\mathbb{R}^{+}\). For \(u\in BC(\mathbb{R}^{+},X)\) the following statements are equivalent._
**i)**_\(u\in AAP(\mathbb{R}^{+},X)\)._
**ii)** _The set \(\{T_{\tau}^{+}u;\tau\in D\}\) is relatively compact in \((BC(\mathbb{R}^{+},X),d_{\infty,+})\)._
_Remark 4.4_.: To establish implication ii) \(\Longrightarrow\) i), by using Theorem 2.4, it suffices to prove that assertion ii) implies that \(\{T_{\tau}^{+}u;\tau\geq 0\}\) is relatively compact in \((BC(\mathbb{R}^{+},X),d_{\infty,+})\). The proof of this last implication is an adaptation of that of Haraux's criterion. However, contrary to the proof of implication iii) \(\Longrightarrow\) ii) in Corollary 4.1, there are technical issues. These difficulties come from the fact that when \(D\) is a relatively dense subset of \(\mathbb{R}^{+}\), the sets \(D\) and \([t-\ell,t]\) can be disjoint for some \(0\leq t\leq\ell\). For this reason we give the complete proof of this implication.
Proof of Corollary 4.3. **i) \(\Longrightarrow\) ii)**. It is a consequence of Theorem 2.4.
**ii) \(\Longrightarrow\) i)**. We will prove that assumption ii) implies that \(\{T_{\tau}^{+}u;\tau\geq 0\}\) is relatively compact in \(BC(\mathbb{R}^{+},X)\), and then conclude by using Theorem 2.4. Since the subset \(D\) is relatively dense in \(\mathbb{R}^{+}\), there exists \(\ell>0\) such that \(D\cap[\alpha,\alpha+\ell]\neq\emptyset\) for all \(\alpha\geq 0\).
\(\bullet\)_We prove that \(u\) is uniformly continuous on \([\ell,+\infty)\)._ Let us fix \(\varepsilon>0\). By assumption, the set \(\{T_{\tau}^{+}u;\tau\in D\}\) is in particular relatively compact in \(C([0,2\ell],X)\), hence it is uniformly equicontinuous on \([0,2\ell]\); that is, there exists \(\delta>0\) such that
\[s_{1},s_{2}\in[0,2\ell],\quad|s_{1}-s_{2}|\leq\delta\implies\sup\limits_{\tau\in D}d(u(s_{1}+\tau),u(s_{2}+\tau))\leq\varepsilon. \tag{4.1}\]
Let \(t_{1}\), \(t_{2}\) be two real numbers such that \(t_{1}\), \(t_{2}\geq\ell\) and \(|t_{1}-t_{2}|\leq\delta\). We can assume without loss of generality that \(\ell\leq t_{1}\leq t_{2}\leq t_{1}+\ell\). We have \(D\cap[t_{1}-\ell,t_{1}]\neq\emptyset\) since \(t_{1}-\ell\geq 0\), so there exists \(\tau\in D\) such that \(0\leq t_{1}-\tau\leq t_{2}-\tau\leq 2\ell\). Taking (4.1) into account, we deduce that \(d(u(t_{1}),u(t_{2}))=d(u((t_{1}-\tau)+\tau),u((t_{2}-\tau)+\tau))\leq\varepsilon\). Hence \(u\) is uniformly continuous on \([\ell,+\infty)\).
\(\bullet\)_We prove that \(\{T_{\tau}^{+}u;\tau\geq\ell\}\) is relatively compact in \(BC(\mathbb{R}^{+},X)\)._ Let \((t_{n})_{n}\) be a sequence of real numbers such that \(t_{n}\geq\ell\). We have \(D\cap[t_{n}-\ell,t_{n}]\neq\emptyset\) for each \(n\in\mathbb{N}\), since \(t_{n}-\ell\geq 0\); then there exist \(\tau_{n}\in D\) and \(\sigma_{n}\in[0,\ell]\) such that \(t_{n}=\tau_{n}+\sigma_{n}\). By compactness of the sequences \((\sigma_{n})_{n}\) in \([0,\ell]\) and \((T_{\tau_{n}}^{+}u)_{n}\) in \(BC(\mathbb{R}^{+},X)\), it follows that, up to a subsequence, \(\lim\limits_{n\to+\infty}\sigma_{n}=\sigma\) for some \(\sigma\in[0,\ell]\) and \(\lim\limits_{n\to+\infty}\sup\limits_{t\geq 0}d(u(t+\tau_{n}),v(t))=0\) for some \(v\in BC(\mathbb{R}^{+},X)\). From the following inequality
\[\sup\limits_{t\geq 0}d(u(t_{n}+t),v(\sigma+t))\leq\sup\limits_{t\geq 0}d\big(u(\tau_{n}+\sigma_{n}+t),u(\tau_{n}+\sigma+t)\big)+\sup\limits_{t\geq 0}d(u(\tau_{n}+t),v(t))\]
and the uniform continuity of \(u\), we deduce that \(\lim\limits_{n\to+\infty}\sup\limits_{t\geq 0}d(u(t_{n}+t),v(\sigma+t))=0\). Then \(\{T_{\tau}^{+}u;\tau\geq\ell\}\) is relatively compact in \(BC(\mathbb{R}^{+},X)\).
\(\bullet\)_We prove that \(u\in AAP(\mathbb{R}^{+},X)\)._ The function \(u\) is uniformly continuous on \(\mathbb{R}^{+}\), since \(u\) is continuous on \([0,\ell]\) and uniformly continuous on \([\ell,+\infty)\). Then the map \(\hat{u}:\mathbb{R}^{+}\to BC(\mathbb{R}^{+},X)\) defined by \(\hat{u}(\tau)=T_{\tau}^{+}u\) for \(\tau\geq 0\) is continuous; consequently, the set \(\{T_{\tau}^{+}u;0\leq\tau\leq\ell\}\) is relatively compact in \(BC(\mathbb{R}^{+},X)\). The set \(\{T_{\tau}^{+}u;\tau\geq 0\}\) is relatively compact in \(BC(\mathbb{R}^{+},X)\) as the union of two relatively compact sets. According to Theorem 2.4, \(u\in AAP(\mathbb{R}^{+},X)\).
Using Corollaries 4.1 and 4.3, we give a direct and simpler proof of the following result, which can be found in [7, 9, 10]. Before that, we recall some definitions concerning processes.
A _process_ on a complete metric space \((X,d)\) according to Dafermos [4] is a two parameter family \(U(t,\tau)\) of maps from \(X\) into itself defined for \((t,\tau)\in\mathbb{R}\times\mathbb{R}^{+}\) and such that **i)**\(U(t,0)x=x\) for all \((t,x)\in\mathbb{R}\times X\), **ii)**\(U(t,\sigma+\tau)=U(t+\sigma,\tau)\circ U(t,\sigma)\) for all \((t,\sigma,\tau)\in\mathbb{R}\times\mathbb{R}^{+}\times\mathbb{R}^{+}\) and **iii)** the mapping \(U(t,\cdot)x\in C([0,+\infty),X)\) for all \((t,x)\in\mathbb{R}\times X\).
For each \(x\in X\), _the positive trajectory starting from \(x\)_ is the map \(U(0,\cdot)x:\mathbb{R}^{+}\to X\). A function \(u:\mathbb{R}\to X\) is called _a complete trajectory_ if \(u(t+\tau)=U(t,\tau)u(t)\) for all \((t,\tau)\in\mathbb{R}\times\mathbb{R}^{+}\).
A process \(U\) is said to be \(\omega\)_-periodic_ (\(\omega>0\)) if \(U(t+\omega,\tau)=U(t,\tau)\) for all \((t,\tau)\in\mathbb{R}\times\mathbb{R}^{+}\).
A process \(U\) is said to be _bounded_ if there exists \(M\geq 1\) such that \(d\left(U(0,\tau)x_{1},U(0,\tau)x_{2}\right)\leq Md\left(x_{1},x_{2}\right)\) for all \((\tau,x_{1},x_{2})\in\mathbb{R}^{+}\times X\times X\).
**Corollary 4.5**.: [7, 9], [10, Théorème 6.4.6, p. 84] _Let \(U\) be an \(\omega\)-periodic process on a complete metric space \((X,d)\). If \(U\) is bounded, then the following statements hold._
**i)** _If \(u\) is a complete trajectory of \(U\) such that \(u(-\omega\mathbb{N})\) is relatively compact, then \(u\) is almost periodic._
**ii)** _If \(u\) is a positive trajectory of \(U\) such that \(u(\omega\mathbb{N})\) is relatively compact, then \(u\) is asymptotically almost periodic._
Proof.: **i)** Since the process \(U\) is \(\omega\)-periodic, we have \(u(n\omega)=U(-m\omega,(n+m)\omega)u(-m\omega)=U(0,(n+m)\omega)u(-m\omega)\) for all \(n\), \(m\in\mathbb{N}\). From the boundedness assumption on \(U\), we deduce that \(d\left(u(n\omega),u(m\omega)\right)\leq Md\left(u(-m\omega),u(-n\omega)\right)\); then \(u(\omega\mathbb{N})\) is relatively compact since \(u(-\omega\mathbb{N})\) is relatively compact, and therefore \(u(\omega\mathbb{Z})\) is relatively compact. From the assumptions on the process \(U\), it follows that for all \(n\), \(m\in\mathbb{Z}\),
\[\sup_{\tau\geq 0}d\left(u(\tau+n\omega),u(\tau+m\omega)\right)\leq Md\left(u(n \omega),u(m\omega)\right). \tag{4.2}\]
From Lemma 3.2, \(\{R(T_{n\omega}u);n\in\mathbb{Z}\}\) is relatively compact in \((BC(\mathbb{R}^{+},X),d_{\infty,+})\) since \(u(\omega\mathbb{Z})\) is relatively compact in \((X,d)\). We conclude with Corollary 4.1 by setting \(D=\omega\mathbb{Z}\).
**ii)** For all \(n\), \(m\in\mathbb{N}\), (4.2) holds for the positive trajectory \(u\); then, from Lemma 3.2, \(\{T_{n\omega}^{+}u;n\in\mathbb{N}\}\) is relatively compact in \((BC(\mathbb{R}^{+},X),d_{\infty,+})\) since \(u(\omega\mathbb{N})\) is relatively compact in \((X,d)\). We conclude with Corollary 4.3 by setting \(D=\omega\mathbb{N}\).
## 5. Bochner's criterion in the periodic case
Periodic functions are a special case of almost periodic functions. Haraux gave a characterization of periodic functions in terms of Bochner's criterion, which is recalled in Section 2. This criterion is a direct consequence of [8, Proposition 2]. Haraux established a general result [8, Théorème 1] which implies, as a special case, a characterization of periodic functions and the fact that any compact trajectory of a one-parameter continuous group is automatically periodic.
In this section, we give an extension of this characterization of periodic functions in the spirit of the main result of this article. We also treat the asymptotically periodic case. Then we apply these results to study the periodicity of solutions of dynamical systems.
Recall that we denote by \(R\) the restriction operator \(R:BC(\mathbb{R},X)\to BC(\mathbb{R}^{+},X)\) defined by \(R(u)(t):=u(t)\) for \(t\geq 0\) and \(u\in BC(\mathbb{R},X)\).
**Corollary 5.1**.: _Let \((X,d)\) be a complete metric space. For \(u\in BC(\mathbb{R},X)\) the following statements are equivalent._
**i)** _The function \(u\) is \(\omega\)-periodic (\(\omega>0\))._
**ii)** _The set \(\{T_{\tau}u;\tau\geq 0\}\) is a compact set of \((BC(\mathbb{R},X),d_{\infty})\)._
**iii)** _The set \(\{R(T_{\tau}u);\tau\in\mathbb{R}\}\) is a compact set of \((BC(\mathbb{R}^{+},X),d_{\infty,+})\)._
Proof.: **i) \(\Longrightarrow\) ii)**. From the assumption, it follows that the function \(\tau\mapsto T_{\tau}u\) from \(\mathbb{R}\) into \(BC(\mathbb{R},X)\) is continuous and \(\omega\)-periodic. Then the set \(\{T_{\tau}u;\tau\geq 0\}=\{T_{\tau}u;0\leq\tau\leq\omega\}\) is a compact set of \((BC(\mathbb{R},X),d_{\infty})\) as the image of a compact set under a continuous map.
**ii) \(\Longrightarrow\) i)**. For \(\tau_{1}\), \(\tau_{2}\in\mathbb{R}\), \(d_{\infty}(T_{\tau_{1}}u,T_{\tau_{2}}u):=\sup_{t\in\mathbb{R}}d(u(\tau_{1}+t),u(\tau_{2}+t))\); replacing \(t\) by \(t-\tau_{1}-\tau_{2}\) in the supremum, we get \(d_{\infty}(T_{\tau_{1}}u,T_{\tau_{2}}u)=d_{\infty}(T_{-\tau_{1}}u,T_{-\tau_{2} }u)\). Then the set \(\{T_{\tau}u;\tau\leq 0\}=\{T_{-\tau}u;\tau\geq 0\}\) is compact in \(BC(\mathbb{R},X)\), since \(\{T_{\tau}u;\tau\geq 0\}\) is. Therefore the set \(\{T_{\tau}u;\tau\in\mathbb{R}\}\) is a compact set of \(BC(\mathbb{R},X)\) as the union of two compact sets in \(BC(\mathbb{R},X)\). According to Theorem 2.3, \(u\) is periodic.
**i) \(\Longrightarrow\) iii)**. This follows from Theorem 2.3 and the continuity of the restriction operator \(R\).
**iii) \(\Longrightarrow\) i)**. By using Theorem 2.3, we have to prove that \(\mathcal{K}:=\{T_{\tau}u;\tau\in\mathbb{R}\}\) is a compact set of \((BC(\mathbb{R},X),d_{\infty})\). As a consequence of Theorem 3.1 and Bochner's criterion, the set \(\mathcal{K}\) is relatively compact in \((BC(\mathbb{R},X),d_{\infty})\) and the function \(u\) is almost periodic.
It remains to prove that \(\mathcal{K}\) is closed in \((BC(\mathbb{R},X),d_{\infty})\). Let \((\tau_{n})_{n}\) be a sequence of real numbers such that \(\lim\limits_{n\to+\infty}d_{\infty}(T_{\tau_{n}}u,v)=0\). Let us prove that \(v=T_{\tau}u\) for some \(\tau\in\mathbb{R}\). By continuity of the operator \(R\), we have \(\lim\limits_{n\to+\infty}d_{\infty,+}(R(T_{\tau_{n}}u),R(v))=0\). By assumption, the set \(\{R(T_{\tau}u);\tau\in\mathbb{R}\}\) is in particular closed in \((BC(\mathbb{R}^{+},X),d_{\infty,+})\), then \(R(v)=R(T_{\tau}u)\) for some \(\tau\in\mathbb{R}\), that is
\[\forall t\geq 0,\qquad v(t)=T_{\tau}u(t). \tag{5.1}\]
We have to prove that (5.1) holds on the whole real line. The function \(T_{\tau}u\) is almost periodic as a translate of an almost periodic function, and \(v\) is almost periodic as well, as a uniform limit on \(\mathbb{R}\) of almost periodic functions. Let us denote by \(\phi:\mathbb{R}\to\mathbb{R}\) the function defined by \(\phi(t):=d(T_{\tau}u(t),v(t))\). The function \(\phi\) is almost periodic [12, Property 4, p. 3 & 7, p.6]. An almost periodic function is uniformly recurrent, so there exists a sequence of real numbers \((s_{n})_{n}\) such that \(\lim\limits_{n\to+\infty}s_{n}=+\infty\) and \(\lim\limits_{n\to+\infty}\phi(t+s_{n})=\phi(t)\) for all \(t\in\mathbb{R}\). From (5.1), it follows that \(\phi(t)=0\) for all \(t\geq 0\), so we deduce that \(\phi(t)=\lim\limits_{n\to+\infty}\phi(t+s_{n})=0\) for all \(t\in\mathbb{R}\). Then \(v(t)=T_{\tau}u(t)\) for all \(t\in\mathbb{R}\). This ends the proof.
According to Theorem 2.4, if the set \(\{T_{\tau}^{+}u;\tau\geq 0\}\) is relatively compact in \(BC(\mathbb{R}^{+},X)\), then the function \(u\) is asymptotically almost periodic. We now answer the question of what can be said about the function \(u\) when \(\{T_{\tau}^{+}u;\tau\geq 0\}\) is a compact set of \(BC(\mathbb{R}^{+},X)\). For \(u\in BC(\mathbb{R}^{+},X)\), we say that \(u\) is \(\omega\)_-periodic (\(\omega>0\)) on \([t_{0},+\infty)\) for some \(t_{0}\geq 0\)_ if \(u(t+\omega)=u(t)\) for all \(t\geq t_{0}\).
**Corollary 5.2**.: _Let \((X,d)\) be a complete metric space. For \(u\in BC(\mathbb{R}^{+},X)\) the following statements are equivalent._
**i)** _There exists \(t_{0}\geq 0\) such that \(u\) is \(\omega\)-periodic on \([t_{0},+\infty)\)._
**ii)** _The set \(\{T_{\tau}^{+}u;\tau\geq 0\}\) is a compact set of \((BC(\mathbb{R}^{+},X),d_{\infty,+})\)._
_Remark 5.3_.: Let \(u\) be a function which satisfies condition i) of Corollary 5.2.
**i)** Let us denote by \(v\in C(\mathbb{R},X)\) the \(\omega\)-periodic function satisfying \(u(t)=v(t)\) for \(t\geq t_{0}\). Such a function \(v\) exists and is unique; \(v\) is defined by \(v(t)=u(t-[\frac{t-t_{0}}{\omega}]\omega)\), where \([\frac{t-t_{0}}{\omega}]\) denotes the integer part of \(\frac{t-t_{0}}{\omega}\).
**ii)** The function \(u\) is a special case of an asymptotically almost periodic function, where the almost periodic function \(v\) is periodic and \(d(u(t),v(t))=0\) for \(t\geq t_{0}\).
Proof of Corollary 5.2.: **i)\(\implies\)ii)**. Let us denote by \(v\) the function defined in Remark 5.3. By Corollary 5.1 and the periodicity of \(v\), the set \(\{R(T_{\tau}v);\tau\geq t_{0}\}=\{R(T_{\tau}v);\tau\in\mathbb{R}\}\) is a compact set of \((BC(\mathbb{R}^{+},X),d_{\infty,+})\).
First, we have \(T_{\tau}^{+}u=R(T_{\tau}v)\) for \(\tau\geq t_{0}\), then \(\{T_{\tau}^{+}u;\tau\geq t_{0}\}\) is a compact set.
Second, the function \(u\) is uniformly continuous on \(\mathbb{R}^{+}\), hence the function from \(\mathbb{R}^{+}\) to \(BC(\mathbb{R}^{+},X)\) defined by \(\tau\mapsto T_{\tau}^{+}u\) is continuous. Then the set \(\{T_{\tau}^{+}u;0\leq\tau\leq t_{0}\}\) is compact.
Therefore the set \(\{T_{\tau}^{+}u;\tau\geq 0\}\) is a compact set of \(BC(\mathbb{R}^{+},X)\) as the union of two compact sets.
**ii)\(\implies\)i)**. As a consequence of Theorem 2.4, the function \(u\) is asymptotically almost periodic, that is, \(\lim\limits_{t\to\infty}d(u(t),v(t))=0\) for some \(v\in AP(\mathbb{R},X)\). An almost periodic function is uniformly recurrent, so there exists a sequence of real numbers \((t_{n})_{n}\) such that \(\lim\limits_{n\to+\infty}t_{n}=+\infty\) and \(\lim\limits_{n\to+\infty}v(t+t_{n})=v(t)\) for all \(t\in\mathbb{R}\). We deduce that
\[\forall t\in\mathbb{R},\qquad\lim\limits_{n\to+\infty}u(t+t_{n})=v(t). \tag{5.2}\]
_First we prove that \(v\) is periodic._ For \(t\in\mathbb{R}\) and \(\tau_{1}\), \(\tau_{2}\geq 0\), we have, for \(n\) large enough, \(d(u(t+t_{n}+\tau_{1}),u(t+t_{n}+\tau_{2}))\leq\sup\limits_{s\geq 0}d(u(s+\tau_{1}),u(s+\tau_{2}))\). From (5.2), it follows that \(\sup\limits_{t\in\mathbb{R}}d(v(t+\tau_{1}),v(t+\tau_{2}))\leq\sup\limits_{s \geq 0}d(u(s+\tau_{1}),u(s+\tau_{2}))\) for each \(\tau_{1}\) and \(\tau_{2}\geq 0\). According to Lemma 3.2, \(\{T_{\tau}v\,;\,\tau\geq 0\}\) is a compact set of \((BC(\mathbb{R},X),d_{\infty})\) since \(\{T_{\tau}^{+}u\,;\,\tau\geq 0\}\) is a compact set of \((BC(\mathbb{R}^{+},X),d_{\infty,+})\). As a consequence of Corollary 5.1, the function \(v\) is periodic.
_Second we prove that: \(\exists t_{0}\geq 0\) such that \(\forall t\geq 0\), \(v(t)=u(t+t_{0})\)._ By compactness of \(\{T_{\tau}^{+}u;\tau\geq 0\}\), there exists a subsequence \((T_{t_{\phi(n)}}^{+}u)_{n}\) such that \(\lim\limits_{n\to+\infty}d_{\infty,+}(T_{t_{\phi(n)}}^{+}u,T_{t_{0}}^{+}u)=0\) for some \(t_{0}\geq 0\). From (5.2) we deduce that \(R(v)=T_{t_{0}}^{+}u\), that is \(v(t)=u(t+t_{0})\) for all \(t\geq 0\).
Then \(u(t)=v(t-t_{0})\) for each \(t\geq t_{0}\) where the function \(v(\cdot-t_{0})\) is periodic on \(\mathbb{R}\).
Now we give an example of an application of Corollary 5.1 and Corollary 5.2 to dynamical systems. For the definition of a dynamical system, see the paragraph above Corollary 3.3 in Section 3.
**Corollary 5.4**.: _Let \(\left(S(t)\right)_{t\geq 0}\) be a dynamical system on a complete metric space \((X,d)\)._
**i)** _If \(u\) is a positive trajectory, then \(u\) is periodic on \([t_{0},+\infty)\) for some \(t_{0}\geq 0\) if and only if \(u(\mathbb{R}^{+})\) is a compact set and \(\left(S(t)\right)_{t\geq 0}\) is equicontinuous on \(u(\mathbb{R}^{+})\)._
**ii)** _If \(u\) is a complete trajectory, then \(u\) is periodic if and only if \(u(\mathbb{R})\) is a compact set and \(\left(S(t)\right)_{t\geq 0}\) is equicontinuous on \(u(\mathbb{R})\)._
**iii)** _There exists a complete trajectory which is periodic if and only if there exists a positive trajectory \(u\) such that \(u(\mathbb{R}^{+})\) is a compact set and \(\left(S(t)\right)_{t\geq 0}\) is equicontinuous on \(u(\mathbb{R}^{+})\)._
_Remark 5.5_.: Thus, under the assumption of equicontinuity, a complete trajectory of a dynamical system with a compact range is necessarily periodic, although there are almost periodic functions with a compact range which are not periodic. An example of such a function is given by Haraux in [8].
Proof of Corollary 5.4.: **i)** Remark that if \(u\) is a positive trajectory which is periodic on \([t_{0},+\infty)\) for some \(t_{0}\geq 0\), then first \(u(\mathbb{R}^{+})\) is compact and second \(u\in AAP(\mathbb{R}^{+},X)\) (see Remark 5.3). As a consequence of Corollary 3.5, the family \(\left(S(t)\right)_{t\geq 0}\) is equicontinuous on \(u(\mathbb{R}^{+})\). Conversely, assume the positive trajectory \(u\) is such that \(\left(S(t)\right)_{t\geq 0}\) is equicontinuous on the compact set \(u(\mathbb{R}^{+})\). It remains to prove that the positive trajectory \(u\) is periodic on \([t_{0},+\infty)\) for some \(t_{0}\geq 0\). For each \(x\in u(\mathbb{R}^{+})\), the map \(S(\cdot)x\) is continuous and satisfies \(S(t)x\in u(\mathbb{R}^{+})\) for each \(t\geq 0\). Then the map \(S(\cdot)x\) is bounded and continuous, so the map \(\Phi:u(\mathbb{R}^{+})\to BC(\mathbb{R}^{+},X)\) with \(\Phi(x)=S(\cdot)x\) is well-defined. The continuity of \(\Phi\) results from the equicontinuity of \(\left(S(t)\right)_{t\geq 0}\) on \(u(\mathbb{R}^{+})\). Then the set \(\Phi(u(\mathbb{R}^{+}))=\left\{\Phi(u(\tau))\,;\,\tau\geq 0\right\}\) is a compact subset of \(BC(\mathbb{R}^{+},X)\). Moreover \(\Phi(u(\tau))(t)=S(t)u(\tau)=u(t+\tau)\) for \(t\) and \(\tau\geq 0\), so \(\Phi(u(\tau))=T_{\tau}^{+}u\), and \(\left\{T_{\tau}^{+}u\,;\,\tau\geq 0\right\}\) is a compact set of \(BC(\mathbb{R}^{+},X)\). According to Corollary 5.2, the function \(u\) is periodic on \([t_{0},+\infty)\) for some \(t_{0}\geq 0\).
**ii)** The proof of ii) is similar to that of i), using Corollary 3.3 instead of Corollary 3.5, Corollary 5.1 instead of Corollary 5.2, and replacing the map \(\Phi:u(\mathbb{R}^{+})\to BC(\mathbb{R}^{+},X)\) with \(\Phi(x)=S(\cdot)x\) by the map \(\Phi:u(\mathbb{R})\to BC(\mathbb{R}^{+},X)\). This allows us to prove that the set \(\left\{\Phi(u(\tau))=R(T_{\tau}u);\tau\in\mathbb{R}\right\}\) is a compact set of \((BC(\mathbb{R}^{+},X),d_{\infty,+})\).
**iii)** If \(v\) is a complete trajectory which is periodic, then \(v(\mathbb{R})\) is compact and, according to ii), \(\left(S(t)\right)_{t\geq 0}\) is equicontinuous on \(v(\mathbb{R})\). So the restriction \(u\) of \(v\) to \(\mathbb{R}^{+}\) is a positive trajectory such that \(u(\mathbb{R}^{+})=v(\mathbb{R}^{+})=v(\mathbb{R})\) since \(v\) is periodic; then \(\left(S(t)\right)_{t\geq 0}\) is equicontinuous on the compact set \(u(\mathbb{R}^{+})\). Conversely, assume that \(u\) is a positive trajectory such that \(\left(S(t)\right)_{t\geq 0}\) is equicontinuous on the compact set \(u(\mathbb{R}^{+})\). According to i), \(u\) is \(\omega\)-periodic on \([t_{0},+\infty)\) for some \(t_{0}\geq 0\). Let us denote by \(v\) the function defined in Remark 5.3. For \(t\geq s\), there exists \(n_{0}\in\mathbb{N}\) such that \(s+n_{0}\omega\geq t_{0}\). The function \(v\) is \(\omega\)-periodic and \(u\) is a positive trajectory satisfying \(u(\tau)=v(\tau)\) for \(\tau\geq t_{0}\), hence \(v(t)=v(t+n_{0}\omega)=u(t+n_{0}\omega)=S(t-s)u(s+n_{0}\omega)=S(t-s)v(s+n_{0}\omega)=S(t-s)v(s)\) for all \(t\geq s\). Then \(v\) is a periodic complete trajectory.
_Remark 5.6_.: Under i) of Corollary 5.4, one can have \(t_{0}>0\), that is the positive trajectory \(u\) is not the restriction of a periodic complete trajectory. For example, consider the bounded dynamical system \(\left(S(t)\right)_{t\geq 0}\) on \(L^{1}(0,1)\) defined by
\[(S(t)x)(s)=\left\{\begin{array}{ll}x(s-t)&\mbox{if}\ \ \ t<s<1\\ \\ 0&\mbox{if}\ \ 0<s<t\end{array}\right.\]
for \(x\in L^{1}(0,1)\) and \(0<t<1\). For \(t\geq 1\), we set \(S(t)=0\). Then every positive trajectory has a compact range and the only complete trajectory is the null function. Thus no positive trajectory, except the null function, is the restriction of a periodic complete trajectory.
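A brief sketch of why these claims hold, using only the definition of \(S\) above and the definition of a complete trajectory recalled in Section 3: for any complete trajectory \(u\) and any \(\tau\geq 1\),
\[u(t)=u\big((t-\tau)+\tau\big)=S(\tau)u(t-\tau)=0\qquad\text{for every }t\in\mathbb{R},\]
since \(S(\tau)=0\) for \(\tau\geq 1\); moreover, for any \(x\in L^{1}(0,1)\), the map \(t\mapsto S(t)x\) is continuous on \(\mathbb{R}^{+}\) and vanishes for \(t\geq 1\), so the positive trajectory \(\{S(t)x\,;\,t\geq 0\}=\{S(t)x\,;\,0\leq t\leq 1\}\) is the image of a compact interval under a continuous map, hence compact.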
Not all dynamical systems have this pathology: some systems are such that if two positive trajectories have the same value at the same time, then they are equal. For such systems, we obtain more refined results from Corollary 5.4.
A dynamical system \(\left(S(t)\right)_{t\geq 0}\) has the _backward uniqueness property_ if any two positive trajectories having the same value at some \(t=t_{0}\geq 0\) coincide for every other \(t\geq 0\). This property is equivalent to the injectivity of \(S(t)\in C(X,X)\) for each \(t\geq 0\). We say that _a positive trajectory \(u\) is extendable to a periodic complete trajectory_ if there exists a periodic complete trajectory whose restriction to \(\mathbb{R}^{+}\) is \(u\).
**Corollary 5.7**.: _Let \(\left(S(t)\right)_{t\geq 0}\) be a dynamical system on a complete metric space \((X,d)\). Assume that \(\left(S(t)\right)_{t\geq 0}\) has the backward uniqueness property._
**i)** _If \(u\) is a positive trajectory, then \(u\) is periodic on \(\mathbb{R}^{+}\) if and only if \(u(\mathbb{R}^{+})\) is a compact set and \(\left(S(t)\right)_{t\geq 0}\) is equicontinuous on \(u(\mathbb{R}^{+})\). In this case the positive trajectory \(u\) is extendable to a periodic complete trajectory \(v\)._
**ii)** _If \(v\) is a complete trajectory, then \(v\) is periodic if and only if \(v(\mathbb{R}^{+})\) is a compact set and \(\left(S(t)\right)_{t\geq 0}\) is equicontinuous on \(v(\mathbb{R}^{+})\)._
Proof.: **i)** The direct implication follows from i) of Corollary 5.4. For the converse implication we use i) of Corollary 5.4. Then the positive trajectory \(u\) is \(\omega\)-periodic on \([t_{0},+\infty)\) for some \(t_{0}\geq 0\). Let us denote by \(v\in C(\mathbb{R},X)\) the \(\omega\)-periodic function satisfying \(u(t)=v(t)\) for \(t\geq t_{0}\) (see Remark 5.3). The restriction of \(v\) to \(\mathbb{R}^{+}\) and \(u\) are two positive trajectories having the same value at \(t=t_{0}\) (\(t_{0}\geq 0\)). From the backward uniqueness property, we have \(u(t)=v(t)\) for \(t\geq 0\), so \(u\) is periodic on \(\mathbb{R}^{+}\). By construction, \(v\) is periodic, and as in the proof of iii) of Corollary 5.4, we deduce that \(v\) is a complete trajectory.
**ii)** The direct implication follows from ii) of Corollary 5.4, since \(v(\mathbb{R}^{+})=v(\mathbb{R})\). For the converse implication, we consider a complete trajectory \(v\) such that \(v(\mathbb{R}^{+})\) is a compact set and \(\left(S(t)\right)_{t\geq 0}\) is equicontinuous on \(v(\mathbb{R}^{+})\). Then the restriction \(u\) of the complete trajectory \(v\) to \(\mathbb{R}^{+}\) is a positive trajectory such that \(u(\mathbb{R}^{+})\) is compact and \(\left(S(t)\right)_{t\geq 0}\) is equicontinuous on \(u(\mathbb{R}^{+})\). According to i), \(u\) is \(\omega\)-periodic on \(\mathbb{R}^{+}\). Let us denote by \(w\in C(\mathbb{R},X)\) the \(\omega\)-periodic function satisfying \(u(t)=w(t)\) for \(t\geq 0\). As in the proof of iii) of Corollary 5.4, we deduce that \(w\) is a complete trajectory. Fix \(T>0\). The two maps \(\tilde{v}\), \(\tilde{w}:\mathbb{R}^{+}\to X\) defined by \(\tilde{v}=v(\cdot-T)\) and \(\tilde{w}=w(\cdot-T)\) are two positive trajectories having the same value at \(t=T\). From the backward uniqueness property, we have \(\tilde{v}=\tilde{w}\), that is, \(v(t)=w(t)\) for \(t\geq-T\). Since \(T\) is arbitrary, \(v(t)=w(t)\) for each \(t\in\mathbb{R}\), where \(w\) is a periodic complete trajectory. This proves that \(v\) is a periodic complete trajectory.
|
2309.03148 | Data-Driven Neural Polar Codes for Unknown Channels With and Without
Memory | In this work, a novel data-driven methodology for designing polar codes for
channels with and without memory is proposed. The methodology is suitable for
the case where the channel is given as a "black-box" and the designer has
access to the channel for generating observations of its inputs and outputs,
but does not have access to the explicit channel model. The proposed method
leverages the structure of the successive cancellation (SC) decoder to devise a
neural SC (NSC) decoder. The NSC decoder uses neural networks (NNs) to replace
the core elements of the original SC decoder, the check-node, the bit-node and
the soft decision. Along with the NSC, we devise additional NN that embeds the
channel outputs into the input space of the SC decoder. The proposed method is
supported by theoretical guarantees that include the consistency of the NSC.
Also, the NSC has computational complexity that does not grow with the channel
memory size. This sets its main advantage over successive cancellation trellis
(SCT) decoder for finite state channels (FSCs) that has complexity of
$O(|\mathcal{S}|^3 N\log N)$, where $|\mathcal{S}|$ denotes the number of
channel states. We demonstrate the performance of the proposed algorithms on
memoryless channels and on channels with memory. The empirical results are
compared with the optimal polar decoder, given by the SC and SCT decoders. We
further show that our algorithms are applicable for the case where there SC and
SCT decoders are not applicable. | Ziv Aharoni, Bashar Huleihel, Henry D. Pfister, Haim H. Permuter | 2023-09-06T16:44:08Z | http://arxiv.org/abs/2309.03148v1 | # Data-Driven Neural Polar Codes for Unknown Channels With and Without Memory
###### Abstract
In this work, a novel data-driven methodology for designing polar codes for channels with and without memory is proposed. The methodology is suitable for the case where the channel is given as a "black-box" and the designer has access to the channel for generating observations of its inputs and outputs, but does not have access to the explicit channel model. The proposed method leverages the structure of the successive cancellation (SC) decoder to devise a neural SC (NSC) decoder. The NSC decoder uses neural networks (NNs) to replace the core elements of the original SC decoder, the check-node, the bit-node and the soft decision. Along with the NSC, we devise an additional NN that embeds the channel outputs into the input space of the SC decoder. The proposed method is supported by theoretical guarantees that include the consistency of the NSC. Also, the NSC has computational complexity that does not grow with the channel memory size. This is its main advantage over the successive cancellation trellis (SCT) decoder for finite state channels (FSCs), which has complexity of \(O(|\mathcal{S}|^{3}N\log N)\), where \(|\mathcal{S}|\) denotes the number of channel states. We demonstrate the performance of the proposed algorithms on memoryless channels and on channels with memory. The empirical results are compared with the optimal polar decoder, given by the SC and SCT decoders. We further show that our algorithms are applicable for cases where the SC and SCT decoders are not applicable.
Channels with memory, data-driven, neural polar decoder, polar codes.
## I Introduction
Polar codes allow the construction of capacity-achieving codes for symmetric binary-input memoryless channels [2]. When given \(N\) independent copies of a binary discrete memoryless channel (DMC) \(W\), successive cancellation (SC) decoding induces a new set of \(N\) binary effective channels \(W_{N}^{(i)}\). Channel polarization is the phenomenon whereby, for \(N\) sufficiently large, almost all of the effective bit channels \(W_{N}^{(i)}\) have capacities close to 0 or 1. Specifically, the fraction of channels with capacity close to 1 approaches \(\mathsf{I}(W)\) and the fraction of channels with capacity close to 0 approaches \(1-\mathsf{I}(W)\), where \(\mathsf{I}(W)\) is the channel's symmetric capacity. The construction of polar codes involves choosing which rows to keep from the square generator matrix given by Arikan's transform [2, Section VII]. The encoding and decoding procedures are performed by recursive formulas whose computational complexity is \(O(N\log N)\).
Polar codes can also be applied to finite state channels (FSCs). Arikan's transform also polarizes the bit channels \(W_{N}^{(i)}\) in the presence of memory [3], and thus the encoding algorithm is the same as if the channel is memoryless. However, the decoding algorithm needs to be updated since the derivation of the SC decoder in [2] relies on the memoryless property. To account for the channel memory, the channel outputs are represented by a trellis, whose nodes capture the information of the channel's memory. This trellis was embedded into the SC decoding algorithm to yield the successive cancellation trellis (SCT) decoding algorithm [4, 5].
However, the SCT decoder is only applicable when the channel model is known and when the channel's state alphabet size is finite and relatively small. For FSCs, the computational complexity of the SCT decoder is \(O(|\mathcal{S}|^{3}N\log N)\), where \(|\mathcal{S}|\) is the number of channel states. For Markov channels where the set of channel states is not finite, the SCT decoder is not applicable without quantization of its states. With quantization, there may be a strong tension between the computational complexity and the error introduced by quantization. Additionally, the SCT decoder cannot be used for an unknown channel with memory without first estimating the channel, as it requires an explicit channel model.
The SCT decoder can also be applied to a larger class of channels (e.g., insertion and deletion channels) where, given the channel output sequence \(Y^{N}\), a trellis can be constructed to efficiently represent \(P_{X^{N},Y^{N}}\)[6]. In that case, the decoding complexity is upper bounded by \(O(M^{3}N\log N)\), where \(M\) is the maximum number of states in any trellis stage. If \(M\) grows
linearly with \(N\), then the complexity of the decoder may grow very rapidly (e.g., \(\geq N^{4}\)) and is dominated by the number of trellis states rather than the block length.
### _Contribution_
We propose a novel methodology for data-driven polar decoders. The methodology treats the channel as a "black-box" used to generate samples of input-output pairs without access to the channel's explicit model. It dissects the polar decoder into two separate components. The first is the sufficient statistic of the channel outputs, which we denote by \(E\). The function \(E:\mathcal{Y}\rightarrow\mathcal{E}\) embeds the channel outputs into a latent space \(\mathcal{E}\subset\mathbb{R}^{d}\). The embeddings \(e\in\mathcal{E}\) are then used as the inputs of the second component, a neural SC (NSC) decoder.
The NSC uses three neural networks (NNs) that replace the three core elements of the SC decoder: the check-node, the bit-node and the soft decision operations. The parameters of the embedding function \(E\) and the NSC parameters are determined in a training phase, in which the mutual information (MI) of the effective channels \(W_{N}^{(i)}\) is estimated. The training of the embedding and the NSC is performed in one of two alternative ways. The first trains the embedding and the NSC jointly. The second determines the parameters of the embedding \(E\) using neural estimation methods [7, 8, 9], and then determines the parameters of the NSC while the parameters of \(E\) are fixed. After the training phase, the set of "clean" effective channels is determined by a Monte Carlo (MC) evaluation of the MI of the effective bit channels to complete the code design.
The NSC decoder is a consistent estimator of an optimal polar decoder. Specifically, for FSCs, the NSC decoder provides a consistent estimator of the mutual information of the effective bit channels. We also analyze its computational complexity and show that it does not grow with the channel memory. This is a main advantage over the SCT decoder, whose computational complexity grows cubically with the channel memory size. To complete the methodology, we provide an extension of the NSC for a stationary input distribution. This involves using the Honda-Yamamoto scheme [10] and incorporating it into our algorithms.
### _Related work_
The use of NNs for polar code design has been considered in the past. In [11], NNs were used to decrease the decoding latency by designing a NN decoder that decodes multiple symbols at once. Other instances used NNs to aid existing algorithms, such as [12, 13, 14]. The paper [15] presents KO codes, a family of deep-learning-driven codes that generalize Reed-Muller and polar codes on the additive white Gaussian noise (AWGN) channel. KO codes are related to the methods proposed here in the sense that we also leverage the structure of Arikan's transform to design efficient decoders. However, we do not change Arikan's transform, and we consider channels with memory. To the best of our knowledge, there is no instance of a data-driven polar code design for channels with memory. This work aims to address this gap by developing the necessary algorithms for this task.
### _Organization_
The paper is organized as follows. Section II defines the notation and gives the necessary background on polar codes. Section III presents the methodology for data-driven polar code design for memoryless channels. Section IV extends the methodology to the case where the channel has memory. Section V extends the algorithms in Sections III and IV to stationary input distributions. Section VI presents the numerical results. The proofs of the paper appear in Section VII. Section VIII includes conclusions and future goals.
## II Notations and Preliminaries
Throughout this paper, we denote by \((\Omega,\mathcal{F},\mathbb{P})\) the underlying probability space on which all random variables are defined, with \(\mathbb{E}\) denoting expectation. Random variables (RVs) are denoted by capital letters and their realizations will be denoted by lower-case letters, e.g. \(X\) and \(x\), respectively. Calligraphic letters denote sets, e.g. \(\mathcal{X}\). We use the notation \(X^{n}\) to denote the RV \((X_{1},X_{2},\ldots,X_{n})\) and \(x^{n}\) to denote its realization. The probability \(\Pr[X=x]\) is denoted by \(P_{X}(x)\). Stochastic processes are denoted by blackboard bold letters, e.g., \(\mathbb{X}:=(X_{i})_{i\in\mathbb{N}}\). An \(n\)-coordinate projection of \(\mathbb{P}\) is denoted by \(P_{X^{n}Y^{n}}:=\mathbb{P}\big{|}_{\sigma(X^{n},Y^{n})}\), where \(\sigma(X^{n},Y^{n})\) is the \(\sigma\)-algebra generated by \((X^{n},Y^{n})\). We denote by \([N]\) the set of integers \(\{1,\ldots,N\}\).
The MI between two RVs \(X,Y\) is denoted by \(\mathsf{I}\,(X;Y)\). The directed information (DI) between \(X^{n}\) and \(Y^{n}\) is defined as \(\mathsf{I}(X^{n}\to Y^{n})=\sum_{i=1}^{n}\mathsf{I}(X^{i};Y_{i}|Y^{i-1})\) [16]. For two distributions \(P,Q\), the cross entropy (CE) is denoted by \(h_{\mathsf{CE}}\,(P,Q)\), the entropy is denoted by \(\mathsf{H}\,(P)\) and the Kullback-Leibler (KL) divergence is denoted by \(\mathsf{D_{KL}}\,(P\|Q)\). The notation \(P\ll Q\) indicates that \(P\) is absolutely continuous with respect to (w.r.t.) \(Q\).
The tuple \(\big{(}W_{Y|X},\mathcal{X},\mathcal{Y}\big{)}\) defines a memoryless channel with input alphabet \(\mathcal{X}\), output alphabet \(\mathcal{Y}\) and a transition kernel \(W_{Y|X}\). Throughout the paper we assume that \(\mathcal{X}=\{0,1\}\). For a memoryless channel, we denote its input distribution
by \(P_{X}=P_{X_{i}}\) for all \(i\in\mathbb{Z}\). The tuple \(\left(W_{Y\parallel X},\mathcal{X},\mathcal{Y}\right)\) defines a time-invariant channel with memory, where \(W_{Y\parallel X}=\left\{W_{Y_{0}|Y^{-1}_{-i+1},X^{0}_{-i+1}}\right\}_{i\in\mathbb{N}}\). The term \(W_{Y^{N}\parallel X^{N}}=\prod_{i=1}^{N}W_{Y_{0}|Y^{-1}_{-i+1},X^{0}_{-i+1}}\) denotes the probability of observing \(Y^{N}\) causally conditioned on \(X^{N}\) [17]. The symmetric capacity of a channel is denoted by \(\mathsf{I}\left(W\right)\). We denote by \(\mathcal{D}_{M,N}=\left\{x_{j,i},y_{j,i}\right\}_{j\in[M],i\in[N]}\sim P_{X^{MN}}\otimes W_{Y^{MN}\parallel X^{MN}}\) a finite sample of input-output pairs of \(M\) consecutive blocks of \(N\) symbols, where \(x_{j,i},y_{j,i}\) denote the \(i\)-th input and output of the \(j\)-th block. The term \(\mathcal{D}_{MN}\) denotes the same sample after its concatenation into one long sequence of input-output pairs.
### _Finite State Channels_
An FSC is defined by the tuple \(\left(\mathcal{X},\mathcal{Y},\mathcal{S},P_{S^{\prime},Y|X,S}\right)\), where \(X\) is the channel input, \(Y\) is the channel output, \(S\) is the channel state at the beginning of the transmission, and \(S^{\prime}\) is the channel state at the end of the transmission. The alphabets \(\mathcal{X}\) and \(\mathcal{S}\) are assumed to be finite. At each time \(t\), the channel has the Markov property, that is, \(P_{S_{t},Y_{t}|X^{t},S^{t-1},Y^{t-1}}=P_{Y_{t},S_{t}|X_{t},S_{t-1}}\). An FSC is called indecomposable if for every \(\varepsilon>0\) there exists an \(n_{0}\in\mathbb{N}\) such that for \(n\geq n_{0}\) we have \(\sup_{s_{0},s_{0}^{\prime},s_{n}\in\mathcal{S},x^{n}\in\mathcal{X}^{n}}|P_{S_{n}|X^{n},S_{0}}\left(s_{n}|x^{n},s_{0}\right)-P_{S_{n}|X^{n},S_{0}}\left(s_{n}|x^{n},s_{0}^{\prime}\right)|<\varepsilon\).
### _Polar Codes for Symmetric Channels_
Let \(G_{N}=B_{N}F^{\otimes n}\) be Arikan's polar transform, i.e., the generator matrix for block length \(N=2^{n}\), \(n\in\mathbb{N}\). The matrix \(B_{N}\) is the permutation matrix called bit-reversal and is given by the recursive relation \(B_{N}=R_{N}(I_{2}\otimes B_{\frac{N}{2}})\) starting from \(B_{2}=I_{2}\). The term \(I_{N}\) denotes the identity matrix of size \(N\) and \(R_{N}\) denotes a permutation matrix called reverse-shuffle [2]. The term \(A\otimes B\) denotes the Kronecker product of \(A\) and \(B\) when \(A,B\) are matrices, and it denotes a tensor product whenever \(A,B\) are distributions. The term \(A^{\otimes N}:=A\otimes A\otimes\cdots\otimes A\) denotes an application of the \(\otimes\) operator \(N\) times.
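For illustration, the transform can also be assembled directly: a minimal NumPy sketch that builds \(F^{\otimes n}\) by repeated Kronecker products, using the standard polar kernel \(F=\begin{bmatrix}1&0\\ 1&1\end{bmatrix}\), and realizes \(B_{N}\) as the bit-reversal permutation of the row indices (the explicit kernel and this direct construction of \(B_{N}\) are stated here for concreteness; they are equivalent to the recursive definitions above).

```python
import numpy as np

def polar_transform(n):
    """G_N = B_N F^{(x)n} for N = 2^n, with the standard kernel F = [[1,0],[1,1]]."""
    N = 2 ** n
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    Fn = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        Fn = np.kron(Fn, F)                   # F^{(x)n} via repeated Kronecker products
    # B_N sends row i to the row whose index is the n-bit reversal of i
    perm = [int(format(i, f"0{n}b")[::-1], 2) for i in range(N)]
    return (np.eye(N, dtype=np.uint8)[perm] @ Fn) % 2

# u = x G_N over GF(2), the mapping used throughout the design phase (here N = 8)
G8 = polar_transform(3)
x = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
u = (x @ G8) % 2
```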
We define a polar code by the tuple \(\left(\mathcal{X},\mathcal{Y},W,E^{W},F,G,H\right)\) that contains the channel \(W\), the channel embedding \(E^{W}\) and the core components of the SC decoder, \(F,G,H\). We define the effective bit channels by the tuple \(\left(W_{N}^{(i)},\mathcal{X},\mathcal{X}^{i-1}\times\mathcal{Y}^{N}\right)\) for all \(i\in[N]\). The term \(E^{W}:\mathcal{Y}\rightarrow\mathcal{E}\) denotes the channel embedding, where \(\mathcal{E}\subset\mathbb{R}^{d}\). It is also referred to in the literature as the channel statistic, but here, for our purposes, we choose to call it the channel embedding. For example, for a memoryless channel \(W:=W_{Y|X}\), a valid choice of \(E^{W}\), as used in the remainder of this paper, is given by the following:
\[E^{W}(y)=\log\frac{W\left(y|1\right)}{W\left(y|0\right)}+\log\frac{P_{X}\left( 1\right)}{P_{X}\left(0\right)}, \tag{1}\]
where the second term in the right-hand-side (RHS) cancels out in the case where \(P_{X}\) is uniform.
The functions \(F:\mathcal{E}\times\mathcal{E}\rightarrow\mathcal{E}\), \(G:\mathcal{E}\times\mathcal{E}\times\mathcal{X}\rightarrow\mathcal{E}\) denote the check-node and bit-node operations, respectively. We denote by \(H:\mathcal{E}\rightarrow[0,1]\) a mapping of the embedding into a probability value, i.e. a soft decision. For the choice of \(E^{W}\) in Equation (1), \(F,G,H\) are given by
\[F(e_{1},e_{2})=2\tanh^{-1}\left(\tanh\frac{e_{1}}{2}\tanh\frac{e _{2}}{2}\right),\] \[G(e_{1},e_{2},u)=e_{2}+(-1)^{u}e_{1},\] \[H(e_{1})=\sigma(e_{1}), \tag{2}\]
where \(\sigma(x)=\frac{1}{1+e^{-x}}\) is the logistic function and \(e_{1},e_{2}\in\mathcal{E},u\in\mathcal{X}\). For this choice, the hard decision rule \(h:[0,1]\rightarrow\mathcal{X}\) is the round function \(h(l)=\mathbb{I}_{l>0.5}\), where \(\mathbb{I}\) is the indicator function. Applying SC decoding on the channel outputs yields an estimate of the transmitted bits and their corresponding posterior distribution [2]. Specifically, after observing \(y^{N}\), SC decoding performs the map \((y^{N},f^{N})\mapsto\left\{\hat{u}_{i},P_{U_{i}|U^{i-1},Y^{N}}\left(1|\hat{u}^{i -1},y^{N}\right)\right\}_{i\in[N]}\), where \(f^{N}\) are the frozen bits that are shared between the encoder and the decoder, i.e., \(f_{i}\in\{0,1\}\) if \(i\in[N]\) is frozen, and \(f_{i}=0.5\)1 if \(i\) is an information bit. This mapping is denoted by
Footnote 1: The value \(0.5\) is chosen arbitrarily to indicate that the bit needs to be decoded.
\[\left\{\hat{u}_{i},P_{U_{i}|U^{i-1},Y^{N}}\left(1|\hat{u}^{i-1},y^{N}\right) \right\}_{i\in[N]}=\mathsf{SC}_{\mathsf{decode}}\left(y^{N},f^{N}\right). \tag{3}\]
For more details on SC decoding, the reader may refer to [2, Section VIII].
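As a concrete illustration, the elementary operations in Equations (1) and (2) can be written directly in code. The sketch below is a minimal NumPy rendering; the binary symmetric channel used for \(E^{W}\) and its crossover probability are illustrative assumptions, not part of the definitions above.

```python
import numpy as np

def embed_bsc(y, p=0.1):
    # E^W(y) = log W(y|1)/W(y|0) for an assumed BSC(p); the prior term vanishes for uniform P_X.
    llr = np.log((1.0 - p) / p)
    return np.where(np.asarray(y) == 1, llr, -llr)

def check_node(e1, e2):
    # F(e1, e2) = 2 atanh(tanh(e1/2) tanh(e2/2)), Eq. (2); clipped for numerical safety.
    t = np.clip(np.tanh(e1 / 2.0) * np.tanh(e2 / 2.0), -1 + 1e-12, 1 - 1e-12)
    return 2.0 * np.arctanh(t)

def bit_node(e1, e2, u):
    # G(e1, e2, u) = e2 + (-1)^u e1, Eq. (2).
    return e2 + np.where(np.asarray(u) == 0, 1.0, -1.0) * e1

def soft_decision(e):
    # H(e) = sigmoid(e); the hard decision outputs 1 whenever H(e) > 0.5.
    return 1.0 / (1.0 + np.exp(-e))
```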
For the case where \(P_{X}\) is a uniform independent and identically distributed (i.i.d.) distribution, we denote by
\[\mathcal{A}=\mathsf{SC}_{\mathsf{design}}\left(\mathcal{D}_{M,N},k,E^{W},F,G,H\right)\]
the procedure of finding the set of good channels \(\mathcal{A}\subset[N]\) with \(|\mathcal{A}|=k\) over the sample \(\mathcal{D}_{M,N}\) with an SC decoder that uses \(E^{W}\), \(F\), \(G\), \(H\) as its elementary operations. This amounts to applying \(\mathsf{SC}_{\mathsf{decode}}\) on the \(M\) blocks in \(\mathcal{D}_{M,N}\). In the design phase, we assume that both \(u^{N}\) and \(y^{N}\) are known to the decoder, and therefore \(f^{N}=u^{N}\) in the design phase. Each application of SC decoding yields \(\left\{P_{U_{i}|U^{i-1},Y^{N}}\left(1|u_{j,1}^{i-1},y_{j,1}^{N}\right)\right\}_{j \in[M],i\in[N]}\) (note that the conditioning is over the true bits \(u^{N}\)). For each \(i\in[N]\), we compute the empirical average
\[\widehat{\mathsf{I}}\left(W_{N}^{(i)}\right)=1+\frac{1}{M}\sum_{j=1}^{M}\log P_{U_{i}|U^{i-1},Y^{N}}\left(u_{j,i}|u_{j,1}^{i-1},y_{j,1}^{N}\right) \tag{4}\]
to estimate the MI of the effective bit channels. Note that (4) follows from the fact that \(\mathsf{H}\left(U_{i}|U^{i-1}\right)=1\) and that the second term is an estimate of \(-\mathsf{H}\left(U_{i}|U^{i-1},Y^{N}\right)\) by the law of large numbers. This estimate is used to complete the polar code design by choosing the set \(\mathcal{A}\subset[N]\) with the highest values of \(\left\{\widehat{\mathsf{I}}\left(W_{N}^{(i)}\right)\right\}_{i\in[N]}\).
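In code, this design step is a simple average-and-sort; the sketch below assumes the per-block posteriors in Equation (4) have already been collected from the decoder (the array layout is an assumption made for illustration).

```python
import numpy as np

def design_info_set(posteriors, k):
    """posteriors[j, i] = P(U_i = u_{j,i} | u_{j,1}^{i-1}, y_{j,1}^N) for block j, index i."""
    # Empirical estimate of I(W_N^(i)) as in Eq. (4).
    i_hat = 1.0 + np.mean(np.log(posteriors), axis=0)
    # Keep the k indices with the largest estimated symmetric capacity.
    info_set = np.sort(np.argsort(i_hat)[-k:])
    return info_set, i_hat
```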
### _Neural Networks and Universal Approximations_
The class of shallow NNs with fixed input and output dimensions is defined as follows [18].
**Definition 1** (NN function class).: For the ReLU activation function \(\sigma_{\mathsf{R}}(x)=\max(x,0)\) and \(d_{i},d_{o}\in\mathbb{N}\), define the class of neural networks with \(k\in\mathbb{N}\) neurons as:
\[\mathcal{G}_{\mathsf{NN}}^{(d_{i},k,d_{o})}:=\left\{g:\mathbb{R}^{d_{i}}\to \mathbb{R}^{d_{o}}:g(x)=\sum_{j=1}^{k}\beta_{j}\sigma_{\mathsf{R}}(\mathrm{W}_ {j}x+b_{j}),\ x\in\mathbb{R}^{d_{i}}\right\}, \tag{5}\]
where \(\sigma_{\mathsf{R}}\) acts component-wise, \(\beta_{j}\in\mathbb{R},\mathrm{W}_{j}\in\mathbb{R}^{d_{o}\times d_{i}}\) and \(b_{j}\in\mathbb{R}^{d_{o}}\) are the parameters of \(g\in\mathcal{G}_{\mathsf{NN}}^{(d_{i},k,d_{o})}\). Then, the class of NNs with input and output dimensions \((d_{i},d_{o})\) is given by
\[\mathcal{G}_{\mathsf{NN}}^{(d_{i},d_{o})}:=\bigcup_{k\in\mathbb{N}}\mathcal{G }_{\mathsf{NN}}^{(d_{i},k,d_{o})}, \tag{6}\]
and the class of NNs is given by \(\mathcal{G}_{\mathsf{NN}}:=\bigcup_{d_{i},d_{o}\in\mathbb{N}}\mathcal{G}_{ \mathsf{NN}}^{(d_{i},d_{o})}\).
NNs form a universal approximation class under mild smoothness conditions [19]. The following theorem specifies the conditions for which NNs are universal approximators [18, Corollary 1].
**Theorem 1** (Universal approximation of NNs).: _Let \(\mathsf{C}(\mathcal{X},\mathcal{Y})\) be the class of continuous functions \(f:\mathcal{X}\to\mathcal{Y}\), where \(\mathcal{X}\subset\mathbb{R}^{d_{i}}\) is compact and \(\mathcal{Y}\subseteq\mathbb{R}^{d_{o}}\). Then, the class of NNs \(\mathcal{G}_{\mathsf{NN}}^{(d_{i},d_{o})}\) is dense in \(\mathsf{C}(\mathcal{X},\mathcal{Y})\), i.e., for every \(f\in\mathsf{C}(\mathcal{X},\mathcal{Y})\) and \(\varepsilon>0\), there exists \(g\in\mathcal{G}_{\mathsf{NN}}^{(d_{i},d_{o})}\) such that \(\|f-g\|_{\infty}\leq\varepsilon\)._
## III Data-Driven Polar Codes for Memoryless Channels
This section focuses on designing data-driven polar codes for memoryless channels, serving as a foundational step towards addressing channels with memory. Although the algorithm presented is intended as a precursor for channels with memory--as detailed in Section IV--its derivation here is influenced by the eventual data-driven polar code for channels with memory. Consequently, the algorithm is not primarily aimed at enhancing existing methods for memoryless channels. Instead, its design illuminates the components essential for the data-driven polar decoder discussed in Section IV.
Let \(W:=W_{Y|X}\) be a binary-input memoryless channel. Consider \(\mathcal{D}_{MN}\sim\left(P_{X}\otimes W_{Y|X}\right)^{\otimes MN}\) as a finite sample of its input-output pairs, with \(P_{X}\left(0\right)=P_{X}\left(1\right)=0.5\). The SC decoding algorithm transforms the channel embedding, as detailed in (1), into the embeddings of the effective bit channels using recursive formulas from [2, Prop. 3]. Notably, while the SC decoder necessitates the explicit channel embedding \(E^{W}\), the channel transition kernel remains unknown in data-driven scenarios. To tackle this challenge, we utilize the mutual information neural estimator (MINE) algorithm [20] to estimate both the channel embedding and its maximum achievable rate. Given \(\mathcal{D}_{MN}\), the MINE algorithm approximates \(\mathsf{I}\left(X;Y\right)\) using the Donsker-Varadhan (DV) variational formula for KL divergences. This approximation results in an estimation of the symmetric capacity (owing to the uniformity of \(P_{X}\)) as
\[\mathsf{I}_{\Phi}\left(\mathcal{D}_{MN}\right)=\max_{\phi\in\Phi}\frac{1}{MN}\sum_{i=1}^{MN}T_{\phi}\left(x_{i},y_{i}\right)-\log\frac{1}{MN}\sum_{i=1}^{MN}e^{T_{\phi}\left(x_{i},\tilde{y}_{i}\right)}, \tag{7}\]
where \(\tilde{y}^{MN}\) represents a random permutation of \(y^{MN}\), \(T_{\Phi}\) is the estimated maximizer of the DV formula, and \(\Phi\) is a compact parameter space for the NN. We represent the MINE algorithm as \(T_{\Phi}=\mathsf{MINE}\) (\(\mathcal{D}_{MN}\)).
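A minimal sketch of the estimate in Equation (7), assuming a critic `T` (a small NN, or any callable scoring an \((x,y)\) pair) is already given; shuffling the outputs breaks the pairing so that the second term approximates the expectation under the product of the marginals, and the maximization over the critic's parameters is left to the usual gradient-based training loop.

```python
import numpy as np

def dv_estimate(T, x, y, seed=0):
    """Donsker-Varadhan estimate of I(X;Y) from paired samples, as in Eq. (7)."""
    rng = np.random.default_rng(seed)
    y_shuf = rng.permutation(y)                      # approximate samples from P_X x P_Y
    joint = np.mean([T(a, b) for a, b in zip(x, y)])
    marginal = np.log(np.mean([np.exp(T(a, b)) for a, b in zip(x, y_shuf)]))
    return joint - marginal

def embedding_from_critic(T, y):
    # Proxy channel embedding obtained from the learned critic (cf. Eq. (8) below).
    return T(1, y) - T(0, y)
```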
The optimal solution of the DV formula is given by \(T^{*}(x,y)=\log\frac{W(y|x)}{\frac{1}{2}W(y|0)+\frac{1}{2}W(y|1)}+c\) for \(c\in\mathbb{R}\)[21]. This connects \(T^{*}\) and \(E^{W}\) through the relation
\[E^{W}(y)=T^{*}(1,y)-T^{*}(0,y). \tag{8}\]
Therefore, when the statistics of the channel are not known, the MINE algorithm's output is used as a proxy for \(E^{W}(y)\) via Eq. (8), i.e. \(E^{W}_{\Phi}\left(y\right)=T_{\Phi}\left(1,y\right)-T_{\Phi}\left(0,y\right)\). This process is outlined in Algorithm 1. The following theorem states that Algorithm 1 induces a consistent estimate of \(\mathsf{H}\left(U_{i}|U^{i-1},Y^{N}\right)\) for memoryless channels.
**Theorem 2** (Successive Cancellation Decoding with MINE for Memoryless Channels).: _Let \(\mathcal{D}_{M,N}\sim\left(P_{X}\otimes W_{Y|X}\right)^{\otimes MN}\), where \(N=2^{n}\), \(M,n\in\mathbb{N}\). Let \(u_{j,i}=(x_{j,1}^{N}G_{N})_{i}\) and let \(E_{\Phi}\) be as defined in Equation (8). Then, for every \(\varepsilon>0\) there exist \(p\in\mathbb{N}\), a compact \(\Phi\subset\mathbb{R}^{p}\) and \(m\in\mathbb{N}\) such that for \(M>m\) and \(i\in[N]\), \(\mathbb{P}\)-a.s._
\[\left|\mathsf{H}_{\Phi}^{M}\left(U_{i}|U^{i-1},Y^{N}\right)-\mathsf{H}\left(U_ {i}|U^{i-1},Y^{N}\right)\right|<\varepsilon, \tag{9}\]
_where_
\[\mathsf{H}_{\Phi}^{M}\left(U_{i}|U^{i-1},Y^{N}\right)=-\frac{1}{M}\sum_{j=1}^ {M}\log P_{U_{i}|U^{i-1},Y^{N}}^{\Phi}\left(u_{j,i}|u_{j,1}^{i-1},y_{j,1}^{N} \right), \tag{10}\]
_and \(P_{U_{i}|U^{i-1}Y^{N}}^{\Phi}\left(u_{j,i}|u_{j,1}^{i-1},y_{j,1}^{N}\right)\) is obtained by applying SC decoding (using \(F,G,H\) as defined in Equation 2) with inputs \(\left\{E^{W}_{\Phi}\left(y_{j,i}\right)\right\}_{i=1}^{N}\) instead of \(\left\{E^{W}\left(y_{j,i}\right)\right\}_{i=1}^{N}\)._
The proof of Theorem 2 follows from two arguments. The first is the consistency of the MINE algorithm [20, Theorem 2]. The second exploits the continuity of the SC decoder to deduce the consistency of \(\mathsf{H}_{\Phi}^{M}\left(U_{i}|U^{i-1},Y^{N}\right)\). The proof is given in Section VII.
**Remark 1**.: The condition stated in Equation (9) is sufficient to ensure that Algorithm 1 produces a polar code such that for any \(N\in\mathbb{N}\) there exists \(M\in\mathbb{N}\) for which the block error rate is \(O(2^{-N^{\beta}})\), where \(\beta<\frac{1}{2}\), as for the analytic polar code in [22]. The law of large numbers implies that \(\lim_{M\to\infty}\mathsf{H}_{\Phi}^{M}\left(U_{i}|U^{i-1},Y^{N}\right)=h_{ \mathsf{CE}}\left(P_{U_{i}|U^{i-1},Y^{N}},P_{U_{i}|U^{i-1},Y^{N}}^{\Phi}\right)\). This is equivalent to
\[\lim_{M\to\infty}\mathsf{H}_{\Phi}^{M}\left(U_{i}|U^{i-1},Y^{N}\right)-\mathsf{ H}\left(U_{i}|U^{i-1},Y^{N}\right)=\mathsf{D}_{\mathsf{KL}}\left(P_{U_{i}|U^{i-1},Y^{N }}\|P_{U_{i}|U^{i-1},Y^{N}}^{\Phi}|P_{U^{i-1},Y^{N}}\right).\]
Theorem 2 suggests that \(\lim_{M\to\infty}\mathsf{D}_{\mathsf{KL}}\left(P_{U_{i}|U^{i-1},Y^{N}}\|P_{U_ {i}|U^{i-1},Y^{N}}^{\Phi}|P_{U^{i-1},Y^{N}}\right)=0\) which implies that \(P_{U_{i}|U^{i-1},Y^{N}}\) and \(P_{U_{i}|U^{i-1},Y^{N}}^{\Phi}\) are equal almost everywhere. Thus, their corresponding Bhattacharyya parameters are equal and consequently have the same block error rate [2, Theorem 3].
## IV Data-Driven Polar Codes for Channels with Memory
This section presents the data-driven methodology for the estimation of a neural polar decoder for channels with memory. In this case, both the channel embedding and the polar decoder need to be estimated. The section starts with the definition of the NSC decoder, and then presents an algorithm that optimizes the NSC decoder and the channel embedding jointly. Next, it presents neural estimation methods that allow the estimation of the channel embedding independently from the NSC decoder. It concludes with the theoretical guarantees of the NSC.
### _Neural Successive Cancellation Decoder_
The following definition introduces the NSC decoder on the basis of the SC decoder, as it appears in [2]. Specifically, it uses the structure of the SC decoder and replaces its elementary operations by NNs.
**Definition 2** (Neural Successive Cancellation Decoder).: Let \(E_{\phi}^{W}\in\mathcal{G}_{\mathsf{NN}}\) be a channel embedding satisfying \(E_{\phi}^{W}:\mathcal{Y}\rightarrow\mathcal{E}\), \(\mathcal{E}\subset\mathbb{R}^{d}\). An NSC is defined by \(F_{\theta_{1}},G_{\theta_{2}},H_{\theta_{3}}\in\mathcal{G}_{\mathsf{NN}}\) with parameters \(\theta=\{\theta_{1},\theta_{2},\theta_{3}\}\), \(\theta\in\Theta\). The NNs satisfy:
* \(F_{\theta}:\mathcal{E}\times\mathcal{E}\rightarrow\mathcal{E}\) is the check-node NN.
* \(G_{\theta}:\mathcal{E}\times\mathcal{E}\times\mathcal{X}\rightarrow\mathcal{E}\) is the bit-node NN.
* \(H_{\theta}:\mathcal{E}\rightarrow[0,1]\) is the soft decision NN.
Application of SC decoding, as defined in Equation (3), with the functions \(E_{\phi},F_{\theta},G_{\theta},H_{\theta}\)2 yields \(P_{U_{i}|U^{i-1},Y^{N}}^{\phi,\theta}\), which is an estimate of \(P_{U_{i}|U^{i-1},Y^{N}}\). Let
Footnote 2: instead of \(E^{W},F,G,H\) as defined in Equations (1),(2).
\[\mathsf{H}_{\phi,\theta}\left(U_{i}|U^{i-1},Y^{N}\right)=h_{\mathsf{CE}}\left( P_{U_{i}|U^{i-1},Y^{N}},P_{U_{i}|U^{i-1},Y^{N}}^{\phi,\theta}\right) \tag{11}\]
be the CE between \(P_{U_{i}|U^{i-1},Y^{N}}\) and \(P_{U_{i}|U^{i-1},Y^{N}}^{\phi,\theta}\).
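To make the definition concrete, the sketch below instantiates \(E_{\phi}\), \(F_{\theta}\), \(G_{\theta}\) and \(H_{\theta}\) as small PyTorch multilayer perceptrons; the embedding dimension \(d\), the hidden width and the single hidden layer are illustrative choices rather than values prescribed by Definition 2.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=32):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

class NSCComponents(nn.Module):
    def __init__(self, y_dim=1, d=8):
        super().__init__()
        self.E = mlp(y_dim, d)                            # E_phi : Y -> R^d (channel embedding)
        self.F = mlp(2 * d, d)                            # F_theta : E x E -> E (check-node)
        self.G = mlp(2 * d + 1, d)                        # G_theta : E x E x X -> E (bit-node)
        self.H = nn.Sequential(mlp(d, 1), nn.Sigmoid())   # H_theta : E -> [0,1] (soft decision)

    def check_node(self, e1, e2):
        return self.F(torch.cat([e1, e2], dim=-1))

    def bit_node(self, e1, e2, u):
        return self.G(torch.cat([e1, e2, u], dim=-1))

    def soft_decision(self, e):
        return self.H(e).squeeze(-1)
```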
The goal of the remainder of this section is to describe how to train the parameters of the NSC. Training the NSC amounts to optimizing \(\phi,\theta\) such that the symmetric capacities of the effective bit channels \(\mathsf{I}\left(W_{N}^{(i)}\right)\) are estimated. It follows that
\[\mathsf{I}\left(W_{N}^{(i)}\right)=1-\mathsf{H}\left(U_{i}|U^{i-1},Y^{N}\right), \tag{12}\]
due to \(\mathsf{H}\left(U_{i}|U^{i-1}\right)=1\), since \(U_{i}\overset{iid}{\sim}\mathsf{Ber}\left(0.5\right)\). Hence, estimating \(P_{U_{i}|U^{i-1},Y^{N}}\) is sufficient to identify the clean effective bit channels. This implies that minimizing the CE between \(P_{U_{i}|U^{i-1},Y^{N}}\) and \(P_{U_{i}|U^{i-1},Y^{N}}^{\phi,\theta}\) is a valid objective for the optimization of \(\phi,\theta\). However, in the data-driven scenario, the true distribution \(P_{U_{i}|U^{i-1},Y^{N}}\) is not known, and therefore the CE is computed via the negative log-loss, which is computed exclusively from \(P_{U_{i}|U^{i-1},Y^{N}}^{\phi,\theta}\). The following definition presents the objective for the optimization of the NSC parameters \(\theta\) and the channel embedding \(\phi\).
**Definition 3** (Optimization Objective of the NSC).: Let \(\mathcal{D}_{M,N}\sim P_{X^{\otimes MN}}\otimes W_{Y^{MN}\|X^{MN}}\), where \(N\) is the block length and \(M\in\mathbb{N}\) is the number of blocks. Let \(u_{j,i}=(x_{j,1}^{N}G_{N})_{i}\). Then, for all \(i\in[N]\)
\[\mathsf{H}_{\Phi,\Theta}^{M}\left(U_{i}|U^{i-1},Y^{N}\right)=\min_{\theta\in \Theta,\phi\in\Phi}\left\{-\frac{1}{M}\sum_{j=1}^{M}\log P_{U_{i}|U^{i-1},Y^{N} }^{\phi,\theta}\left(u_{j,i}|u_{j,1}^{i-1},y_{j,1}^{N}\right)\right\} \tag{13}\]
is the objective for training the NSC.
The explicit computation of \(P_{U_{i}|U^{i-1},Y^{N}}^{\phi,\theta}\) uses the recursive structure of the SC decoder. For each block \(j\in[M]\) in \(\mathcal{D}_{M,N}\), the channel inputs and outputs \(x_{j,1}^{N},y_{j,1}^{N}\) are selected and \(u_{j,1}^{N}\) is computed by \(u_{j,1}^{N}=x_{j,1}^{N}G_{N}\). For simplicity, we omit the block index and focus on a single block, i.e., we simplify the notation to \(x_{1}^{N},y_{1}^{N},u_{1}^{N}\). Let \(e_{l,i}\) denote the embedding of the \(i\)-th bit at the \(l\)-th decoding depth and \(e_{l,1}^{N}\) denote all the embeddings at the \(l\)-th decoding depth. For example, \(e_{0,i}\) denotes the embedding of \(X_{i}\) and \(e_{\log_{2}(N),i}\) denotes the embedding of \(U_{i}\). Accordingly, \(P_{U_{i}|U^{i-1},Y^{N}}^{\phi,\theta}\left(1|u^{i-1},y^{N}\right)=H_{\theta} \left(e_{\log_{2}(N),i}\right)\) by applying the soft-decision NN.
Figure 1: A visualization of Algorithm 2 for \(N=4\). \(L_{\mathsf{ce}}^{\theta}(e,u)\) denotes a single cross-entropy term, and the overall training loss \(\mathsf{L}\) is calculated as the sum of all terms shown in the figure.
After observing \(y^{N}\), the channel embeddings are computed by \(e_{0,i}=E_{\phi}\left(y_{i}\right)\). In the next step, the frozen bits \(f_{1}^{N}=u_{1}^{N}\) and \(e_{0,1}^{N}\) are used to compute the loss of the NSC, as it appears in Definition 3. The loss computation is performed by a recursive function that is based on the recursion of the SC decoder. Specifically, instead of decoding a bit at each leaf of the recursion, a term of the overall loss is accumulated there. The recursion starts with a loss accumulator initialized to \(\mathsf{L}=0\). Then, the NSC decoder starts the recursive computation of the effective bit channels, exactly as in SC decoding, until reaching the first leaf of the recursion. Upon reaching the first leaf, the first loss term of the NSC is accumulated into \(\mathsf{L}\). Precisely, a loss term \(L_{\mathsf{ce}}^{\theta}(e_{\log_{2}(N),1},u_{1})\) is computed via the formula of the binary CE
\[L_{\mathsf{ce}}^{\theta}(e,u)=-u\log\left(H_{\theta}\left(e\right)\right)-(1-u )\log\left(1-H_{\theta}\left(e\right)\right). \tag{14}\]
This is the binary CE between \(P_{U_{1}|Y^{N}}\left(\cdot|y^{N}\right)\) and \(P_{U_{1}|Y^{N}}^{\phi,\theta}\left(\cdot|y^{N}\right)\). In the same manner, at each leaf of the recursion, an additional loss term is accumulated into \(\mathsf{L}\). That is, each time a leaf is reached, \(\mathsf{L}\) is updated according to the following rule
\[\mathsf{L}=\mathsf{L}+L_{\mathsf{ce}}^{\theta}(e_{\log_{2}(N),i},u_{i}),\ \ i \in[N]. \tag{15}\]
```
\(N=\text{dim}\left(\mathbf{u}\right)\)
if \(N=1\) then
    \(\mathsf{L}=\mathsf{L}+L_{\mathsf{ce}}^{\theta}(\mathbf{e},\mathbf{u})\)  \(\triangleright\) Loss at a leaf
    return \(\mathsf{L},\mathbf{u}\)
end if
Split \(\mathbf{e}\) into even and odd indices \(\mathbf{e}_{e},\mathbf{e}_{o}\)
\(\mathbf{e}_{\mathsf{C}}=F_{\theta}\left(\mathbf{e}_{e},\mathbf{e}_{o}\right)\)  \(\triangleright\) Check-node
\(\mathsf{L},\mathbf{v}_{1}=\mathsf{NSCLoss}\left(\mathbf{e}_{\mathsf{C}},\mathbf{u}_{1}^{N/2},\mathsf{L}\right)\)
\(\triangleright\) Bit-node step, second recursive call and bits in the current depth (see the sketch after this listing)
return \(\mathsf{L},\mathbf{v}\)
```
**Algorithm 2**\(\mathsf{NSCLoss}(\mathbf{e},\mathbf{u},\mathsf{L})\)
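For concreteness, the loss recursion of Algorithm 2 can be sketched in Python as follows. The bit-node step via \(G_{\theta}\), the assignment of the two halves of \(u\) to the two recursive calls, and the recombination of the bits at the current depth reflect our reading of Definition 2 and Figure 1 rather than literal statements of the listing above, so they should be treated as assumptions.

```python
import torch

def nsc_loss(e, u, F, G, H, loss_acc):
    """Accumulate the leaf cross-entropy terms of Eqs. (14)-(15).
    e: (N, d) embeddings at the current depth, u: (N,) float tensor of true bits."""
    N = u.shape[0]
    if N == 1:
        p = H(e).reshape(1)                                    # soft decision, Eq. (14)
        loss_acc = loss_acc - (u * torch.log(p) + (1 - u) * torch.log(1 - p)).sum()
        return loss_acc, u                                     # loss at a leaf
    e_even, e_odd = e[0::2], e[1::2]                           # split into even/odd indices
    e_check = F(e_even, e_odd)                                 # check-node embeddings
    loss_acc, v1 = nsc_loss(e_check, u[: N // 2], F, G, H, loss_acc)
    e_bit = G(e_even, e_odd, v1)                               # bit-node embeddings (assumed step)
    loss_acc, v2 = nsc_loss(e_bit, u[N // 2:], F, G, H, loss_acc)
    v = torch.empty_like(u)                                    # bits at the current depth
    v[0::2] = torch.remainder(v1 + v2, 2)                      # assumed butterfly recombination
    v[1::2] = v2
    return loss_acc, v
```

For simplicity the sketch only accumulates the \(N\) leaf terms of Equation (15); the intermediate-depth terms discussed below would be added by applying \(H_{\theta}\) to the embeddings at every level of the recursion in the same way.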
```
\(N=2^{n_{\mathsf{t}}}\)
for \(k=1\) to N\({}_{\text{iters}}\) do
    Sample \(x_{1}^{N},y_{1}^{N}\sim\mathcal{D}_{M,N}\)
    \(u_{1}^{N}=x_{1}^{N}G_{N}\)
    Compute \(e_{0,1}^{N}\) by \(e_{0,i}=E_{\phi}\left(y_{i}\right)\)
    Compute \(\mathsf{L}\) by applying \(\mathsf{NSCLoss}\left(e_{0,1}^{N},u_{1}^{N},0\right)\)
    Minimize \(\mathsf{L}\) w.r.t. \(E_{\phi}^{W},F_{\theta},G_{\theta},H_{\theta}\)
end for
\(\mathcal{A}=\mathsf{SC}_{\mathsf{design}}\left(\mathcal{D}_{M,N},k,E_{\phi}^{W},F_{\theta},G_{\theta},H_{\theta}\right)\)
```
**Algorithm 3** Data-driven polar code for channels with memory
In addition, we make the algorithm more robust by also accumulating the loss incurred by the bits at the intermediate decoding depths \(0,1,\ldots,\log_{2}N-1\). That is, the loss accumulates \(N(\log_{2}N+1)\) terms, corresponding to all \(N\) bits at each of the \(\log_{2}N+1\) decoding depths. This is depicted in Figure 1 and Algorithm 2.
The complete procedure is summarized in Algorithm 3. First, the parameters of \(E_{\phi}^{W},F_{\theta},G_{\theta},H_{\theta}\) are initialized and the training block length is set to \(N=2^{n_{\mathsf{t}}}\). Then, in every iteration, \(x_{1}^{N},y_{1}^{N}\) are drawn from \(\mathcal{D}_{M,N}\) and \(u_{1}^{N}\) is computed by \(u_{1}^{N}=x_{1}^{N}G_{N}\). Next, the channel embeddings are computed by \(e_{0,i}=E_{\phi}\left(y_{i}\right)\) for all \(i\in[N]\). At this stage, the loss is obtained by evaluating \(\mathsf{NSCLoss}\left(e_{0,1}^{N},u_{1}^{N},0\right)\), as given in Algorithm 2. The loss \(\mathsf{L}\) is minimized using stochastic gradient descent (SGD) over the parameters \(\phi,\theta\). This procedure repeats for a predetermined number of steps, or until the CE stops improving.
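A compact training loop mirroring Algorithm 3 is sketched below in PyTorch, reusing the nsc_loss sketch given after Algorithm 2. The network sizes, the optimizer and the sample_block data source are illustrative placeholders of ours, not choices made in the paper, and encoding conventions such as bit-reversal are glossed over.

```python
import torch
import torch.nn as nn

d, k = 8, 64                                                    # embedding dim and hidden width (illustrative)
E_net = nn.Sequential(nn.Linear(1, k), nn.ReLU(), nn.Linear(k, d))                 # E_phi
F_net = nn.Sequential(nn.Linear(2 * d, k), nn.ReLU(), nn.Linear(k, d))             # F_theta
G_net = nn.Sequential(nn.Linear(2 * d + 1, k), nn.ReLU(), nn.Linear(k, d))         # G_theta
H_net = nn.Sequential(nn.Linear(d, k), nn.ReLU(), nn.Linear(k, 1), nn.Sigmoid())   # H_theta

E = lambda y: E_net(y.unsqueeze(-1))                            # per-output channel embedding
F = lambda a, b: F_net(torch.cat([a, b], dim=-1))
G = lambda a, b, v: G_net(torch.cat([a, b, v.unsqueeze(-1)], dim=-1))
H = lambda e: H_net(e)

def polar_generator(n):
    # n-fold Kronecker power of F = [[1, 0], [1, 1]] (bit-reversal permutation omitted)
    Fk, G_N = torch.tensor([[1., 0.], [1., 1.]]), torch.tensor([[1.]])
    for _ in range(n):
        G_N = torch.kron(Fk, G_N)
    return G_N

def train_nsc(sample_block, n_t, n_iters, lr=1e-3):
    # sample_block() -> (x, y): one block of N = 2**n_t channel inputs (bits) and outputs
    params = (list(E_net.parameters()) + list(F_net.parameters())
              + list(G_net.parameters()) + list(H_net.parameters()))
    opt = torch.optim.Adam(params, lr=lr)
    G_N = polar_generator(n_t)
    for _ in range(n_iters):
        x, y = sample_block()
        u = torch.remainder(x.float() @ G_N, 2)                 # u = x G_N over GF(2)
        loss, _ = nsc_loss(E(y.float()), u, F, G, H, torch.zeros(()))
        opt.zero_grad(); loss.backward(); opt.step()            # minimize L w.r.t. phi and theta
```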
### _Channel Embedding Estimation via Neural Estimation Methods_
This section argues that neural estimation methods for the DI, as presented in [7, 8], may be used to estimate the channel embedding function independently from the NSC. The motivation for independent estimation of the channel embedding is demonstrated by memoryless channels. In this case, once the channel embedding is chosen, e.g., the log-likelihood ratio (LLR), the corresponding SC decoder is compatible with all channels; the only quantities that need to be computed are the channel LLRs. Thus, in Section III the channel embeddings are estimated via the MINE algorithm, and the SC decoder is identical for all channels. In the same manner, it is desirable to find algorithms for estimating the channel embeddings of channels with memory, such that they are compatible with a single NSC decoder.
Thus, this section identifies that the output of the directed information neural estimator (DINE) algorithm may be used to construct channel embeddings for channels with memory, in the same manner as the MINE is used for memoryless channels. Section IV-B1 provides a brief background on the DINE algorithm. Section IV-B2 shows that the optimizer of the DINE objective yields a sufficient statistic of the channel outputs \(Y^{N}\) for the estimation of \(U^{N}\). Section IV-B3 shows how to extract the channel embedding from the DINE model.
#### IV-B1 Estimating the Capacity of Channels with Memory
Let \(W:=W_{Y^{MN}\|X^{MN}}\) be a binary-input channel with memory and let \(\mathcal{D}_{MN}\sim P_{X^{MN}}\otimes W_{Y^{MN}\|X^{MN}}\) be a finite sample of its input-output pairs. The DINE algorithm [7, 8] estimates the DI rate from \(\mathbb{X}\) to \(\mathbb{Y}\) using the following formula:
\[\mathsf{I}_{\Psi}\left(\mathbb{X}\rightarrow\mathbb{Y}\right) =\max_{\psi\in\Psi_{XY}}\Bigg{\{}\frac{1}{MN}\sum_{i=1}^{MN}T_{ \psi}\left(x_{i},y_{i}|x^{i-1},y^{i-1}\right)-\log\frac{1}{MN}\sum_{i=1}^{MN}e^ {T_{\psi}\left(x_{i},z_{i}|x^{i-1},y^{i-1}\right)}\Bigg{\}}\] \[-\max_{\psi\in\Psi_{Y}}\Bigg{\{}\frac{1}{MN}\sum_{i=1}^{MN}T_{ \psi}\left(y_{i}|y^{i-1}\right)-\log\frac{1}{MN}\sum_{i=1}^{MN}e^{T_{\psi} \left(z_{i}|y^{i-1}\right)}\Bigg{\}}, \tag{16}\]
where \(T_{\psi}\in\mathcal{G}_{\mathsf{RNN}}\), the space of recurrent neural networks (RNNs) whose parameter space is \(\Psi\). The RVs \(Z^{N}\) are auxiliary i.i.d. RVs distributed on \(\mathcal{Y}^{N}\) and independent of \(X^{N},Y^{N}\), that are used for the estimation of the DI as presented in [7]. The estimated maximizers of the first and second terms are denoted by \(T_{\Psi_{XY}}\) and \(T_{\Psi_{Y}}\), respectively.
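Once the per-step outputs of the two RNNs, evaluated on the true samples \(y_i\) and on the auxiliary samples \(z_i\), are available, the DINE objective in Equation (16) reduces to the difference of two Donsker-Varadhan style terms. A small numpy sketch (the array names are ours):

```python
import numpy as np

def dv_term(t_true, t_aux):
    # mean of T on true samples minus log-mean-exp of T on auxiliary samples
    return t_true.mean() - np.log(np.exp(t_aux).mean())

def dine_estimate(t_xy, t_xy_aux, t_y, t_y_aux):
    # Eq. (16): DI estimate = DV term with (x, y) history - DV term with y history only
    return dv_term(t_xy, t_xy_aux) - dv_term(t_y, t_y_aux)
```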
#### IV-B2 DINE gives Sufficient Statistics of the Channel Outputs
The optimal maximizer of the \(i\)-th argument of the first term in Equation (16) is given by \(T_{i}^{*}=\log\frac{P_{Y_{i}|Y^{i-1},X^{i}}}{P_{Z}}+c\) for \(c\in\mathbb{R}\) and \(i\in\mathbb{N}\). For fixed \(y^{N}\), we define a new RV \(T_{y^{i}}^{*}:\mathcal{X}^{i}\rightarrow\mathbb{R}\) by
\[T_{y^{i}}^{*}\left(x^{i}\right)=\log\frac{P_{Y_{i}|Y^{i-1},X^{i}}\left(y_{i}|y ^{i-1},x^{i}\right)}{P_{Z}\left(y_{i}\right)}. \tag{17}\]
The following theorem states that \(T^{N}\triangleq\left\{T_{Y^{i}}^{*}\right\}_{i=1}^{N}\) is a sufficient statistic of \(Y^{N}\) for the estimation of \(U^{N}\).
**Theorem 3**.: _Let \(X^{N},Y^{N}\sim P_{X^{N}}\otimes W_{Y^{N}\|X^{N}}\) and \(P_{Z}\) such that \(P_{Y}\ll P_{Z}\). Then \(T^{N}\), as defined in Equation (17), satisfies_
\[U^{N}-Y^{N}-T^{N}, \tag{18}\] \[U^{N}-T^{N}-Y^{N}. \tag{19}\]
The proof of Theorem 3 is given in Section VII. The main steps of the proof are to express \(P_{X^{N},Y^{N}}\) in terms of \(T^{N}\) and use the well-known Fisher-Neyman Factorization theorem.
#### IV-B3 Obtaining the parametric channel embedding from the DINE model
Theorem 3 suggests that DI estimation is an appropriate objective for the construction of the channel embedding of \(Y^{N}\) needed for the NSC decoder. However, the evaluation of \(T^{N}\) for all \(x^{N}\in\mathcal{X}^{N}\) involves an exponential number of computations. To overcome this, recall that according to Equation (16), \(T_{\Psi_{XY}}\) is approximated by a RNN that contains a sequence of layers. Therefore, we design \(T_{\psi_{XY}}\) to process \(x^{N},e^{N}\) instead of processing \(x^{N},y^{N}\). Specifically, we denote this construction by \(T_{\psi_{XY},\phi}\left(x^{i},y^{i}\right)=\widetilde{T}_{\Psi_{XY}}\left(x^{i },e^{i}\right)\), where
\[e_{i}=E_{\phi}^{W}\left(y_{i}\right),\;E_{\phi}^{W}:\mathcal{Y}\rightarrow \mathbb{R}^{d} \tag{20}\]
is an embedding of \(y_{i}\).
With this parameterization, after applying [8, Algorithm 1] we obtain \(T_{\Psi_{XY},\Phi}\), which contains \(E_{\Phi}^{W}\) as an intermediate layer. Since \(T_{\Psi_{XY},\Phi}\) is composed of sequential layers, any intermediate layer of \(T_{\psi_{XY},\phi}\) must preserve the information that flows to its outputs. Therefore, we choose \(E_{\Phi}^{W}\) to be the channel embedding required for the NSC decoder. For this choice, the parameters of the channel embedding are fixed and the NSC can be optimized without the optimization over \(\Phi\). Specifically, the minimization in Definition 3 is performed exclusively over \(\Theta\).
### _Consistency_
The next theorem shows the consistency of the NSC for channels with memory. It demonstrates that, for FSCs, Algorithm 3 yields a consistent estimator of the SC polar decoder.
**Theorem 4** (Successive Cancellation Decoding of Time Invariant Channels (restated)).: _Let \(\mathbb{X},\mathbb{Y}\) be the inputs and outputs of an indecomposable FSC as given in Section II-A. Let \(\mathcal{D}_{M,N}\sim P_{X^{MN}}\otimes W_{Y^{MN}\|X^{MN}}\), where \(N=2^{n},\;M,n\in\mathbb{N}\). Let
\(u_{j,i}=(x_{j,1}^{N}G_{N})_{i}\). Then, for every \(\varepsilon>0\) there exist \(p\in\mathbb{N}\), compact \(\Phi,\Theta\subset\mathbb{R}^{p}\) and \(m\in\mathbb{N}\) such that for \(M>m\) and \(i\in[N]\), \(\mathbb{P}-a.s\)._
\[\left|\mathsf{H}_{\Phi,\Theta}^{M}\left(U_{i}|U^{i-1},Y^{N}\right)-\mathsf{H} \left(U_{i}|U^{i-1},Y^{N}\right)\right|<\varepsilon. \tag{21}\]
Theorem 4 concludes that there exist NNs that approximate the SC elementary operations with arbitrary precision. It also indicates that these operations do not depend on the specific block, or on the specific symbol location inside the block. That is, the same NNs, \(E_{\phi},F_{\theta},G_{\theta},H_{\theta}\), may be used for all decoding stages and for all bits inside each decoding stage.
The proof starts with identifying that the structure of \(P_{U_{i}|U^{i-1},Y^{N}}^{\phi,\theta}\left(1|u^{i-1},y^{N}\right)\) is induced by the structure of the SC decoder and that it contains \(4\) unique operations, \(E_{\phi}^{W},F_{\theta},G_{\theta},H_{\theta}\), that operate on channel embeddings in \(\mathbb{R}^{d}\). It continues with an approximation step, in which \(P_{U_{i}|U^{i-1},Y^{N}}\left(1|u^{i-1},y^{N}\right)\) is parameterized by NNs via the universal approximation theorem of NNs [19]. Then, an estimation step follows, in which expected values are estimated by empirical means via the uniform law of large numbers for stationary and ergodic processes [23]. The full proof is given in Section VII.
### _Computational Complexity_
The following theorem examines the computational complexity of the NSC decoder for the case where \(E_{\phi},F_{\theta},G_{\theta},H_{\theta}\) are NNs with \(k\) hidden units and the embedding space satisfies \(\mathcal{E}\subset\mathbb{R}^{d}\).
**Theorem 5** (Computational Complexity of the NSC).: _Let \(E_{\phi}\in\mathcal{G}_{\mathsf{NN}}^{(1,k,d)},F_{\theta}\in\mathcal{G}_{ \mathsf{NN}}^{(2d,k,d)},G_{\theta}\in\mathcal{G}_{\mathsf{NN}}^{(2d+1,k,d)},H _{\theta}\in\mathcal{G}_{\mathsf{NN}}^{(d,k,1)}\). Then, the computational complexity of NSC decoding is \(O\left(kdN\log_{2}N\right)\)._
Proof.: According to [2, Section VIII], the recursive formulas of the SC decoder have complexity of \(O(N\log N)\). In [2, Section VIII] the decoding operations have complexity of \(O(1)\) and therefore do not affect the overall complexity. Here, we consider decoding operations that are given by NNs with input dimension at most \(2d+1\), \(k\) hidden units and output dimension of at most \(d\). The complexity of such a NN is \(O(kd)\), which yields an overall NSC decoding complexity of \(O(kdN\log N)\).
The only difference between Theorem 5 and [2, Theorem 5] is that the NN computation complexity is given explicitly by \(kd\) (even though it could have been neglected as it does not depend on \(N\) or the channel's state space). The goal of Theorem 5 is to compare the NSC decoder with SCT decoder. Recall that the computational complexity of the SCT decoder is \(O\left(|\mathcal{S}|^{3}N\log N\right)\). This sets a main advantage of the NSC decoder - its computational complexity does not grow with the memory size of the channel.
## V Extension to Asymmetric Input Distributions via Honda-Yamamoto Scheme
This section describes how to extend the data-driven polar decoder to the case where the input distribution is not necessarily symmetric. It starts with a brief description of the Honda-Yamamoto scheme [10]. Then, it extends our methods to accommodate asymmetric input distributions by incorporating this scheme. In the case where the channel is memoryless, the extension is straightforward and it is described in Section V-B. For the case where the channel has memory, the extension entails applying the NSC twice, as done in the Honda-Yamamoto scheme. This is described in Section V-C.
### _Honda-Yamamoto Scheme for Asymmetric Channels_
The Honda-Yamamoto scheme [10] generalizes polar coding for asymmetric input distributions. Here, the polar decoder is applied twice; first, before observing the channel outputs and second, after observing the channel outputs. An equivalent interpretation is that the first application of SC decoding is done on a different channel whose outputs are independent of its inputs. Indeed, in this case, as given in Equation (1), the first term of the RHS cancels out, and it follows that the channel embeddings are constant for all \(y\in\mathcal{Y}\). Thus, for the first application of SC decoding, we denote the constant _input embedding_ by \(E^{X}\) (rather than \(E^{W}\)). The second application of SC decoding follows the same procedure as in the case for symmetric channels.
Accordingly, a polar decoder with a non-symmetric input distribution is defined by the tuple \(\left(\mathcal{X},\mathcal{Y},W,E^{X},E^{W},F,G,H\right)\). Here, we add the input embedding \(E^{X}\) to the definition, where \(E^{X}(y)\) is constant for all \(y\in\mathcal{Y}\). An important observation is that the functions \(F,G,H\) are independent of the channel, i.e., both applications of SC decoding (before and after observing the channel outputs) share the same functions \(F,G,H\).
Given a finite sample \(\mathcal{D}_{M,N}\), we denote by
\[\mathcal{A}=\mathsf{SC}_{\mathsf{design-HY}}\left(\mathcal{D}_{M,N},k,E^{X},E^ {W},F,G,H\right)\]
the procedure of finding the set of good channels \(\mathcal{A}\subset[N]\) with \(|\mathcal{A}|=k\) over the sample \(\mathcal{D}_{M,N}\) with an SC decoder that uses \(E^{X}\), \(E^{W}\), \(F\), \(G\), \(H\) as its elementary operations. Specifically, for each block \(j\in[M]\), SC decoding is applied twice. First, \(E^{X}\) is used to compute the channel embedding; this yields the computation of \(\left\{P_{U_{i}|U^{i-1}}\left(1|u_{j,1}^{i-1}\right)\right\}_{j\in[M],i\in[N]}\). Next, \(E^{W}\) is used to compute the channel embedding; this yields the computation of \(\left\{P_{U_{i}|U^{i-1},Y^{N}}\left(1|u_{j,1}^{i-1},y_{j,1}^{N}\right)\right\}_{j\in[M],i\in[N]}\). For each \(i\in[N]\), we compute the empirical average
\[\widehat{\mathsf{I}}\Big{(}W_{N}^{(i)}\Big{)}=-\frac{1}{M}\sum_{j=1}^{M}\log P_{U_{i}|U^{i-1}}\left(u_{j,i}|u_{j,1}^{i-1}\right)+\frac{1}{M}\sum_{j=1}^{M}\log P_{U_{i}|U^{i-1},Y^{N}}\left(u_{j,i}|u_{j,1}^{i-1},y_{j,1}^{N}\right)\]
to estimate the MI of the effective bit channels. This estimate is used to complete the polar code design by choosing \(\mathcal{A}\subset[N]\) with the highest values of \(\left\{\widehat{\mathsf{I}}\Big{(}W_{N}^{(i)}\Big{)}\right\}_{i\in[N]}\).
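Given the per-bit probabilities produced by the two SC passes, the selection of \(\mathcal{A}\) reduces to averaging and sorting. A short numpy sketch of this design step (the array names are ours):

```python
import numpy as np

def select_good_channels(logp_prior, logp_posterior, k):
    # logp_prior[j, i]     = log P(u_{j,i} | u_j^{i-1})           (first pass, embedding E^X)
    # logp_posterior[j, i] = log P(u_{j,i} | u_j^{i-1}, y_j^N)    (second pass, embedding E^W)
    mi_hat = (logp_posterior - logp_prior).mean(axis=0)   # empirical estimate of I(W_N^(i))
    return np.sort(np.argsort(mi_hat)[-k:])               # indices of the k best bit channels
```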
### _Memoryless Channels_
The case of asymmetric input distributions is similar to the case of polar code design for uniform input distributions. We deal with non-uniform input distributions by applying the Honda-Yamamoto scheme [10]. Let \(W:=W_{Y|X}\) be a binary-input memoryless channel and \(P_{X}\) an i.i.d. and non-uniform input distribution3. Accordingly, given \(\mathcal{D}_{MN}\), here as well, once \(T_{\Phi}\) is estimated via the MINE algorithm, it is used as a proxy for the channel embedding \(E^{W}\) via the formula
Footnote 3: In the case of memoryless channels it is sufficient to consider an i.i.d. input distribution since it achieves capacity [24].
\[E_{\Phi}^{W}\left(y\right)=T_{\Phi}\left(1,y\right)-T_{\Phi}\left(0,y\right)+ \log\frac{P_{X}\left(1\right)}{P_{X}\left(0\right)} \tag{22}\]
and the Honda-Yamamoto scheme is applied "as-is". Specifically, for this case, Algorithm 1 is applied with the only exception that the polar code design \(\mathsf{SC}_{\mathsf{design}}\) is replaced by \(\mathsf{SC}_{\mathsf{design}-\mathsf{HY}}\).
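For completeness, the embedding in Equation (22) can be built from a trained MINE network \(T_{\Phi}\) in one line; in the sketch below T is any callable taking a candidate input bit and a channel output (our naming):

```python
import numpy as np

def embedding_from_mine(T, p_x1):
    # E^W_Phi(y) = T_Phi(1, y) - T_Phi(0, y) + log(P_X(1) / P_X(0)), cf. Eq. (22)
    return lambda y: T(1, y) - T(0, y) + np.log(p_x1 / (1.0 - p_x1))
```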
### _Channels with Memory_
This section considers two issues. The first is the choice of an input distribution. This is addressed by employing algorithms for capacity estimation [7, 8]. The second issue addresses the construction of a NSC decoder that is tailored for stationary input distributions.
For the choice of the input distribution, we employ a recent method for the optimization of the DINE, as presented in [8]. Therein, the authors provide a reinforcement learning (RL) algorithm that uses DINE to estimate capacity-achieving input distributions. The input distribution is approximated with an RNN whose parameter space is denoted by \(\Pi\). Let \(P_{X}^{\pi}\) be the estimated capacity-achieving input distribution. Thus, by application of [8, Algorithm 1], we obtain a model of \(P_{X}^{\pi}\) from which we are able to sample observations of the channel inputs.
Extension of Algorithm 3 to \(P_{X^{N}}\) (which is not necessarily uniform and i.i.d.) involves introducing additional parameters, which we denote by \(\phi_{2}\in\Phi\). Accordingly, we denote the set of channel-embedding parameters by \(\phi=\{\phi_{1},\phi_{2}\}\), where \(\phi_{1}\) denotes the parameters of \(E^{X}\) and \(\phi_{2}\) the parameters of \(E^{W}\). We define \(E^{X}_{\phi}:\mathcal{Y}\rightarrow\mathbb{R}^{d}\) as a constant RV that satisfies \(E^{X}_{\phi}\left(y\right)=e_{X}\in\mathbb{R}^{d}\) for all \(y\in\mathcal{Y}\). Accordingly, the NSC in this case is defined by \(E^{X}_{\phi},E^{W}_{\phi},F_{\theta},G_{\theta},H_{\theta}\). Thus, Algorithm 3 needs to be updated in order to optimize \(E^{X}_{\phi}\) as well. This is addressed by first applying the NSC with inputs \(e^{N}_{X}\) to compute \(P^{\phi,\theta}_{U_{i}|U^{i-1}}\), where \(e^{N}_{X}\in\mathbb{R}^{d\times N}\) is a matrix whose columns are duplicates of \(e_{X}\). Second, the NSC is applied with \(e^{N}_{Y}\) to compute \(P^{\phi,\theta}_{U_{i}|U^{i-1},Y^{N}}\), where \(e^{N}_{Y}\in\mathbb{R}^{d\times N}\) is a matrix whose \(i\)-th column is \(E^{W}_{\phi}\left(y_{i}\right)\).
The training procedure admits the following steps. First, the channel inputs and outputs are sampled by \(x^{N},y^{N}\sim P^{\pi}_{X^{N}}\otimes W_{Y^{N}\|X^{N}}\). Then, the values of \(u^{N}=x^{N}G_{N}\) are computed, and form the labels of the algorithm. Next, the channel statistics \(e^{N}_{Y}\) are computed and the input statistics are duplicated to obtain \(e^{N}_{X}\). The next step is to apply the NSC-Train procedure twice, i.e.
\[\mathsf{L}_{X} =\mathsf{NSCTrain}(e^{N}_{X},u^{N},0) \tag{23}\] \[\mathsf{L}_{Y} =\mathsf{NSCTrain}(e^{N}_{Y},u^{N},0), \tag{24}\]
which are minimized via SGD. This procedure is depicted in Algorithm 4.
## VI Experiments
This section presents experiments on memoryless channels and channels with memory. The experiments demonstrate the performance of the proposed algorithms for symmetric and non-symmetric input distributions. Section VI-A considers memoryless channels, and Section VI-B considers channels with memory. Each section includes results for both uniform and non-uniform input distributions. All the polar codes in this section, unless specified otherwise, are designed with rate \(R=0.25\).
### _Memoryless channels_
The following experiments test the proposed methodology to design polar codes for various memoryless channels. To demonstrate our algorithm, we conduct our experiments on both symmetric and non-symmetric binary-input discrete memoryless channels (B-DMCs). The binary symmetric channel (BSC) and the AWGN channel are chosen as instances of symmetric channels. A non-symmetric binary erasure channel (BEC), as defined in [25], is chosen as an instance of an asymmetric B-DMC. To validate our numerical results, we compare our algorithms with the SC decoder, which provides the optimal decoding rule under the framework of polar codes.
The BSC is defined by \(W(y|x)=p\mathbb{I}[y\neq x]+(1-p)\mathbb{I}[y=x]\); here, we choose \(p=0.1\). The AWGN channel is defined by the relation \(Y=X+N\), where \(X\) is the channel input, \(Y\) is the channel output and \(N\sim\mathcal{N}(0,\sigma^{2})\) is i.i.d. Gaussian noise. In our experiments \(\sigma^{2}=0.5\). The non-symmetric BEC is defined by two erasure probabilities, \(\epsilon_{0},\epsilon_{1}\), namely the probabilities of erasing the "\(0\)" symbol and the "\(1\)" symbol, respectively. Accordingly, \(W(x|x)=1-\epsilon_{x},\ W(?|x)=\epsilon_{x}\) for \(x\in\{0,1\}\). Similar to [25], we choose \(\epsilon_{0}=0.4,\epsilon_{1}=0.8159\).
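For reference, the three memoryless channels used in these experiments can be simulated in a few lines. The numpy sketch below follows the definitions above; encoding the BEC erasure symbol as -1 is our convention, not the paper's.

```python
import numpy as np
rng = np.random.default_rng(0)

def bsc(x, p=0.1):
    return (x + rng.binomial(1, p, size=x.shape)) % 2           # flip each bit w.p. p

def awgn(x, sigma2=0.5):
    return x + rng.normal(0.0, np.sqrt(sigma2), size=x.shape)   # Y = X + N, N ~ N(0, sigma^2)

def asymmetric_bec(x, eps0=0.4, eps1=0.8159, erasure=-1):
    eps = np.where(x == 1, eps1, eps0)                           # per-symbol erasure probability
    return np.where(rng.random(x.shape) < eps, erasure, x)
```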
Figure 2: The left figure compares the bit error rate (BER) incurred by Algorithm 1 and the SC decoder on a BSC with parameter \(0.1\). The right figure compares the BER incurred by Algorithm 1 and the SC decoder on a AWGN channel. The curves labeled by \(E^{W},E^{W}_{\Phi}\) correspond to the analytic and estimated channel embedding, respectively.
Figure 2 shows the application of Algorithm 1 to the BSC and the AWGN channels. It reports the BERs obtained by Algorithm 1 in comparison with the SC decoder. Figure 3 illustrates two comparisons. The first compares the BERs obtained via the extension of Algorithm 1 to the Honda-Yamamoto scheme, as described in Section V-B, and by the optimal decoding rule of the Honda-Yamamoto scheme. It also compares the BERs obtained with a symmetric input distribution against those obtained with the capacity-achieving input distribution. Contrary to what we expected, better BERs were obtained with the symmetric input distribution. The reason stems from the polarization of the source \(U^{N}\), which has a negative effect for short block lengths.
### _Channels with Memory_
The experiments here demonstrate the performance of the NSC decoder on various channels. First, having a sample \(\mathcal{D}_{M,N}\) does not indicate if it is drawn from a channel with or without memory. As memoryless channels are a special case of channels with memory, we start by testing the NSC on memoryless channels. The experiments proceed to FSCs, for which there exists an analytic polar decoder given by the SCT decoder [5]. Recall that the computational complexity of the SCT decoder is \(O(|\mathcal{S}|^{3}N\log N)\); therefore, we evaluate our algorithms on channels with a small state space and on channels with a large state space, i.e., \(|\mathcal{S}|^{3}\gg N\log N\). The last experiments test the NSC decoder on channels with an infinite state space, for which an optimal decision rule is intractable.
As instances of channels with memory, we choose the Ising channel [26] and the inter-symbol interference (ISI) channel [24]. These channels belong to the family of FSCs, and therefore their optimal decoding rule is given by the SCT decoder. We also tested our methodology on channels with a continuous state space, for which the SCT decoder cannot be applied. As an instance of such channels, we choose the moving average additive Gaussian noise (MA-AGN) channel. The
Figure 4: The figure compares the BER incurred by Algorithm 3 and the SCT decoder on the Ising, Trapdoor, and ISI (\(m=2\)) channels, and on the MA-AGN channel. All panels compare with the ground truth given by the SCT decoder, except for the MA-AGN channel, for which there is no known optimal polar decoder.
Figure 3: The figure compares the BERs incurred on the asymmetric BEC; both panels compare the results for \(P_{X}(1)=0.5\) and \(P_{X}(1)=\frac{9}{16}\) (the capacity-achieving input distribution). The left panel shows the BERs incurred by applying the memoryless algorithm (Algorithm 1) for both choices of \(P_{X}\). The right panel shows the corresponding ground truth, obtained by using \(E_{W}\) instead of \(E_{W}^{\Phi}\).
Ising channel [26] is defined by \(Y=X\) or \(Y=S\) with equal probability, and \(S^{\prime}=X\), where \(X\) is the channel input, \(Y\) is the channel output, \(S\) is the channel's state at the beginning of the transmission and \(S^{\prime}\) is the channel's state at the end of the transmission. The interference channel is defined by the formula \(Y_{t}=\sum_{i=0}^{m}h_{i}X_{t-i}+Z_{t}\), where \(X_{t},Y_{t}\) are the channel input and output at time \(t\), \(\{h_{i}\}_{i=0}^{m}\) are the interference parameters and \(Z_{t}\overset{iid}{\sim}\mathcal{N}(0,\sigma^{2})\). In our experiment we set \(h_{i}=0.9^{i}\) and \(\sigma^{2}=0.5\). The MA-AGN channel is given by \(Y_{t}=X_{t}+\tilde{Z}_{t}\), \(\tilde{Z}_{t}=Z_{t}+\alpha Z_{t-1}\), where \(\alpha\in\mathbb{R}\) and \(Z_{t}\overset{iid}{\sim}\mathcal{N}(0,\sigma^{2})\).
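The channels with memory can be simulated analogously. The numpy sketch below follows the definitions above; the initial state of the Ising channel is an arbitrary choice of ours.

```python
import numpy as np
rng = np.random.default_rng(0)

def ising(x, s0=0):
    s, y = s0, np.empty_like(x)
    for t, xt in enumerate(x):
        y[t] = xt if rng.random() < 0.5 else s                   # Y = X or Y = S, each w.p. 1/2
        s = xt                                                   # next state S' = X
    return y

def isi(x, m=2, sigma2=0.5):
    h = 0.9 ** np.arange(m + 1)                                  # h_i = 0.9^i
    xp = np.concatenate([np.zeros(m), x.astype(float)])          # zero-pad the past inputs
    y = np.array([h @ xp[t:t + m + 1][::-1] for t in range(len(x))])
    return y + rng.normal(0.0, np.sqrt(sigma2), size=len(x))     # Y_t = sum_i h_i X_{t-i} + Z_t

def ma_agn(x, alpha=0.9, sigma2=0.5):
    z = rng.normal(0.0, np.sqrt(sigma2), size=len(x) + 1)
    return x + z[1:] + alpha * z[:-1]                            # Y_t = X_t + Z_t + alpha Z_{t-1}
```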
Figure 4 compares the BERs attained by Algorithm 3 against the optimal decoding rule for the AWGN, Ising and ISI (\(m=2\)) channels, and for the MA-AGN channel with \(\alpha=0.9\). For the last channel, we illustrate the incurred BERs without any comparison since, as far as we know, there is no available decoding rule for channels with a continuous state space. Figure 5 illustrates the BERs attained by Algorithm 4 on the Ising channel with a stationary (non-symmetric) input distribution, as well as the convergence of Algorithm 3 when applied to the Ising channel. It also illustrates the BERs incurred for varying values of \(n_{\text{t}}\), the block length used in training. It is clear that increasing \(n_{\text{t}}\) produces better estimation results. The table accompanying Figure 5 depicts the decoding complexity of the SCT and the NSC decoders for the ISI channel.
## VII Proofs
This section provides the proofs for the main theorems of the paper.
### _Proof of Theorem 2_
The proof relies on the consistency of the MINE algorithm [20, Theorem 2]. Specifically, for all \(\varepsilon>0\) there exists \(m>0\) such that for all \(M>m\), \(\mathbb{P}-a.s.\),
\[\left|\mathsf{I}_{\Phi}\left(\mathcal{D}_{M}\right)-\mathsf{I}\left(X;Y\right)\right|<\varepsilon,\]
Figure 5: The left figure illustrates the performance of Algorithm 4 on the Ising channel in comparison with the ground truth given by the Honda-Yamamoto scheme. The accompanying table reports the decoding complexity of the SCT and NSC decoders for the ISI channel as a function of the memory \(m\):

| \(m\) | SCT | NSC |
| --- | --- | --- |
| 1 | \(O(2^{3}N\log N)\) | \(O(800N\log N)\) |
| 2 | \(O(2^{6}N\log N)\) | \(O(800N\log N)\) |
| 3 | \(O(2^{9}N\log N)\) | \(O(800N\log N)\) |
| 4 | \(O(2^{12}N\log N)\) | \(O(800N\log N)\) |
| 5 | \(O(2^{15}N\log N)\) | \(O(800N\log N)\) |
| 6 | \(O(2^{18}N\log N)\) | \(O(800N\log N)\) |
where \(\mathsf{I}_{\Phi}\left(\mathcal{D}_{M}\right)=\max_{\phi\in\Phi}\frac{1}{M}\sum_{ i=1}^{M}T_{\phi}\left(x_{i},y_{i}\right)-\log\frac{1}{M}\sum_{i=1}^{M}e^{T_{\phi} \left(x_{i},\bar{y}_{i}\right)}\), as defined in Equation (7). Recall that \(T_{\Phi}\) denotes the maximizer of the MINE algorithm and \(T^{*}=\log\frac{P_{X,Y}}{P_{X}\otimes P_{Y}}\).
The next argument claims that for all \(\varepsilon^{\prime}>0\) there exists \(\varepsilon>0\) such that \(\mathbb{P}-a.s.\)\(|T^{*}-T_{\Phi}|<\varepsilon^{\prime}\). For this we define the Gibbs density \(G_{\Phi}\triangleq\frac{P_{X}\otimes P_{Y}e^{T_{\Phi}}}{\mathbb{E}_{P_{X} \otimes P_{Y}}\left[e^{T_{\Phi}}\right]}\). First note that \(\mathbb{E}_{P_{X,Y}}\left[G_{\Phi}\right]=1\) and \(G_{\Phi}\geq 0\) and therefore is a valid density. For \(P\triangleq P_{X,Y}\) and \(Q\triangleq P_{X}\otimes P_{Y}\), we observe
\[\mathsf{D}_{\mathsf{KL}}\left(P\|G_{\Phi}\right) =\mathbb{E}_{P}\left[\log\frac{P}{\frac{Qe^{T_{\Phi}}}{\mathbb{E} _{Q}\left[e^{T_{\Phi}}\right]}}\right] \tag{25}\] \[=\mathbb{E}_{P}\left[\log\frac{P}{Q}-\log\frac{e^{T_{\Phi}}}{ \mathbb{E}_{Q}\left[e^{T_{\Phi}}\right]}\right]\] (26) \[=\mathsf{I}\left(X;Y\right)-\left(\mathbb{E}_{P}\left[T_{\Phi} \right]-\log\mathbb{E}_{Q}\left[e^{T_{\Phi}}\right]\right)\geq 0, \tag{27}\]
with equality if and only if \(P=G_{\Phi}\) almost everywhere. From the consistency of MINE we have that \(\mathsf{D}_{\mathsf{KL}}\left(P\|G_{\Phi}\right)<\varepsilon\). Since the mappings \(T_{\Phi}\mapsto G_{\Phi}\) and \(G_{\Phi}\mapsto\mathsf{D}_{\mathsf{KL}}\left(P\|G_{\Phi}\right)\) are continuous, we conclude that \(\mathsf{D}_{\mathsf{KL}}\left(P\|G_{\Phi}\right)<\varepsilon\) implies \(\mathbb{P}-a.s.\)\(|T^{*}-T_{\Phi}|<\varepsilon^{\prime}\).
The last step is to observe that the mapping of the SC decoder is continuous. Specifically, the mapping \(E_{\Phi}^{W}=T_{\Phi}\left(1,\cdot\right)-T_{\Phi}\left(0,\cdot\right)\) is continuous and \(F,G,H\), as defined in Equation (2), are continuous, and therefore for all \(\varepsilon^{\prime\prime}>0\) there exists \(\varepsilon^{\prime}>0\) such that \(\left|\mathsf{H}_{\Phi}^{M}\left(U_{i}|U^{i-1},Y^{N}\right)-\mathsf{H}\left(U_{ i}|U^{i-1},Y^{N}\right)\right|<\varepsilon^{\prime\prime}\), which concludes the proof.
### _Proof of Theorem 4_
The proof starts with identifying the structure of the NSC. Specifically, it concludes that there exist recursive formulas for SC decoding of the polar code in which the channel embeddings belong to a \(d\)-dimensional Euclidean space, for some \(d\in\mathbb{N}\). First, we recall the channel embedding, as defined by the authors of [5].
**Definition 4** (Channel Embedding).: Let \(W\) be a FSC. Then, \(W_{N}^{(i)}\left(y_{1}^{N},u_{1}^{i-1}\right)\) is defined by
\[W_{N}^{(i)}\left(y_{1}^{N},u_{1}^{i-1}\right)=\left\{P_{Y_{1}^{N},U_{1}^{i},S_{N}|S_{0}}\left(y_{1}^{N},u_{1}^{i},s_{N}|s_{0}\right)\right\}_{(s_{0},s_{N},u_{i})\in\mathcal{S}\times\mathcal{S}\times\mathcal{X}}. \tag{28}\]
The channel embedding is the set of all the joint probabilities of \(y_{1}^{N},u_{1}^{i}\) given every initial state and final state of the channel. The channel embedding \(W_{N}^{(i)}\left(y_{1}^{N},u_{1}^{i-1}\right)\) is a finite set if \(|\mathcal{X}|,|\mathcal{S}|<\infty\). It is a deterministic function of \(Y_{1}^{N},U_{1}^{i-1}\) and it is represented by a vector whose dimension is \(|\mathcal{S}|^{2}(|\mathcal{X}|-1)\); its entries correspond to all possible values of \(U_{i},S_{0},S_{N}\). Herein, the dimension of the vector is denoted by \(|\mathcal{S}|^{2}\) due to the assumption that \(|\mathcal{X}|=2\).
The next lemma shows the structure of the polar decoder, for which the channel embeddings are represented in \(\mathbb{R}^{d}\) for any \(d\in\mathbb{N}\), instead of \(\mathbb{R}^{|\mathcal{S}|^{2}}\) as in the case of the SCT decoder.
**Lemma 1** (Structure).: _Fix \(n\in\mathbb{N}\) and set \(N=2^{n},i\in[N]\). Let \(\mathcal{E}\subset\mathbb{R}^{d}\) for \(d\in\mathbb{N}\). Then, for all \(i\in[N]\) the functions \(P_{U_{i}|U_{1}^{i-1},Y_{1}^{N}}\) may be computed using four distinct functions \(E:\mathcal{Y}\rightarrow\mathcal{E},F:\mathcal{E}\times\mathcal{E}\rightarrow \mathcal{E},G:\mathcal{E}\times\mathcal{E}\times\mathcal{X}\rightarrow\mathcal{E },H:\mathcal{E}\rightarrow[0,1]\). The functions are given by_
\[E_{1}^{(1)}\left(y_{i}\right) =E\left(y_{i}\right) \tag{29}\] \[E_{2N}^{(2i-1)}\left(y_{1}^{2N},u_{1}^{2i-2}\right) =F\left(E_{N}^{(i)}\left(y_{1}^{N},u_{1,e}^{2i-2}\oplus u_{1,o}^{2i -2}\right),E_{N}^{(i)}\left(y_{N+1}^{2N},u_{1,e}^{2i-2}\right)\right)\] (30) \[E_{2N}^{(2i)}\left(y_{1}^{2N},u_{1}^{2i-1}\right) =G\left(E_{N}^{(i)}\left(y_{1}^{N},u_{1,e}^{2i-2}\oplus u_{1,o}^{2i -2}\right),E_{N}^{(i)}\left(y_{N+1}^{2N},u_{1,e}^{2i-2}\right),u_{2i-1}\right)\] (31) \[P_{U_{i}|U^{i-1},Y^{N}}\left(1|u_{1}^{i-1},y_{1}^{N}\right) =H\left(E_{N}^{(i)}\left(y_{1}^{N},u_{1}^{i-1}\right)\right). \tag{32}\]
Let \(E_{\Phi}^{W}\) and \(F_{\theta},G_{\theta},H_{\theta}\), as defined in Definition 2, be NNs. For simplicity we denote \(\Theta^{\prime}=\Theta\times\Phi\). For every choice of \(\theta\in\Theta^{\prime}\), there is an induced distribution denoted by \(P_{U_{i}|U^{i-1},Y^{N}}^{\theta}\); we denote by \(P_{\theta}\left(U^{i-1},Y^{N}\right)\triangleq P_{U_{i}|U^{i-1},Y^{N}}^{\theta} \left(1|U^{i-1},Y^{N}\right)\). The following lemma states that there exists \(\theta\in\Theta^{\prime}\) that can approximate \(\mathsf{H}\left(U_{i}|U^{i-1},Y^{N}\right)\) with an arbitrary precision.
**Lemma 2** (Approximation).: _Let \(\varepsilon>0\). There exists a NN \(P_{\theta}:\mathcal{X}^{i-1}\times\mathcal{Y}^{N}\rightarrow[0,1]\), with parameters \(\theta\) in some compact space \(\Theta^{\prime}\subset\mathbb{R}^{p}\), \(p\in\mathbb{N}\), such that_
\[\left|\mathsf{H}\left(U_{i}|U^{i-1},Y^{N}\right)-\mathsf{H}_{\Theta^{\prime}} \left(U_{i}|U^{i-1},Y^{N}\right)\Big{|}<\varepsilon, \tag{33}\]
_where_
\[\mathsf{H}_{\Theta^{\prime}}\left(U_{i}|U^{i-1},Y^{N}\right)=\min_{\theta\in \Theta^{\prime}}\mathbb{E}\left[-U_{i}\log P_{\theta}\left(U^{i},Y^{N}\right)- \left(1-U_{i}\right)\log\left(1-P_{\theta}\left(U^{i},Y^{N}\right)\right) \right]. \tag{34}\]
The third lemma guarantees the almost sure convergence of the empirical average to the expected value when the number of samples goes to infinity.
**Lemma 3** (Estimation).: _Let \(\varepsilon>0\). Let \(P_{\theta}:\mathcal{X}^{i-1}\times\mathcal{Y}^{N}\rightarrow[0,1]\), with parameters \(\theta\) in some compact space \(\Theta^{\prime}\subset\mathbb{R}^{p}\), \(p\in\mathbb{N}\). Then, there exists \(m\in\mathbb{N}\) such that for all \(M>m\), \(\mathbb{P}\) a.s.,_
\[\left|\mathsf{H}_{\Theta^{\prime}}\left(U_{i}|U^{i-1},Y^{N}\right)-\mathsf{H} _{\Theta^{\prime}}^{M}\left(U_{i}|U^{i-1},Y^{N}\right)\Big{|}<\varepsilon, \tag{35}\]
_where_
\[\mathsf{H}_{\Theta^{\prime}}^{M}\left(U_{i}|U^{i-1},Y^{N}\right)=\min_{ \theta\in\Theta^{\prime}}\frac{1}{M}\sum_{j=1}^{M}\left\{-U_{i}\log P_{\theta }\left(U^{i},Y^{N}\right)-\left(1-U_{i}\right)\log\left(1-P_{\theta}\left(U^{ i},Y^{N}\right)\right)\right\}. \tag{36}\]
The proof is concluded by combining Lemmas 2 and 3. Specifically, we use the lemmas to claim that there exist a compact \(\Theta^{\prime}\subset\mathbb{R}^{p}\) and \(m\in\mathbb{N}\) such that Equations (35) and (33) hold with \(\frac{\varepsilon}{2}\). By the triangle inequality, for all \(M>m\), \(\mathbb{P}-a.s\).
\[\left|\mathsf{H}_{\Theta^{\prime}}^{M}\left(U_{i}|U^{i-1},Y^{N} \right)-\mathsf{H}\left(U_{i}|U^{i-1},Y^{N}\right)\right|\] \[\leq\left|\mathsf{H}_{\Theta^{\prime}}\left(U_{i}|U^{i-1},Y^{N} \right)-\mathsf{H}_{\Theta^{\prime}}^{M}\left(U_{i}|U^{i-1},Y^{N}\right) \right|+\left|\mathsf{H}_{\Theta^{\prime}}\left(U_{i}|U^{i-1},Y^{N}\right)- \mathsf{H}\left(U_{i}|U^{i-1},Y^{N}\right)\right|\] \[<\varepsilon, \tag{37}\]
which concludes the proof.
#### VII-B1 Proof of Lemma 1
We derive the structure of \(P_{U_{i}|U^{i-1},Y^{N}}\) by utilizing the mechanism of the SC decoder. The following derivation is similar to the one in the proof of [5, Theorem 2]. According to [5, Theorem 2], the channel embedding is given by
\[E_{N}^{(i)}\left(y_{1}^{N},u_{1}^{i-1}\right)=\left\{P_{Y_{1}^{N},U_{1}^{i},S_ {N}|S_{0}}\left(y_{1}^{N},u_{1}^{i},s_{N}|s_{0}\right)\right\}_{(s_{0},s_{N}, u_{i})\in\mathcal{S}\times\mathcal{S}\times\mathcal{X}}. \tag{38}\]
For every \(s_{0},s_{N},u_{i}\in\mathcal{S}\times\mathcal{S}\times\mathcal{X}\), it was shown in [5] that the recursive formulas are given by
\[P_{Y_{1}^{2N},U_{1}^{2i-1},S_{2N}|S_{0}}\left(y_{1}^{2N},u_{1}^{ 2i-1},s_{2N}|s_{0}\right)\] \[=\sum_{s_{N}^{\prime}}\sum_{u_{2i}}P_{Y_{1}^{N},U_{1}^{i},S_{N}|S_ {0}}\left(y_{1}^{N},u_{1,e}^{2i}\oplus u_{1,o}^{2i},s_{N}^{\prime}|s_{0}\right) \cdot P_{Y_{1}^{N},U_{1}^{i},S_{N}|S_{0}}\left(y_{N+1}^{2N},u_{1,e}^{2i},s_{2N}| s_{N}^{\prime}\right), \tag{39}\] \[P_{Y_{1}^{2N},U_{1}^{2i},S_{2N}|S_{0}}\left(y_{1}^{2N},u_{1}^{2i},s_{2N}|s_{0}\right)\] \[=\sum_{s_{N}^{\prime}}P_{Y_{1}^{N},U_{1}^{i},S_{N}|S_{0}}\left(y_ {1}^{N},u_{1,e}^{2i}\oplus u_{1,o}^{2i},s_{N}^{\prime}|s_{0}\right)\cdot P_{Y_{ 1}^{N},U_{1}^{i},S_{N}|S_{0}}\left(y_{N+1}^{2N},u_{1,e}^{2i},s_{2N}|s_{N}^{ \prime}\right). \tag{40}\]
The soft decision is given by
\[P_{U_{i}|U^{i-1},Y^{N}}\left(1|u_{1}^{i-1},y^{N}\right)=\sigma\left(\log\frac{ \sum_{s_{N},s_{0}}P_{U_{i},U^{i-1},Y^{N},S_{N}|S_{0}}\left(1,u_{1}^{i-1},y^{N},s_{N}|s_{0}\right)P_{S_{0}}(s_{0})}{\sum_{s_{N},s_{0}}P_{U_{i},U^{i-1},Y^{N},S_{N}|S_{0}}\left(0,u_{1}^{i-1},y^{N},s_{N}|s_{0}\right)P_{S_{0}}(s_{0})} \right), \tag{41}\]
where \(\sigma\) denotes the logistic (sigmoid) function. Equations (39),(40),(41) can be rewritten in a more compact form in terms of \(W_{N}^{(i)}\left(y_{1}^{N},u_{1}^{i-1}\right)\), i.e.
\[E_{2N}^{(2i-1)}\left(y_{1}^{2N},u_{1}^{2i-2}\right) =\widetilde{F}\left(E_{N}^{(i)}\left(y_{1}^{N},u_{1,e}^{2i-2} \oplus u_{1,o}^{2i-2}\right),E_{N}^{(i)}\left(y_{N+1}^{2N},u_{1,e}^{2i-2} \right)\right) \tag{42}\] \[E_{2N}^{(2i)}\left(y_{1}^{2N},u_{1}^{2i-1}\right) =\widetilde{G}\left(E_{N}^{(i)}\left(y_{1}^{N},u_{1,e}^{2i-2} \oplus u_{1,o}^{2i-2}\right),E_{N}^{(i)}\left(y_{N+1}^{2N},u_{1,e}^{2i-2} \right),u_{2i-1}\right)\] (43) \[P_{U_{i}|U^{i-1},Y^{N}}\left(1|u_{1}^{i-1},y_{1}^{N}\right) =\widetilde{H}\left(E_{N}^{(i)}\left(y_{1}^{N},u_{1}^{i-1}\right) \right), \tag{44}\]
where \(\widetilde{F}\) denotes the check-node, \(\widetilde{G}\) denotes the bit-node and \(\widetilde{H}\) denotes the soft decision. Since \(W_{N}^{(i)}\in\mathbb{R}^{|\mathcal{S}|^{2}}\), we can find a bijection between \(\mathbb{R}^{|\mathcal{S}|^{2}}\) and \(\mathbb{R}^{d}\) for any \(d\in\mathbb{N}\) (since \(|\mathbb{R}^{d}|=|\mathbb{R}|\)). We denote this mapping by \(K:\mathbb{R}^{|\mathcal{S}|^{2}}\rightarrow\mathbb{R}^{d}\), and its inverse by \(K^{-1}\). Using \(K\), we may rewrite the equations in the following way
\[E_{1}^{(1)}\left(y_{i}\right)=K\left(W_{1}^{(1)}\left(y_{i}\right)\right) \tag{45}\]
\[E_{2N}^{(2i-1)}\left(y_{1}^{2N},u_{1}^{2i-2}\right) =K\left(\widetilde{F}\left(K^{-1}\left(E_{N}^{(i)}\left(y_{1}^{N},u_ {1,e}^{2i-2}\oplus u_{1,o}^{2i-2}\right)\right),K^{-1}\left(E_{N}^{(i)}\left(y_{ N+1}^{2N},u_{1,e}^{2i-2}\right)\right)\right)\right) \tag{46}\] \[E_{2N}^{(2i)}\left(y_{1}^{2N},u_{1}^{2i-1}\right) =K\left(\widetilde{G}\left(K^{-1}\left(E_{N}^{(i)}\left(y_{1}^{N},u _{1,e}^{2i-2}\oplus u_{1,o}^{2i-2}\right)\right),K^{-1}\left(E_{N}^{(i)}\left(y _{N+1}^{2N},u_{1,e}^{2i-2}\right)\right),u_{2i-1}\right)\right)\] (47) \[P_{U_{i}|U^{i-1},Y^{N}}\left(1|u_{1}^{i-1},y_{1}^{N}\right) =\widetilde{H}\left(K^{-1}\left(E_{N}^{(i)}\left(y_{1}^{N},u_{1}^ {i-1}\right)\right)\right). \tag{48}\]
We denote by \(E=K^{-1}\circ W_{1}^{(1)}:\mathcal{Y}\rightarrow\mathcal{E}\) the channel embedding, and Equations (46), (47), (48), define \(F,G,H\), as requested. Thus, we showed that \(P_{U_{i}|U^{i-1},Y^{N}}\left(1|u_{1}^{i-1},y^{N}\right)\) is computed by four distinct functions, which concludes the proof.
#### VII-B2 Proof of Lemma 2
Let \(L^{*}=-\log P_{U_{i}|U^{i-1},Y^{N}}\) and \(L_{\theta}=-U_{i}\log P_{\theta}\left(U^{i},Y^{N}\right)-\left(1-U_{i}\right) \log\left(1-P_{\theta}\left(U^{i},Y^{N}\right)\right)\). By definition, \(L_{\theta}\) is bounded by a constant \(R^{\prime}\). By construction \(\mathbb{E}\left[L^{*}\right]=\mathsf{H}\left(U_{i}|U^{i-1},Y^{N}\right)\). Fix \(\varepsilon>0\), and assume \(L^{*}\) is bounded by a constant \(R^{\prime\prime}>0\). Let \(R=\max\{R^{\prime},R^{\prime\prime}\}\). By the universal approximation theorem [18], there exists \(\hat{\theta}\in\Theta^{\prime}\) with \(L_{\hat{\theta}}\leq R\) such that
\[\left|L^{*}-L_{\hat{\theta}}\right|<\frac{\varepsilon}{2}. \tag{49}\]
For the case where \(L^{*}\) is unbounded, we first note that \(L^{*}\) is integrable since
\[\mathbb{E}\left[\left|L^{*}\right|\right]=\mathbb{E}\left[-\log P_{U_{i}|U^{i -1},Y^{N}}\right]\leq H(U_{i})\leq 1.\]
Therefore, by the dominated convergence theorem, there exists \(R>0\) such that
\[\mathbb{E}\left[\left|L^{*}\right|\mathbb{I}_{L^{*}>R}\right]<\frac{\varepsilon}{2}. \tag{50}\]
Combining Equations (49), (50) and the triangle inequality yields
\[\mathbb{E}\left[\left|L^{*}-L_{\hat{\theta}}\right|\right]\leq\mathbb{E}\left[ \left|L^{*}-L_{\hat{\theta}}\right|\mathbb{I}_{L^{*}\leq R}\right]+\mathbb{E} \left[\left|L_{\hat{\theta}}\right|\mathbb{I}_{L^{*}>R}\right]+\mathbb{E} \left[\left|L^{*}\right|\mathbb{I}_{L^{*}>R}\right]\leq\frac{\varepsilon}{2}+0+ \frac{\varepsilon}{2}=\varepsilon.\]
This implies that the difference between \(\mathsf{H}\left(U_{i}|U^{i-1},Y^{N}\right)\) and \(\mathsf{H}_{\Theta^{\prime}}\left(U_{i}|U^{i-1},Y^{N}\right)\) is at most \(\varepsilon\).
#### VII-B3 Proof of Lemma 3
According to the triangle inequality
\[\left|\mathsf{H}_{\Theta^{\prime}}\left(U_{i}|U^{i-1},Y^{N}\right)-\mathsf{H}_ {\Theta^{\prime}}^{M}\left(U_{i}|U^{i-1},Y^{N}\right)\Big{|}\leq\sup_{\theta \in\Theta^{\prime}}\Big{|}\mathbb{E}\left[L_{\theta}\right]-\frac{1}{M}\sum_{ j=1}^{M}L_{\theta}\Big{|}.\]
Since \(\Theta^{\prime}\) is compact and \(L_{\theta}\) is continuous, the family of functions \(L_{\theta}\) satisfy the uniform law of large numbers for stationary and ergodic processes [23, Theorem 3.1]. Therefore given \(\varepsilon>0\), there exists \(m\in\mathbb{N}\) such that for all \(M>m\)\(\mathbb{P}-a.s.\)
\[\left|\mathsf{H}_{\Theta^{\prime}}\left(U_{i}|U^{i-1},Y^{N}\right)-\mathsf{H}_ {\Theta^{\prime}}^{M}\left(U_{i}|U^{i-1},Y^{N}\right)\Big{|}<\frac{ \varepsilon}{2}.\]
### _Proof of Theorem 3_
The first Markov relation is straightforward as \(T^{N}\) is a function of \(Y^{N}\) and \(U^{N}=X^{N}G_{N}\). The second Markov relation is derived by showing that \(T^{N}\) is a sufficient statistic of \(Y^{N}\) for the estimation of \(X^{N}\), or equivalently, \(I\left(X^{N};Y^{N}\right)=I\left(X^{N};T^{N}\right)\). For \(x^{N},y^{N}\in\mathcal{X}^{N}\times\mathcal{Y}^{N}\), consider the following chain of equalities:
\[P_{X^{N},Y^{N}}\left(x^{n},y^{n}\right) =\prod_{i=1}^{N}P_{X_{i},Y_{i}|X^{i-1},Y^{i-1}}\left(x_{i},y_{i}|x ^{i-1},y^{i-1}\right)\] \[\stackrel{{(a)}}{{=}}\prod_{i=1}^{N}P_{X_{i}|X^{i-1}} \left(x_{i}|x^{i-1}\right)P_{Z}\left(y_{i}\right)\frac{P_{Y_{i}|X^{i},Y^{i-1}} \left(y_{i}|x^{i},y^{i-1}\right)}{P_{Z}\left(y_{i}\right)}\] \[\stackrel{{(b)}}{{=}}\exp\left(\log P_{Z}^{\otimes N }\left(y^{N}\right)\right)\exp\left(\sum_{i=1}^{N}\log P_{X_{i}|X^{i-1}} \left(x_{i}|x^{i-1}\right)\right)\] \[\quad\times\exp\left(\sum_{i=1}^{N}\log\frac{P_{Y_{i}|X^{i},Y^{i-1}} \left(y_{i}|x^{i},y^{i-1}\right)}{P_{Z}\left(y_{i}\right)}\right),\]
where (a) follows from the chain rule, the absence of output feedback, and \(P_{Y_{i}|X^{i},Y^{i-1}}\ll P_{Z}\); and (b) is a result of rearranging the terms into exponents. Next, we identify that \(P_{X^{N},Y^{N}}\left(x^{N},y^{N}\right)=h\left(y^{N}\right)g\left(t^{N}\left(y^{N}\right),x^{N}\right)\), where
\[h\left(y^{N}\right)\triangleq\exp\left(\log P_{Z}^{\otimes N}\left(y^{N}\right) \right),\]
\[g\left(t^{N},x^{N}\right)\triangleq\exp\left(\sum_{i=1}^{N}\log P_{X_{i}|X^{i-1}} \left(x_{i}|x^{i-1}\right)+t_{y^{i}}\left(x^{i}\right)\right).\]
This is exactly the factorization in the well-known Fisher-Neyman factorization theorem [27, 28], and thus \(T^{N}\) is a sufficient statistic of \(Y^{N}\) for the estimation of \(X^{N}\). Since \(U^{N}=X^{N}G_{N}\) is bijective, we conclude the theorem.
## VIII Conclusions and Future Work
This paper presents data-driven algorithms for the design of polar codes. It began with devising algorithms for symmetric input distributions and for memoryless channels. Next, it addressed the case of channels with memory. Here, we devised an NSC decoder that approximates the core elements of the SC decoder with NNs. We showed that the NSC decoder is consistent, in the sense that for sufficiently many samples of the channel inputs and outputs, the symmetric capacity of the effective bit channels is estimated with arbitrary precision. Next, we extended the methods to the case where the input distribution is not necessarily symmetric via the Honda-Yamamoto scheme.
We also showed the role of neural estimation methods in a data-driven design of polar codes. Specifically, for the case of memoryless channels the MINE algorithm provides a valid construction of the channel embedding. For channels with memory, the DINE algorithm may be used to construct the channel embedding. The importance of neural estimation methods comes from the fact that the channel embedding may be computed independently from the NSC decoder. It is visible in the case of memoryless channels; indeed, in this case the channel embedding is the only quantity that needs to be estimated, and the SC decoder for memoryless channels may be applied "as is". This sets one of the future research goals of this work - to learn a common NSC for all channels, and to devise an algorithm that estimates a channel embedding valid for the common NSC.
We demonstrated our approach on both memoryless channels and channels with memory. We showed that the proposed algorithms attain decoding errors similar to their analytic counterparts, whenever these exist. We also demonstrated our algorithm on channels for which there is no practical SC decoder in the literature. Another experiment, conducted on the ISI channel, emphasized the main advantage of the NSC over the SCT decoder: its computational complexity does not grow with the state alphabet of the channel. Our next steps would be to combine our methodology with existing algorithms in the field, such as list decoding [29]. We are also interested in using the polar coding framework for the problem of capacity estimation. This way, we will be able to estimate both the capacity and a capacity-achieving code with the same algorithm.
|
2309.04627 | Probabilistic Safety Regions Via Finite Families of Scalable Classifiers | Supervised classification recognizes patterns in the data to separate classes
of behaviours. Canonical solutions contain misclassification errors that are
intrinsic to the numerical approximating nature of machine learning. The data
analyst may minimize the classification error on a class at the expense of
increasing the error of the other classes. The error control of such a design
phase is often done in a heuristic manner. In this context, it is key to
develop theoretical foundations capable of providing probabilistic
certifications to the obtained classifiers. In this perspective, we introduce
the concept of probabilistic safety region to describe a subset of the input
space in which the number of misclassified instances is probabilistically
controlled. The notion of scalable classifiers is then exploited to link the
tuning of machine learning with error control. Several tests corroborate the
approach. They are provided through synthetic data in order to highlight all
the steps involved, as well as through a smart mobility application. | Alberto Carlevaro, Teodoro Alamo, Fabrizio Dabbene, Maurizio Mongelli | 2023-09-08T22:40:19Z | http://arxiv.org/abs/2309.04627v1 | # Probabilistic Safety Regions Via Finite Families of Scalable Classifiers
###### Abstract
Supervised classification recognizes patterns in the data to separate classes of behaviours. Canonical solutions contain misclassification errors that are intrinsic to the numerical approximating nature of machine learning. The data analyst may minimize the classification error on a class at the expense of increasing the error of the other classes. The error control of such a design phase is often done in a heuristic manner. In this context, it is key to develop theoretical foundations capable of providing probabilistic certifications to the obtained classifiers. In this perspective, we introduce the concept of probabilistic safety region to describe a subset of the input space in which the number of misclassified instances is probabilistically controlled. The notion of scalable classifiers is then exploited to link the tuning of machine learning with error control. Several tests corroborate the approach. They are provided through synthetic data in order to highlight all the steps involved, as well as through a smart mobility application.
Misclassification error control, probabilistic safety regions, scalable classifiers, statistical learning
## I Introduction and Problem Formulation
Safety-critical assessment of machine learning (ML) is currently one of the main issues in trustworthy artificial intelligence [1, 2]. The scope is to understand under which conditions autonomous operation may lead to hazards, in order to reduce to the minimum the risk of operating with detrimental effects on the human or the environment. Such an assessment is mandatory in several application domains, such as avionics [3], finance [4], healthcare [5], smart mobility [6], cybersecurity [7] as well as autonomous systems [1, 2, 8]. Informally speaking, the safety assurance of ML consists of building _guardrails_ around the autonomous decision in front of uncertainty [9]. This can be achieved with a combination of rigorous verification [10], design validation [11] or standardized safety analysis1. However, those approaches look at ML from the outside, by analyzing risks at system level. Intrinsically in ML, on the other hand, there is the possibility to link the search of classification boundaries with error control. More specifically, the error control may be built around the class of behavior it is desired to protect, e.g., collision avoidance, disease status, cyber attack in progress. This may change the theoretical approach to algorithm design considerably. In this respect, the guardrails are built here on the basis of _order statistics_[12], by incorporating them as probabilistic constraints of the model. The notion of _probabilistic safety region_ is first outlined. The region defines the subset of data samples under which the error is constrained. Second, a specific class of classifiers, named _scalable classifiers_, is introduced as a valuable tool to construct probabilistic safety regions. In particular, the optimal shape of the region may be obtained by setting the value of the parameters within the set of possible scalable classifiers.
Namely, scalable classifiers share the property of having a scaling parameter that can be adjusted to control both the classification boundary and the inherent error. Suitable classifiers are in this respect, for instance, support vector machines, support vector data description, logistic regression; but the class is very rich and several different and new scalable classifiers may be devised. Probabilistic scaling [13] drives the validation of parameters setting. Based on this scaling procedure, the proposed design methodology is capable of probabilistically guaranteeing a given error level valid for the largest possible region.
Footnote 1: [https://standards.ieee.org/practices/mobility/standards/](https://standards.ieee.org/practices/mobility/standards/).
### _Contribution_
More specifically, the contribution of this research is twofold. First, the family of _Scalable Classifiers_ (SCs) is studied as a new group of classifiers that share the property of being scalable, that is, of being controllable by a single scalar parameter (Section II-A). Then, the definition of _Probabilistic Safety Region_ (PSR) models the idea of giving probabilistic guarantees on the prediction of a classification (but in general also a regression) problem (Section II-B). The link between these two concepts is provided by _probabilistic scaling_, a state-of-the-art technique for providing probabilistic bounds based on the field of order statistics [14]. Moreover, we take into account the variability of the classifier due to the hyperparameter selection. It is known that the choice of the hyperparameters can dramatically affect the predictions of the model. Usually, their optimal setting is a hard task. To cope with this problem, we apply our probabilistic safety framework to classifiers obtained by _finite families_ of hyperparameters, selecting among this set of confident classifiers the one which optimizes a certain statistical index, for example minimizing the false positive rate of the classification (Section III).
The result is a totally new framework in statistical learning that shares the requirements of AI trustworthiness, i.e., it is reliable, safe and robust.
### _Related works_
The concept of _robust_ Machine Learning has several subtle meanings. Initially, overfitting was the main issue involved in robustness [15]. More recently, robustness passes through the capacity of the model to counter adversarial attacks [16], to handle data-privacy problems [17], to have good generalisation properties [18], and many other theoretical and practical challenges [19] facing the scientific community. Today, robustness is inseparable from safety, and the reason for this is entirely agreeable: algorithms that cannot handle data fluctuations or fail to provide sufficient levels of reliability can lead to risky situations for users. Safety and robustness are, therefore, fundamental requirements that cannot be separated.
In relation to the concept of safety [20], the idea of _Safety Region_ (SR) [21] deals with the identification of the regions of the input space that lead to predictions with the same guaranteed level of confidence. Currently, there are different methodologies addressing this topic: _Conformal Prediction_ (CP) [22], which deals with the discovery of the confidence interval of each prediction, thus giving a clear indication of the quality of the prediction. _Error Quantification_[23] provides a methodology to quantify uncertainty with probabilistic maximization. _Bayesian learning contexts_[24], which provide probabilistic guarantees by exploiting the degree of belief about the output. _Selective Classification_[25], in which the model can abstain from making a prediction and the goal is to minimize incorrect predictions while also minimizing abstentions. _Covariate shift_[26], where the goal is to learn a classifier based on samples from training and test that have different marginal distributions but share the same conditional labelling functions for all points. CP is one of the most recognised methods for confidence calculation. In order to meet the probabilistic guarantees, CP might assign more than a single class to a given point of the input space. The method proposed here is inspired by the procedure used in the computation of the confidence level sets of the SR.
Reformulation of classifiers to make them more robust or reliable is well established and defined. For example, the support vector machine (SVM) model (which we will show to be a scalable classifier) has been extensively studied (see [27, 28, 29, 30, 31]).
Recent approaches deal with finding regions with fixed false negative or positive rates, by means of sensitivity analysis of SVDD in [32], and of Boolean rules in [33], but they disregard the confidence interval of the resulting regions.
Understanding the concept of confidence more broadly, the following approaches can also be considered. _Out of distribution detection_[34], where statistical tests are performed to assess if predictions are performed in conditions different from what was learned from the training stage. _Counterfactual eXplanation_[35, 36], which finds the minimum feature variation in order to change the predicted class.
In summary, the control of the confidence of the model [37] is the goal we pursue in this paper. The proposed methodology is suitable to address the potential variations in the underlying probability distribution of the data.
The introduction of the scalable family of classifiers makes it possible to approximate the PSR while maintaining the same level of confidence. Together with probabilistic scaling, scalable classifiers provide a new framework for compliant machine learning. What is new about this approach is that probabilistic assurance is provided in the design phase of the classifier, along with the calibration of the model. This is a key point because in the state of the art all methods that address the problem of providing probabilistic guarantees in prediction are based on a posteriori approaches.
### _Notation and order statistics concepts_
Given an integer \(n\), \([n]\) denotes the integers from 1 to \(n\). Given \(x\in\mathbb{R}\), \(\lfloor x\rfloor\) denotes the greatest integer no larger than \(x\) and \(\lceil x\rceil\) the smallest integer no smaller than \(x\). The set of non-negative reals is denoted \(\mathbb{R}_{+}\). Given integers \(k,n\), and parameter \(\varepsilon\in(0,1)\), the Binomial cumulative distribution function is denoted as
\[\mathbf{B}(k;n,\varepsilon)\doteq\sum_{i=0}^{k}\binom{n}{i}\varepsilon^{i}(1- \varepsilon)^{n-i}.\]
\(\Pr\{A\}\) denotes the probability of the event \(A\).
The following definition is borrowed from the field of order statistics [14, 23].
**Definition** (Generalized Max).: _Given a collection of \(n\) scalars \(\Gamma=\{\gamma_{i}\}_{i=1}^{n}\in\mathbb{R}^{n}\), and an integer \(r\in[n]\), we denote by_
\[\max^{(r)}(\Gamma)\]
_the \(r\)-smallest value of \(\Gamma\), so that there are no more than \(r-1\) elements of \(\Gamma\) strictly larger than \(\max^{(r)}(\Gamma)\)._
We will often refer to \(r\) as a _discarding parameter_, since the generalized max can be interpreted as a classical maximum after the largest \(r-1\) points are discarded. Indeed, to construct \(\max^{(r)}(\Gamma)\) it is sufficient to order the elements of \(\Gamma\) as \(\{\gamma_{(1)},\gamma_{(2)},\ldots,\gamma_{(n)}\}\) so that
\[\gamma_{(1)}\geq\gamma_{(2)}\geq\cdots\geq\gamma_{(n)}.\]
Then, we let \(\max^{(r)}(\Gamma)\doteq\gamma_{(r)}\).
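As an operational illustration (ours, not part of the original paper), the following Python sketch computes \(\max^{(r)}\) exactly as described above, by sorting in non-increasing order and picking the \(r\)-th element; the function name is purely illustrative.

```python
import numpy as np

def generalized_max(values, r):
    """r-smallest of the values sorted in non-increasing order: the classical
    maximum after discarding the r-1 largest elements (Generalized Max)."""
    values = np.asarray(values, dtype=float)
    assert 1 <= r <= values.size, "r must lie in [1, n]"
    ordered = np.sort(values)[::-1]   # gamma_(1) >= gamma_(2) >= ... >= gamma_(n)
    return ordered[r - 1]

# Example: with r = 1 the generalized max is the usual maximum.
gamma = [0.3, 1.7, -0.2, 0.9, 1.1]
print(generalized_max(gamma, r=1))  # 1.7
print(generalized_max(gamma, r=3))  # 0.9
```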
The following result, see Property 3 in [14], states how to obtain a probabilistic upper bound of a random scalar variable by means of the notion of generalized max. This result has been used in the context of uncertainty quantification [23] and chance-constrained optimization [13, 38], and plays a key role in our successive developments.
**Property 1** (Scaling factor [14]).: _Given probabilistic parameters \(\varepsilon\in(0,1)\), \(\delta\in(0,1)\) and a discarding parameter \(r\geq 1\), let \(n\geq r\) be chosen such that_
\[\mathbf{B}(r-1;n,\varepsilon)\leq\delta. \tag{1}\]
_Suppose that \(\gamma\in\mathbb{R}\) is a random scalar variable with probability distribution \(\mathcal{W}\). Draw \(n\) i.i.d. samples \(\{\gamma_{i}\}_{i=1}^{n}\) from distribution \(\mathcal{W}\). Then, with a probability no smaller than \(1-\delta\),_
\[\Pr_{\mathcal{W}}\Big{\{}\gamma>\max^{(r)}(\{\gamma_{i}\}_{i=1}^{n})\Big{\}} \leq\varepsilon.\]
In words, this result shows that, if the number of points is chosen large enough, the generalized max constitutes, with very high probability, a good approximation of the true maximum, in the sense that the probability of obtaining a larger value can be bounded a priori.
The following corollary, proved in this form in [13], provides a way to explicitly bound the number of samples \(n\) (i.e. the so-called _sample complexity_), by "fixing" the discarding parameter \(r\) to be a percentage of the total number of samples, i.e. letting \(r=\beta\varepsilon n\).
**Corollary 1.1** (Explicit bound [13]).: _Let \(r=\lceil\beta\varepsilon n\rceil\), where \(\beta\in(0,1)\), and define the quantity_
\[\kappa\doteq\left(\frac{\sqrt{\beta}+\sqrt{2-\beta}}{\sqrt{2}(1-\beta)} \right)^{2}.\]
_Then, inequality (1) is satisfied for_
\[n\geq\frac{\kappa}{\varepsilon}\ln\frac{1}{\delta}. \tag{2}\]
_Specifically, the choice \(\beta=0.5\) leads to \(r=\left\lceil\frac{\varepsilon n}{2}\right\rceil\) and \(n\geq\frac{7.47}{\varepsilon}\ln\frac{1}{\delta}\)._
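A small numerical sketch of the corollary (our illustration; it assumes SciPy is available for the Binomial cumulative distribution function) computes the sample complexity \(n\) and the discarding parameter \(r\) from \(\varepsilon\), \(\delta\) and \(\beta\), and checks condition (1) a posteriori.

```python
import math
from scipy.stats import binom

def sample_complexity(eps, delta, beta=0.5):
    """Explicit bound (2): return n and r = ceil(beta * eps * n) so that
    B(r-1; n, eps) <= delta, per Corollary 1.1."""
    kappa = ((math.sqrt(beta) + math.sqrt(2.0 - beta))
             / (math.sqrt(2.0) * (1.0 - beta))) ** 2
    n = math.ceil(kappa / eps * math.log(1.0 / delta))
    r = math.ceil(beta * eps * n)
    return n, r

eps, delta = 0.05, 1e-6
n, r = sample_complexity(eps, delta)
# Sanity check of condition (1) via the Binomial cdf.
assert binom.cdf(r - 1, n, eps) <= delta
print(n, r)
```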
## II Scalable Classifiers and Safe Sets
Classification algorithms assign to a given input vector \(\mathbf{x}\in\mathcal{X}\) a single label \(y\in\mathcal{Y}\) based on certain patterns in the data. We here consider the case of _binary_ classification, with classes \(y\in\{+1,-1\}\). Note that this choice of labels is without loss of generality, since any binary classifier can be converted to these labels. In our context, we assume that the label \(+1\) denotes the target class \(\mathrm{S}\), which is to be interpreted as "safe" configurations. The label \(-1\) refers instead to the non-target class \(\mathrm{U}\) (i.e. unsafe configuration). We write \(\mathbf{x}\,\hat{=}\,\mathrm{S}\) to denote the fact that the vector \(\mathbf{x}\) is safe, i.e. it has "true" label \(+1\). Similarly, \(\mathbf{x}\,\hat{=}\,\mathrm{U}\) denotes that \(\mathbf{x}\) is unsafe.
The main goal of our approach is to design, based on observations, a _safety region_, i.e. a region \(\mathcal{S}\) of the feature space \(\mathcal{X}\) for which we have a guarantee that the probability of being unsafe is not larger than a given _risk_ level \(\varepsilon\in(0,1)\). More formally, we consider a probabilistic framework, and assume that the observations come from a fixed probability distribution. Then, for a given _risk_ level \(\varepsilon\in(0,1)\), we are interested in constructing a _Probabilistic Safety Region_ (PSR), denoted by \(\mathcal{S}_{\varepsilon}\), satisfying
\[\Pr\Bigl{\{}\mathbf{x}\,\hat{=}\,\mathrm{U}\text{ and }\mathbf{x}\in\mathcal{S}_{ \varepsilon}\Bigr{\}}\leq\varepsilon. \tag{3}\]
In words, a PSR \(\mathcal{S}_{\varepsilon}\subseteq\mathcal{X}\) is a set such that the probability of jointly observing the events \(\mathbf{x}\,\hat{=}\,\mathrm{U}\) and \(\mathbf{x}\in\mathcal{S}_{\varepsilon}\) is lower than or equal to \(\varepsilon\).
This paper provides a general approach to constructing such sets while maximizing a given performance index (e.g., size). We rely on a two-level probability framework in which the final provided set is a probabilistic safety region with a (prescribed) level of probability. To generate such probabilistic safety regions, we introduce a special (but rather general) class of classifiers, which we refer to as _scalable classifiers_ (SCs). Namely, SCs are classifiers whose formulation can be made to explicitly depend on a _scaling parameter_\(\rho\in\mathbb{R}\). The parameter \(\rho\) allows the classification boundary to be adjusted dynamically: changing \(\rho\) causes the classifier to widen, shrink, or change shape completely. With some abuse of notation, we can think of the different values of \(\rho\) as different "level sets" of the classification function.
### _Scalable Classifiers_
Formally, we consider binary classifiers which can be formulated as follows
\[\phi_{\mathbf{\theta}}(\mathbf{x},\rho)\doteq\begin{cases}+1&\text{if }\ f_{\mathbf{\theta}}(\mathbf{x}, \rho)<0,\\ -1&\text{otherwise.}\end{cases} \tag{4}\]
where the function \(f_{\mathbf{\theta}}:\mathcal{X}\times\mathbb{R}\longrightarrow\mathbb{R}\) is the so-called _classifier predictor_. Note that, in the above notation, we highlight the fact that \(f_{\mathbf{\theta}}\) may also depend on a second set of parameters \(\mathbf{\theta}\in\mathbb{R}^{n_{\mathbf{\theta}}}\). The vector \(\mathbf{\theta}=[\mathbf{\theta}_{1},\cdots,\mathbf{\theta}_{n_{\mathbf{\theta}}}]^{\top}\) collects the so-called classifier _hyperparameters_, that is, all those parameters to be adjusted in the model (e.g. different choices of kernel, regularization parameters, etc.). Obviously, a different choice of \(\mathbf{\theta}\) corresponds to a possibly very different classifier. The role of different choices of \(\mathbf{\theta}\) in the construction of the classifier is extremely important, and will be discussed in Section III.
Note that \(\phi_{\mathbf{\theta}}(\mathbf{x},\rho)\) may be interpreted as the "tentative" (or predicted) label associated to the point \(\mathbf{x}\) by the classifier. In the sequel, with some slight abuse of notation, we will sometimes refer to \(f_{\mathbf{\theta}}\) as the classifier itself.
As we will see in the following derivations, the scaling parameter \(\rho\) plays a key role for this particular family of classifiers. Specifically, in order to define a SC, we require that the \(\rho\)-parametrized classifier \(\phi_{\mathbf{\theta}}(\mathbf{x},\rho)\) satisfies some special "ordering" condition, as defined next.
**Assumption 1** (Scalable Classifier).: _We assume that for every \(\mathbf{x}\in\mathcal{X}\), \(f_{\mathbf{\theta}}(\mathbf{x},\rho)\) is a continuous and monotonically increasing function on \(\rho\), i.e._
\[\rho_{1}>\rho_{2}\ \Rightarrow f_{\mathbf{\theta}}(\mathbf{x},\rho_{1})>f_{\mathbf{\theta}}( \mathbf{x},\rho_{2}),\quad\forall\mathbf{x}\in\mathcal{X}. \tag{5}\]
_We assume also that_
\[\lim_{\rho\rightarrow-\infty}f_{\mathbf{\theta}}(\mathbf{x},\rho)<0<\lim_{\rho \rightarrow\infty}f_{\mathbf{\theta}}(\mathbf{x},\rho),\quad\forall\mathbf{x}\in\mathcal{X}. \tag{6}\]
**Property 2** (Boundary radius).: _Suppose that Assumption 1 holds. Then, for each \(\mathbf{x}\in\mathcal{X}\), there exists a unique \(\bar{\rho}(\mathbf{x})\)
_satisfying \(f_{\mathbf{\theta}}(\mathbf{x},\bar{\rho}(\mathbf{x}))=0\). Moreover, the classifier \(\phi_{\mathbf{\theta}}(\mathbf{x},\rho)\) given by (4) satisfies_
\[\phi_{\theta}(\mathbf{x},\rho)=-1\Leftrightarrow\rho\geq\bar{\rho}(\mathbf{x}).\]
The proof of Property 2 is available in Appendix A. In this paper, and under Assumption 1, we denote \(\bar{\rho}(\mathbf{x})\) the unique solution (see Property 2) to the equation
\[f_{\mathbf{\theta}}(\mathbf{x},\rho)=0.\]
In words, a scalable classifier is a classifier for which, given \(\mathbf{x}\), there is always a value of \(\rho\), denoted \(\bar{\rho}(\mathbf{x})\), that establishes the border between the two classes. Therefore, a SC is a classifier that maintains the target class of a given feature vector \(\mathbf{x}\) under a decrease of \(\rho\). We also remark that this behaviour is implied by condition (5). Indeed, for a given \(\tilde{\mathbf{x}}\in\mathcal{X}\) and \(\rho_{1}>\rho_{2}\), if \(f_{\mathbf{\theta}}(\tilde{\mathbf{x}},\rho_{1})<0\) (i.e. \(\phi_{\mathbf{\theta}}(\tilde{\mathbf{x}},\rho_{1})=+1\)) then \(f_{\mathbf{\theta}}(\tilde{\mathbf{x}},\rho_{2})<f_{\mathbf{\theta}}(\tilde{\mathbf{x}},\rho_ {1})<0\) (i.e. \(\phi_{\mathbf{\theta}}(\tilde{\mathbf{x}},\rho_{2})=+1\)).
The next property shows that any standard binary classifier can be rendered scalable by simply including the scaling parameter \(\rho\) in an additive way.
**Property 3**.: _Consider the function \(\hat{f}_{\mathbf{\theta}}:\mathcal{X}\to\mathds{R}\) and its corresponding classifier_
\[\hat{\phi}_{\mathbf{\theta}}(\mathbf{x})\doteq\begin{cases}+1&\text{ if }\hat{f}_{\mathbf{ \theta}}(\mathbf{x})<0,\\ -1&\text{ otherwise.}\end{cases}\]
_Then, the function \(f_{\mathbf{\theta}}(\mathbf{x},\rho)=\hat{f}_{\mathbf{\theta}}(\mathbf{x})+\rho\) satisfies Assumption 1 and thus provides the scalable classifier_
\[\phi_{\mathbf{\theta}}(\mathbf{x},\rho)\doteq\begin{cases}+1&\text{ if }f_{\mathbf{ \theta}}(\mathbf{x},\rho)<0,\\ -1&\text{ otherwise.}\end{cases}\]
Proof.: The result is trivial because \(f_{\mathbf{\theta}}(\mathbf{x},\rho)=\hat{f}_{\mathbf{\theta}}(\mathbf{x})+\rho\) is clearly a continuous and monotonically increasing function on \(\rho\). It is also straightforward to check that (6) is satisfied.
The next example illustrates the use of the previous property to obtain a scalable classifier from a standard linear classifier. In subsection II-C we present other examples of scalable classifiers and we show that the scaling parameter \(\rho\) does not need necessarily to appear in an additive way into \(f_{\mathbf{\theta}}(\mathbf{x},\rho)\) to obtain a SC.
**Example II.1** (Linear classifier as scalable classifier).: _Consider the standard linear classifier defined by means of the function_
\[\hat{f}_{\mathbf{\theta}}(\mathbf{x})=\mathbf{w}^{\top}\mathbf{x}-b.\]
_The classifier elements \(\mathbf{w},b\) may be obtained, for instance, as the solution of a SVM problem of the form_
\[\min_{\mathbf{w},b}\frac{1}{2\eta}\mathbf{w}^{\top}\mathbf{w}+\frac{1}{2}\sum_{i=1}^{n} \max\left\{0,1+y_{i}(\mathbf{w}^{\top}\varphi(\mathbf{x}_{i})-b)\right\},\]
_and notice that we are not using the usual form of the hinge loss with a minus in front of \(y_{i}\) (\(\max\left\{0,1-y_{i}(\mathbf{w}^{\top}\varphi(\mathbf{x}_{i})-b)\right\}\)) since we would like \(\hat{f}_{\mathbf{\theta}}(\mathbf{x})\) to be negative for \(y=+1\) and positive otherwise. In this case, the classifier depends on the choice of the regularization term \(\eta\), and of the specific regressor functional \(\varphi(\cdot)\). That is, for a fixed choice of regressor, the hyperparameter vector is just the scalar \(\mathbf{\theta}=\eta\). In this sense, we remark that a more rigorous notation would be \(\mathbf{w}=\mathbf{w}(\mathbf{\theta})\) and \(b=b(\mathbf{\theta})\), but we omit this dependence for the sake of readability._
_As stated in the proof of Property 3, it is immediate to observe that linear classifiers belong indeed to the class of scalable classifiers if we introduce a scaling parameter \(\rho\) in an additive way, that is_
\[f_{\mathbf{\theta}}(\mathbf{x},\rho)=\mathbf{w}^{\top}\mathbf{x}-b+\rho. \tag{7}\]
_Indeed, given \(\rho_{1}>\rho_{2}\) we immediately have that_
\[\mathbf{w}^{\top}\mathbf{x}-b+\rho_{1}>\mathbf{w}^{\top}\mathbf{x}-b+\rho_{2},\quad\forall\mathbf{ x}\in\mathcal{X},\]
_and it is straightforward to see that also (6) holds._
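The construction of Property 3 and of this example can be summarized in a few lines of Python (an illustrative sketch, not the authors' code; the weights \(\mathbf{w},b\) below are placeholders standing for the solution of the SVM problem above, and the function names are ours).

```python
import numpy as np

def make_scalable(f_hat):
    """Property 3: wrap a fitted predictor f_hat (negative => predicted +1,
    i.e. safe) into a scalable classifier f(x, rho) = f_hat(x) + rho."""
    f = lambda X, rho: f_hat(X) + rho
    phi = lambda X, rho: np.where(f(X, rho) < 0.0, +1, -1)  # classifier (4)
    rho_bar = lambda X: -f_hat(X)   # unique root of f(x, rho) = 0 (Property 2)
    return f, phi, rho_bar

# Toy usage with the linear predictor of Example II.1: f_hat(x) = w^T x - b.
w, b = np.array([1.0, -0.5]), 0.2   # placeholders for a fitted SVM solution
f, phi, rho_bar = make_scalable(lambda X: X @ w - b)
X = np.array([[0.1, 0.3], [2.0, -1.0]])
print(phi(X, rho=0.0))   # [ 1 -1]
print(rho_bar(X))        # [ 0.25 -2.3 ]: boundary radius of each point
```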
### _Main Result: Probabilistic Safety Regions_
Consider a given SC classifier \(f_{\mathbf{\theta}}(\mathbf{x},\rho)\) (i.e. a classifier designed considering a specific choice2 of hyperparameter \(\mathbf{\theta}\)). Then, for a given value of the scaling parameter \(\rho\in\mathds{R}\), we define the \(\rho\)_-safe set_
Footnote 2: Note that in this section, we assume \(\mathbf{\theta}\) to be fixed and given. Section III discusses in detail how the possibility of choosing \(\mathbf{\theta}\) may be exploited to improve the SC.
\[\mathcal{S}(\rho)=\{\,\mathbf{x}\in\mathcal{X}\,:\,f_{\mathbf{\theta}}(\mathbf{x},\rho)<0 \,\},\]
which represents the set of points \(\mathbf{x}\in\mathcal{X}\) predicted as safe by the classifier for the specific choice of \(\rho\), i.e. the safety region of the classifier \(f_{\mathbf{\theta}}\) for given \(\rho\).
Note that, contrary to the similar concept introduced in [13], the interpretation of \(\rho\) as a "radius" is no longer valid. That is, \(\rho\) shall be viewed as a _safety-inducing parameter_: the larger \(\rho\), the more stringent the requirements and thus the smaller the corresponding \(\rho\)-safe set. Also, note that \(\rho\in\mathds{R}\), hence it can take negative values.3 The reason why we prefer the current formulation is that it allows us to draw a clear parallel with the results in conformal prediction, see Remark V.1. Indeed, following the reasoning above, the larger \(\rho\), the _smaller_ the region; in other words,

\[\rho_{1}>\rho_{2}\Longrightarrow\mathcal{S}(\rho_{1})\subset\mathcal{S}(\rho_ {2}).\]

Footnote 3: One may salvage the “radius” interpretation by introducing a new parameter \(\bar{\rho}=-\rho\), but we prefer not to do this because this choice would complicate the ensuing derivations.
This behavior is depicted in Fig. 1.
In the next section, for completeness, we present some notable examples of well-assessed classifiers which can be reformulated in a way so that they belong to the SC family. We now introduce the main result of the paper, which is a simple procedure to obtain, from a calibration set \(\mathcal{Z}_{c}\doteq\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n_{c}}\), a probabilistic safety region \(\mathcal{S}_{\varepsilon}\) that with a probability no smaller than \(1-\delta\) satisfies the probability constraint
\[\Pr\Bigl{\{}y=-1\text{ and }\mathbf{x}\in\mathcal{S}_{\varepsilon}\Bigr{\}}\leq\varepsilon.\]
We will assume that the pair \((\mathbf{x},y)\) is a random variable and that \(\Pr\{\mathbf{x}\in\mathcal{X}\}=1\). Moreover, the \(n_{c}\) samples of \(\mathcal{Z}_{c}\) are assumed i.i.d..
**Theorem II.1** (Probabilistic Safety Region).: _Consider the classifier (4), and suppose that Assumption 1 holds and that \(\Pr\{\mathbf{x}\in\mathcal{X}\}=1\). Given a calibration set \(\mathcal{Z}_{c}\doteq\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n_{c}}\) (\(n_{c}\) i.i.d. samples), suppose that \(\delta\in(0,1)\), \(\varepsilon\in(0,1)\), and the integer discarding parameter \(r\) satisfies \(n_{c}\geq r\geq 1\), and_
\[\mathbf{B}(r-1;n_{c},\varepsilon)\leq\delta.\]
_Consider the subset \(\mathcal{Z}_{c}^{U}=\{(\tilde{\mathbf{x}}_{j}^{U},-1)\}_{j=1}^{n_{U}}\) corresponding to all the unsafe samples in \(\mathcal{Z}_{c}\) and define the probabilistic scaling of level \(\varepsilon\) as follows_
\[\rho_{\varepsilon}\doteq\max^{(r)}\left(\{\bar{\rho}(\tilde{\mathbf{ x}}_{j}^{U})\}_{j=1}^{n_{U}}\right), \tag{8}\]
_and define the corresponding \(\rho_{\varepsilon}\)-safe set_
\[\mathcal{S}_{\varepsilon} \doteq \left\{\begin{array}{ll}\mathcal{S}\left(\rho_{\varepsilon} \right)&\mbox{ if }n_{U}\geq r\\ \mathcal{X}&\mbox{ otherwise.}\end{array}\right.\]
_Then, with probability no smaller than \(1-\delta\),_
\[\Pr\Bigl{\{}y=-1\mbox{ and }\mathbf{x}\in\mathcal{S}_{\varepsilon}\Bigr{\}}\leq\varepsilon. \tag{9}\]
The proof of Theorem II.1 is available in Appendix B.
From Corollary 1.1, it is easy to see that the smaller \(\varepsilon\), the larger the resulting generalized max, i.e. the larger \(\rho_{\varepsilon}\).
In this view, \(\rho_{\varepsilon}\) is in line with the idea of a safety-inducing parameter discussed previously, since it operationally encodes the probabilistic guarantee required by \(\varepsilon\).
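The following Python sketch (ours, purely illustrative) mirrors the statement of the theorem: it computes \(\rho_{\varepsilon}\) from the boundary radii of the unsafe calibration points and tests membership in \(\mathcal{S}_{\varepsilon}\). It assumes a scalable classifier is available, here through the additive construction of Property 3, and that \((n_{c},r)\) already satisfy condition (1), e.g. via Corollary 1.1; all names and numbers are placeholders.

```python
import numpy as np

def probabilistic_scaling(rho_bar_unsafe, r):
    """Eq. (8): rho_eps is the generalized max of order r of the boundary
    radii of the unsafe calibration points; None encodes S_eps = X (n_U < r)."""
    radii = np.sort(np.asarray(rho_bar_unsafe, dtype=float))[::-1]
    if radii.size < r:
        return None
    return radii[r - 1]

def in_psr(f, x, rho_eps):
    """Membership in S_eps = { x : f(x, rho_eps) < 0 } (whole space if None)."""
    return True if rho_eps is None else bool(f(x, rho_eps) < 0.0)

# Toy usage with the additive construction f(x, rho) = f_hat(x) + rho,
# for which rho_bar(x) = -f_hat(x).
f_hat = lambda X: np.asarray(X) @ np.array([1.0, 1.0])   # assumed fitted predictor
f = lambda X, rho: f_hat(X) + rho
rng = np.random.default_rng(0)
X_unsafe = rng.normal(loc=1.0, size=(500, 2))             # unsafe calibration points
rho_eps = probabilistic_scaling(-f_hat(X_unsafe), r=13)   # r assumed to satisfy (1)
print(rho_eps, in_psr(f, np.array([[-2.0, -2.0]]), rho_eps))
```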
### _Examples of Scalable Classifiers_
As stated by Property 3, any standard classifier can be converted into a scalable one by including the scaling parameter \(\rho\) in an additive way. Using this scheme, we present in this section families of scalable classifiers obtained from Support Vector Machines and Support Vector Data Description classifiers. Additionally, we also show how to obtain a scalable classifier from the Logistic Regression classifier by including the parameter \(\rho\) in a non-additive manner. In the following examples, we assume we are given a learning set
\[\mathcal{Z}_{L}\doteq\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\subseteq \mathcal{X}\times\{-1,+1\}\]
containing observed feature points and corresponding labels \(\mathbf{z}_{i}=(\mathbf{x}_{i},y_{i})\). We then introduce the kernel notation (see e.g. [39]). In particular, letting
\[\varphi:\mathcal{X}\longrightarrow\mathcal{V}\]
be a _feature map_ (where \(\mathcal{V}\) is an inner product space) we define
\[\Phi = \left[\begin{array}{cccc}\varphi(\mathbf{x}_{1})&\varphi(\mathbf{x}_{2})&\ldots&\varphi(\mathbf{x}_{n})\end{array}\right], \tag{10}\]
\[D = \mathrm{diag}\{y_{1},y_{2},\ldots,y_{n}\}, \tag{11}\]
\[K = \Phi^{\top}\Phi, \tag{12}\]
with \(K_{i,j}=K(\mathbf{x}_{i},\mathbf{x}_{j})=\varphi(\mathbf{x}_{i})^{\top}\varphi(\mathbf{x}_{j }),\ i\in[n],j\in[n]\) the kernel matrix.
The models considered and their derivations are entirely classical. However, since we are interested in scalable classifiers with guaranteed safety, for each model we will consider two hyperparameters, i.e. we will set \(\mathbf{\theta}=[\eta,\,\tau]^{\top}\), where besides the classical regularization parameter \(\eta\in\mathbb{R}\) we introduce a weighting term \(\tau\in(0,1)\) that penalizes misclassification errors (the role of \(\tau\) is much in the spirit of the quantile regression formulation [40]).
#### Iii-C1 Scalable SVM
SVM is the simplest extension of a linear model and indeed we define its classifier predictor as
\[\hat{f}_{\mathbf{\theta}}(\mathbf{x})=\mathbf{w}^{\top}\varphi(\mathbf{x})-b.\]
The SVM formulation we adopt is the classical one proposed by Vapnik in [41], with the addition of the weighting parameter \(\tau\):
\[\min_{\mathbf{w},b,\xi_{1},\ldots,\xi_{n}} \frac{1}{2\eta}\mathbf{w}^{\top}\mathbf{w}+\frac{1}{2}\sum_{i=1}^{n}\left((1-2\tau)y_{i}+1\right)\xi_{i} \tag{13}\] \[\mbox{s.t.}\quad y_{i}(\mathbf{w}^{\top}\varphi(\mathbf{x}_{i})-b)\leq\xi_{i}-1,\ i\in[n],\] \[\xi_{i}\geq 0,\ i\in[n].\]
We explicitly report the formulation in (13) since our specific definition of the classifier, which requires \(f_{\mathbf{\theta}}(\mathbf{x}_{i})\) to be negative when \(y_{i}\) equals +1, leads to a slightly different formulation than the classical one.
The offset \(b\) can be found exploiting special feature points \(\mathbf{x}_{s}\) called _support vectors_ that are such that \(\varphi(\mathbf{x}_{s})\) lies on the boundary of the transformed space. The addition of the scaling parameter \(\rho\) changes the model in
\[f_{\mathbf{\theta}}(\mathbf{x},\rho) = \mathbf{w}^{\top}\varphi(\mathbf{x})-b+\rho. \tag{14}\]
We observe that, for the linear kernel, the variation of \(\rho\) is simply a rigid translation of the classification hyperplane; for other kernels, for example the Gaussian kernel or the polynomial kernel, the effect is the "deflation" or the "inflation" of the classification boundary. The composition with the feature map does not affect the scalability property of the linear classifier, so it is easy to verify from the considerations made in Example II.1 that scalable SVM indeed satisfies Assumption 1 (see also Property 3).
**Remark II.1** (On the role of the \(\tau\) parameter).: _Indeed, it is easy to see that small values of \(\tau\) add more weight to the class \(+1\), which is the class we are interested in. So, the choice of a "good" value of \(\tau\) is particularly important. This will be discussed in Section III, where the possibility of considering several values for this parameter in the context of our approach is discussed in detail._

Figure 1: Graphical depiction of the role of the scaling parameter. The blue circles represent safe points \(\mathbf{x}\doteq\mathrm{S}\), while the red crosses represent unsafe ones, \(\mathbf{x}\doteq\mathrm{U}\).
#### Iv-A2 Scalable SVDD
SVDD was introduced in [42] based on the idea of classifying the feature vectors by enclosing the target points (in the kernel space) in the smallest hypersphere of radius \(R\) and center \(\mathbf{w}\). With this idea, we define the scalable classifier predictor for SVDD as
\[f_{\mathbf{\theta}}(\mathbf{x},\rho)=\left\|\varphi(\mathbf{x})-\mathbf{w}\right\|^{2}-(R^{2}- \rho), \tag{15}\]
where \(\mathbf{w},R\) are obtained as the solution of the following weighted optimization problem
\[\min_{\mathbf{w},R,\xi_{1},\ldots,\xi_{n}} \frac{1}{2\eta}R^{2}+\frac{1}{2}\sum_{i=1}^{n}\left((1-2\tau)y_{i} +1\right)\xi_{i}\] (16) s.t. \[y_{i}\left(\left\|\varphi(\mathbf{x}_{i})-\mathbf{w}\right\|^{2}-R^{2} \right)\leq\xi_{i},\;i\in[n],\] \[\xi_{i}\geq 0,\;i\in[n]\]
that, again, depends on the hyperparameters \(\mathbf{\theta}=[\eta,\,\tau]^{\top}\), playing the role of regularization and misclassification parameters. As for the SVM model, the radius \(R\) is retrieved via the support vectors, i.e. feature points lying on the hypersphere boundary in the kernel space. It is immediate to observe that an increase in \(\rho\) results in a smaller radius \(\tilde{R}^{2}=R^{2}-\rho\), consistent with the interpretation of \(\rho\) as a safety-inducing parameter. Indeed, the scalable SVDD classifier predictor (15) clearly satisfies equations (5) and (6).
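For illustration, a minimal Python sketch of the scalable SVDD predictor follows; the centre \(\mathbf{w}\) and radius \(R\) are hard-coded placeholders standing for the solution of problem (16), and the identity feature map is used in place of a kernel.

```python
import numpy as np

# Placeholders for the solution of the SVDD problem (16), identity feature map.
w_center = np.array([0.0, 0.0])   # hypersphere centre in feature space
R = 1.5                           # hypersphere radius

def f_svdd(X, rho):
    """Scalable SVDD predictor (15): ||phi(x) - w||^2 - (R^2 - rho)."""
    return np.sum((X - w_center) ** 2, axis=1) - (R ** 2 - rho)

def rho_bar_svdd(X):
    """Boundary radius: the unique rho with f(x, rho) = 0."""
    return R ** 2 - np.sum((X - w_center) ** 2, axis=1)

X = np.array([[0.5, 0.5], [2.0, 2.0]])
print(f_svdd(X, rho=0.0) < 0)   # [ True False]: inside vs outside the sphere
print(rho_bar_svdd(X))          # larger rho_bar => deeper inside the safe set
```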
#### Iv-A3 Scalable Logistic Regression
Logistic Regression (LR) classifies points \(\mathbf{x}\in\mathcal{X}\) on the basis of the probability expressed by the logistic function
\[\frac{1}{1+e^{\mathbf{w}^{\top}\varphi(\mathbf{x})-b}} = \mathrm{Pr}_{\mathbf{\theta}}\left\{y=+1\mid\mathbf{x}\right\}\] \[= 1-\mathrm{Pr}_{\mathbf{\theta}}\left\{y=-1\mid\mathbf{x}\right\},\]
where \(\mathbf{w}\) and \(b\) minimize the regularized negative log-likelihood
\[\mathrm{L}\left(\mathbf{w},b\mid\mathbf{x},y\right)=\frac{1}{2\eta}\mathbf{w }^{\top}\mathbf{w}\] \[+\frac{1}{2}\sum_{i=1}^{n}\left((1-2\tau)y_{i}+1\right)\text{log} \left(1+e^{y_{i}\left(\mathbf{w}^{\top}\varphi(\mathbf{x}_{i})-b\right)}\right),\]
with \(b\) explicitly computed with the support vectors of the model. Note that, differently from classical LR and in the spirit of the previously described approaches, we introduce into the cost function the weight parameter \(\tau\in(0,1)\) to penalize misclassification; we also consider the regularization parameter \(\eta>0\). In this case, defining
\[\hat{f}_{\mathbf{\theta}}(\mathbf{x})=\frac{1}{2}-\frac{1}{1+e^{\mathbf{w}^{\top}\varphi( \mathbf{x})-b}},\]
the standard LR classifier is given by
\[\hat{\phi}_{\mathbf{\theta}}(\mathbf{x})\doteq\begin{cases}+1&\quad\text{if}\;\;\hat {f}_{\mathbf{\theta}}(\mathbf{x})<0,\\ -1&\quad\text{otherwise}.\end{cases}\]
We now show that the following function
\[f_{\mathbf{\theta}}(\mathbf{x},\rho)=\frac{1}{2}-\frac{1}{1+e^{\mathbf{w}^{\top}\varphi( \mathbf{x})-b+\rho}}.\]
satisfies Assumption 1, and thus provides a scalable classifier. Clearly, \(f_{\mathbf{\theta}}(\mathbf{x},\rho)\) is a continuous and monotonically increasing function on \(\rho\). Moreover,
\[\lim_{\rho\rightarrow-\infty}f_{\theta}(\mathbf{x},\rho)=-1/2<0<1/2=\lim_{\rho \rightarrow+\infty}f_{\theta}(\mathbf{x},\rho).\]
Thus, we conclude that
\[\phi_{\mathbf{\theta}}(\mathbf{x},\rho)\doteq\begin{cases}+1&\quad\text{if}\;\;f_{\mathbf{ \theta}}(\mathbf{x},\rho)<0,\\ -1&\quad\text{otherwise}.\end{cases}\]
is a scalable classifier.
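A minimal sketch of the scalable LR predictor and of its boundary radius \(\bar{\rho}(\mathbf{x})=b-\mathbf{w}^{\top}\varphi(\mathbf{x})\) is reported below (our illustration only; \(\mathbf{w},b\) are placeholders for the minimizers of the weighted negative log-likelihood, and the identity feature map is used).

```python
import numpy as np

# Placeholders standing for the minimizers of L(w, b | x, y).
w, b = np.array([1.0, -2.0]), 0.5

def f_lr(X, rho):
    """Scalable LR predictor: 1/2 - 1/(1 + exp(w^T x - b + rho))."""
    z = X @ w - b + rho
    return 0.5 - 1.0 / (1.0 + np.exp(z))

def rho_bar_lr(X):
    """Boundary radius: f(x, rho) = 0  <=>  rho = b - w^T x."""
    return b - X @ w

X = np.array([[0.2, 0.1]])
vals = np.array([f_lr(X, rho)[0] for rho in np.linspace(-10, 10, 5)])
print(np.round(vals, 3))   # increasing in rho, bounded in (-1/2, 1/2)
print(rho_bar_lr(X))       # [0.5]
```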
**Remark II.2** (Generality of SC).: _We remark that the three examples above, although already significant in themselves, represent only a small subset of possible scalable classifiers. Indeed, as stated in Property 3, any standard classifier can be easily converted into a scalable one. Thus, the results presented in this paper can be directly applied, for example, to any deep neural network classifier._
**Remark II.3**.: _We emphasize that one of the main advantages of our approach is that the distribution of the calibration set need not be equal to that of the learning set. Rather, it should match the distribution for which we want to impose probabilistic guarantees. This is a crucial observation, since the probabilistic guarantees apply only to the distribution from which the calibration set was drawn, which must therefore be chosen carefully. Note also that as the desired degree of guarantee changes, the cardinality required for the calibration set changes._
**Example II.2**.: _To give the reader a simple but meaningful idea of the method, Figure 2 shows the behavior of the PSR as \(\varepsilon\) varies while \(\delta\) is fixed to \(10^{-6}\). For this example, we sampled with equal probability two classes, "safe" \(\mathrm{S}\) and "unsafe" \(\mathrm{U}\), from two Gaussian distributions with respectively means and covariance matrices_
\[\mu_{\mathrm{S}}=\begin{bmatrix}-1\\ -1\end{bmatrix},\,\Sigma_{\mathrm{S}}=\mathrm{I}\;;\;\mu_{\mathrm{U}}= \begin{bmatrix}+1\\ +1\end{bmatrix},\,\Sigma_{\mathrm{U}}=\mathrm{I}\]
_where \(\mathrm{I}\) is the identity matrix. We sampled \(3000\) points for the training set and \(10000\) for the test set, and \(n_{\mathrm{c}}=n_{\mathrm{c}}(\varepsilon)\) points for the calibration set according to Corollary 1.1 (from \(146\) points for \(\varepsilon=0.5\) up to \(7261\) for \(\varepsilon=0.01\)). The behaviour of the PSR constructed via the scalable classifiers is in agreement with the theory developed: the smaller \(\varepsilon\) (i.e. the smaller the allowed error), the smaller the PSR, so as to guarantee a higher probability of safety. For scalable SVM (left) and scalable SVDD (middle) we choose a linear kernel, while for scalable LR (right) a Gaussian kernel was used. The blue circles represent safe points \(\mathbf{x}\,\dot{=}\,\mathrm{S}\), while the red crosses represent unsafe ones, \(\mathbf{x}\,\dot{=}\,\mathrm{U}\)._
## III Finite families of hyperparameters
Probabilistic scaling guarantees confidence in prediction for any given scalable classifier. In other words, for any _fixed_ value of hyperparameter \(\mathbf{\theta}\), the safety set obtained selecting the scaling parameter \(\rho\) according to our procedure will fulfill the required probabilistic guarantees (Theorem II.1). However, it should be remarked that different values of \(\mathbf{\theta}\) will correspond to different models, and the resulting set will consequently be different, both in "size" and in "goodness". In particular, if the starting SC has been chosen badly, our procedure would lead to a very small PSR, that would be indeed guaranteed theoretically, but with no practical use.
Hence, the problem of selecting the best initial SC becomes of great importance. In our setup, this problem translates into choosing the best value for the hyperparameter. Also, we remark that, in general, there may be other parameters that affect the performance of a classifier, such as the choice of different kernels, different weights, different regularizations, and many others. Hence, in general, the hyperparameter \(\mathbf{\theta}\) may have larger dimension and encode several possible choices.
To formally state our problem, we assume to have a finite set of \(m\) possible hyperparameters to choose from
\[\Theta=\left\{\mathbf{\theta}^{(1)},\mathbf{\theta}^{(2)},\ldots,\mathbf{\theta}^{(m)} \right\}, \tag{17}\]
and we consider the problem of selecting the "best" one.
Hence, we assume we are given a performance function \(J:\Theta\rightarrow\mathds{R}\) which measures the goodness of the model described by \(\mathbf{\theta}\). Then, we will choose
\[\mathbf{\theta^{\star}}\doteq\arg\max_{\mathbf{\theta}\in\Theta}J(\mathbf{\theta}).\]
Clearly, depending on the problem at hand, different cost functions may be devised. We discuss a possible meaningful choice of performance function in Section III-B. In the following section, we show how the scaling procedure can be easily modified to guarantee that the selected SC, and the ensuing estimate of the PSR, still enjoy the desired probabilistic guarantees.
### _Probabilistic scaling for finite families of SC_
The following result, whose proof is a direct consequence of Bonferroni's inequality and is omitted for brevity, shows how the results in Theorem II.1 may be immediately extended to the case of a finite family of classifiers, i.e. a finite set of candidate SCs described by a finite set of possible values of the hyperparameters.
**Theorem III.1** (Probabilistic Safety Region for finite families of hyperparameters).: _Consider the classifier (4), a finite set of possible hyperparameter values \(\mathbf{\theta}\in\Theta=\left\{\mathbf{\theta}^{(1)},\mathbf{\theta}^{(2)},\ldots,\mathbf{ \theta}^{(m)}\right\}\), and suppose that Assumption 1 holds and that \(\Pr\{\mathbf{x}\in\mathcal{X}\}=1\). Fix a risk parameter \(\varepsilon\in(0,1)\), a probability level \(\delta\in(0,1)\) and an integer discarding parameter \(r\geq 1\). Given \(\mathcal{Z}_{c}^{U}=\left\{(\bar{\mathbf{x}}_{j}^{U},-1)\right\}_{j=1}^{n_{U}}\) corresponding to all the unsafe samples in a calibration set \(\mathcal{Z}_{c}\) of \(n_{c}\geq r\) i.i.d. samples, for all \(\mathbf{\theta}^{(k)}\), \(k\in[m]\), compute the corresponding scaling factors:_
* _compute the scaling parameters_ \[\bar{\rho}_{j}^{(k)}\text{ such that }f_{\mathbf{\theta}^{(k)}}(\bar{\mathbf{x}}_{j}^{U},\bar{\rho}_{j}^{(k)})=0,\quad j\in[n_{U}],k\in[m],\]
* _compute the \(k\)-th probabilistic radius and the \(k\)-th probabilistic safety region of level \(\varepsilon\), i.e._ \[\rho_{\varepsilon}^{(k)} \doteq \max^{(r)}\left(\{\bar{\rho}_{j}^{(k)}\}_{j=1}^{n_{U}}\right), \tag{18}\] \[\mathcal{S}_{\varepsilon}^{(k)} \doteq \left\{\begin{array}{cc}\mathcal{S}\left(\rho_{\varepsilon}^{(k)}\right)&\text{if }n_{U}\geq r\\ \mathcal{X}&\text{otherwise.}\end{array}\right. \tag{19}\]

_Then, the following holds_

\[\Pr\left\{\Pr\Bigl{\{}y=-1\text{ and }\boldsymbol{x}\in\mathcal{S}_{\varepsilon}^{(k)}\Bigr{\}}\leq\varepsilon\right\}\geq 1-m\mathbf{B}(r-1;n_{c},\varepsilon),\quad\forall k\in[m]. \tag{20}\]

Figure 2: 2D examples of PSRs via, respectively from left to right, scalable SVM (linear kernel), scalable SVDD (linear kernel) and scalable LR (Gaussian kernel). Synthetic test data were sampled from Gaussian distributions and classified for varying values of \(\varepsilon\) (from \(0.01\) to \(0.5\), from lighter to darker colors), after calibrating the scaling parameters with a calibration set of size \(n_{c}\) according to bound (2) and \(\delta=10^{-6}\). Blue points refer to the safe class (\(\mathbf{x}\doteq\mathds{S}\)) and red crosses to the unsafe one (\(\mathbf{x}\doteq\mathds{U}\)).
In particular, this means that all sets \(\mathcal{S}_{\varepsilon}^{(k)}\) are valid PSR candidates, and we have the possibility of selecting among those the "best" one according to some specific measure of how we expect the SC to behave. In the next subsection, we propose a possible criterion which proved to be very effective in our experience.
### _Increase of safe points_
In general, one is interested in a solution which, besides providing probabilistic guarantees on the safe region, i.e. minimizing the probability of having unsafe points in the set \(\mathcal{S}_{\varepsilon}^{(k)}\), also maximises the number of safe points captured by the region itself. To this end, we first notice that, when applying the scaling procedure, we are basically only exploiting the unsafe points in the calibration set \(\mathcal{Z}_{c}\) (i.e. the points belonging to \(\mathcal{Z}_{c}^{U}\)).

Figure 3: Plots of PSRs at the \(\varepsilon=0.05\) level for Gaussian SVDD with different regularization parameters (\(\eta\)) and different weights (\(\tau\)). The shape of the region changes by varying the design parameters, while maintaining the probabilistic guarantee on the number of unsafe points within it. The best configuration is chosen by maximizing a performance index, in this case the number of calibration points contained in the region (see equation (21)). For this toy example, the best configuration is obtained for \(\eta=10^{-1}\) and \(\tau=0.9\), but others can be found by increasing the number of candidate design parameters.
It is thus immediate to observe that the remaining points in the calibration set, i.e. the points belonging to
\[\mathcal{Z}_{c}^{S}=\mathcal{Z}_{c}\setminus\mathcal{Z}_{c}^{U},\]
that is, the set containing all the safe (\(+1\)) points in \(\mathcal{Z}_{c}\), may be exploited in evaluating the goodness of the candidate sets. To this end, given a candidate set \(\mathcal{S}_{\varepsilon}^{(k)}\), we measure its goodness through the number of such safe points it captures, i.e. we choose as performance function
\[J(\boldsymbol{\theta}^{(k)})\doteq\left|\left\{\boldsymbol{z}\in\mathcal{Z}_ {c}^{S}:\boldsymbol{z}\in\mathcal{S}_{\varepsilon}^{(k)}\right\}\right|. \tag{21}\]
**Example III.1**.: _Consider the scalable SVDD with Gaussian kernel, in the same design as Example II.2, but with a probability of sampling outliers per class set at \(p_{O}=0.1\) (to allow for some noise), with only \(1,000\) points for the test set (to make the boundary plot clearer, see Figure 3) and with \(\varepsilon\) set to \(0.05\) (which gives a calibration set with \(2,064\) points). We computed the probabilistic safety region \(\mathcal{S}_{\varepsilon}\) for different values of the hyperparameters \(\boldsymbol{\theta}=[\eta,\tau]\), specifically \(\eta=[10^{-2},10^{-1},1]\) and \(\tau=[0.1,0.2,\ldots,0.9]\). All regions satisfy the probabilistic bound on the number of unsafe points within \(\mathcal{S}_{\varepsilon}\), i.e. \(\Pr\{\boldsymbol{x}\,\hat{=}\,\text{U}\text{ and }\boldsymbol{x}\in\mathcal{S}_{ \varepsilon}\}<0.05\), but the area covered changes as the design parameters change. The best region can be chosen as the one that maximizes a performance index, in this case equation (21), which maximizes the number of safe points in the PSR._
Finally, it is worth noting that, more generally, the parameter to be optimized can be specified in principle according to the specific problem to be solved. For example, \(J\) can be defined such that it maximizes accuracy or minimizes only false positives or false negatives or optimizes any other performance index.
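The selection procedure of Theorem III.1 combined with criterion (21) can be sketched in Python as follows (our illustration only; each candidate hyperparameter is assumed to come with callables returning the classifier scores and the boundary radii, and the confidence degrades to \(1-m\,\mathbf{B}(r-1;n_{c},\varepsilon)\) as stated in (20)).

```python
import numpy as np

def select_hyperparameters(candidates, X_cal, y_cal, r):
    """Sketch of Theorem III.1 with criterion (21). `candidates` maps each
    hyperparameter theta to a pair (f_theta, rho_bar_theta) of callables:
    f_theta(X, rho) -> scores, rho_bar_theta(X) -> boundary radii.
    Returns the best theta, its rho_eps and the attained value of J."""
    X_cal, y_cal = np.asarray(X_cal), np.asarray(y_cal)
    unsafe, safe = X_cal[y_cal == -1], X_cal[y_cal == +1]
    best = None
    for theta, (f_theta, rho_bar_theta) in candidates.items():
        radii = np.sort(rho_bar_theta(unsafe))[::-1]
        if radii.size < r:
            rho_eps = None                 # S_eps = whole feature space X
            J = safe.shape[0]
        else:
            rho_eps = radii[r - 1]         # generalized max, eq. (18)
            J = int(np.sum(f_theta(safe, rho_eps) < 0.0))   # eq. (21)
        if best is None or J > best[2]:
            best = (theta, rho_eps, J)
    return best
```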
## IV A real-world application: Vehicle Platooning
Safety-critical assessment is a fundamental requirement in the automotive industry, and vehicle platooning (VP) [43] represents one of the most challenging CPSs (Cyber-Physical Systems) in this context. The main goal of VP is to find the best trade-off between performance (i.e., maximizing speed and minimizing vehicle mutual distance) and safety (i.e., collision avoidance). With the idea of finding the largest region in the input space where safety is probabilistically guaranteed, we tested our scalable classifiers on the following scenario: given the platoon at a steady state of speed and reciprocal distance of the vehicles, a braking manoeuvre is applied by the leader of the platoon [44, 45]. Safety refers to collisions between adjacent vehicles (in the study, a collision is actually registered when the reciprocal distance between vehicles reaches a lower bound, e.g. 2 m). The dynamics of the system are generated by the following differential equations [44]:
\[\begin{cases}\dot{v}_{\ell}=\frac{1}{m_{\ell}}(F_{\ell}-(a_{\ell}+b_{\ell} \cdot v_{\ell}^{2}))\\ \dot{d}_{\ell}=v_{\ell-1}-v_{\ell}\end{cases} \tag{22}\]
where \(v_{\ell},m_{\ell},a_{\ell},b_{\ell}\) and \(F_{\ell}\) are, respectively, the speed, the mass, the tire-road rolling resistance, the aerodynamic drag and the braking force (the control law) of vehicle \(\ell\), and \(d_{\ell}\) is the distance of vehicle \(\ell\) from the previous one \(\ell-1\).
The behaviour of the dynamical system is synthesised by the following vector of features:
\[\boldsymbol{x}=[N,\boldsymbol{\iota}(0),F_{0},\mathbf{m},\mathbf{q},\mathbf{p }], \tag{23}\]
\(N+1\) being the number of vehicles in the platoon, \(\boldsymbol{\iota}=[\mathbf{d},\mathbf{v},\mathbf{a}]\) are the vectors of reciprocal distance, speed, and acceleration of the vehicles, respectively (\(\boldsymbol{\iota}\)(0) denotes that the quantities are sampled at time \(t=0\), after which a braking force is applied by the leader [45] and simulations are set in order to manage possible transient periods and achieve a steady state of \(\boldsymbol{\iota}\) before applying the braking), \(\mathbf{m}\) is the vector of weights of the vehicles, \(F_{0}\) is the braking force applied by the leader, \(\mathbf{q}\) is the vector of quality measures of the communication medium (fixed delay and packet error rate (PER) are considered in the simulations) and finally \(\mathbf{p}\) is the vector of tuning parameters of the control scheme.
The Plexe simulator [44, 45] has been used to register \(20000\) observations in the following ranges: \(N\in[3,8],\ F_{0}\in[-8,-1]\times 10^{3}N\), \(\mathbf{q}\in[0,0.5],\ \mathbf{d}(0)\in[4,9]\ \text{m},\ \mathbf{v}(0)\in[10,90]\) Km/h. Initial acceleration \(\mathbf{a}(0)\) is computed as \(\mathbf{a}(0)=F_{0}/\mathbf{m}\) Km/h\({}^{2}\). The output variable is then defined as \(y\in\mathcal{Y}=\{-1,+1\}\), where \(-1\) means "collision" and \(+1\) means "non-collision".
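For the interested reader, a crude forward-Euler integration of the dynamics (22) is sketched below (our illustration only: SI units are used, all numerical values are placeholders rather than the Plexe configuration, and the delayed copying of the leader's braking force is a simplistic stand-in for the actual control law \(\mathbf{p}\) implemented in the simulator).

```python
import numpy as np

def simulate_platoon(v0, d0, masses, a_r, b_d, F0, delay=0.2,
                     dt=0.01, T=15.0, d_min=2.0):
    """Forward-Euler sketch of dynamics (22). The leader applies the braking
    force F0 at t = 0; each follower applies the same force after a fixed
    communication delay (a placeholder controller, not the Plexe one).
    Returns -1 ("collision": some distance < d_min) or +1 ("non-collision")."""
    n = len(v0)                       # number of vehicles (N + 1)
    v = np.array(v0, dtype=float)     # speeds [m/s]
    d = np.array(d0, dtype=float)     # d[l] = distance of vehicle l to l-1, l >= 1
    for step in range(int(T / dt)):
        t = step * dt
        F = np.zeros(n)
        F[0] = F0                                  # leader brakes at t = 0
        if t >= delay:
            F[1:] = F0                             # delayed reaction of followers
        v += dt * (F - (a_r + b_d * v ** 2)) / masses
        v = np.maximum(v, 0.0)                     # vehicles do not move backwards
        d[1:] += dt * (v[:-1] - v[1:])
        if np.any(d[1:] < d_min):
            return -1
    return +1

# Illustrative call (placeholder values, not the Plexe setup):
label = simulate_platoon(v0=[20.0] * 4, d0=[0.0, 8.0, 8.0, 8.0],
                         masses=np.full(4, 1500.0), a_r=100.0, b_d=0.4,
                         F0=-6000.0)
print(label)
```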
We searched for safety at three levels of guarantee (\(\varepsilon=0.01,0.05,0.1\)) and different hyperparameters (\(\eta=10^{-2},10^{-1},1\) and \(\tau=0.1,0.5,0.9\)), evaluating the performance on the test set (reported in Table III-B) by computing the empirical probability of getting a collision inside the "non-collision" probabilistic safety region \(\mathcal{S}_{\varepsilon}\), \(\Pr\!\left\{\boldsymbol{x}\,\hat{=}\,\text{collision and }\boldsymbol{x}\in\mathcal{S}_{\varepsilon}\right\}\), and the number of non-collision points of the calibration set contained in \(\mathcal{S}_{\varepsilon}\) as the hyperparameters vary, \(J(\boldsymbol{\eta},\boldsymbol{\tau})\).
We divided the dataset into a training set (\(n_{tr}=3000\) points), a calibration set (\(n_{c}=10320,2064,1032\) respectively for \(\varepsilon=0.01,0.05,0.1\)) and a test set (\(n_{ts}\), the remaining points).
In this numerical example, we use the scalable classifiers presented in Subsection II-C: scalable SVM (SC-SVM), scalable SVDD (SC-SVDD) and scalable LR (SC-LR). In all of them, the Gaussian kernel has been employed. For all scalable classifiers, the trade-off between the guarantee and the number of safe points of the calibration set within the "non-collision" safety region is good, allowing for the construction of operational regions where safety can be guaranteed. In particular, the best performance obtained by each classifier at different levels of \(\varepsilon\) is highlighted in bold in Table III-B. Furthermore, Figure 4 shows the trend of the probability of getting a collision within the safety region as \(\varepsilon\) varies, with \(\eta=1\) and \(\tau=0.5\) (i.e., without regularizing and weighting both classes equally). As expected, the behavior is (almost) linear with \(\varepsilon\), with SC-LR deviating
slightly from SC-SVM and SC-SVDD.
It should be mentioned that this work represents a significant improvement over the results obtained on the same dataset in [46]. In that previous research, a safety set was obtained by numerically minimizing the number of false positives by controlling the radius of an SVDD classifier. Here, with the new theory based on SCs and probabilistic scaling, we obtain better results in terms of performance (i.e., size of the safety regions) and, more importantly, within a _formally solid mathematical framework_, applicable to any binary classifier.
## V Conclusions
Statistical learning can now rely on two significant new concepts: scalable classifiers and probabilistic safety regions. SCs constitute a new family of classifiers that extends well-established ML classification models. Three examples of SCs have been proposed (SC-SVM, SC-SVDD and SC-LR), but more can be developed. For example, the results of the paper can be straightforwardly applied in the field of deep neural networks, which, in principle, would have a crucial impact: they would allow classification performance to be controlled without lengthy retraining. Also, the definition of the PSR and its _proved_ properties provide the field of CPSs with a new, relevant methodology for evaluating safety. The range of applications in which PSRs can be introduced is wide: from safety monitoring to performance improvement and conformity guarantees, just to name a few. Moreover, the ideas introduced here open up new research directions and parallels with other theories. In particular, and with this remark we conclude the paper, our theory has much in common with Conformal Prediction, allowing for connections that both methodologies can use to improve each other.
**Remark V.1** (Link with Conformal Prediction).: _We shall remark that our approach bears various similarities with the Conformal Prediction approach. Conformal Prediction is a framework developed starting in the late nineties and early two-thousands by V. Vovk and co-workers. We refer the reader to the surveys [47, 48, 49] for a nice introduction to this methodology. Similarly to our approach, CP makes use of a calibration set to perform an a-posteriori verification of the designed classifier. In practice, it returns a measure of its "conformity" to the calibration data, based on the introduction of specific score functions. We are currently working on showing how specific score functions may be designed in the case of scalable classifiers. Preliminary results, which go beyond the scope of the present work, suggest that the scaling setup proposed here may be used to prove interesting safety properties of CP._
## Acknowledgment
We would like to thank A. Angelopoulos for the inspiring discussions on possible links between scalable classifiers and conformal predictions.
Figure 4: Empirical probability of collision, measured in the test set, as a function of \(\varepsilon\).
## Appendix
This appendix is devoted to the proofs and derivations of all the original results presented in this paper.
### _Property 2_
Proof.: Because of (6) we have that if \(\rho\) is small enough then \(f_{\mathbf{\theta}}(\mathbf{x},\rho)<0\). On the other hand, if \(\rho\) is large enough then \(f_{\mathbf{\theta}}(\mathbf{x},\rho)>0\). This, along with the continuity of \(f_{\mathbf{\theta}}(\mathbf{x},\rho)\) in \(\rho\), guarantees the existence of \(\rho\) such that \(f_{\mathbf{\theta}}(\mathbf{x},\rho)=0\). The uniqueness follows from the monotonicity assumption on \(f_{\mathbf{\theta}}(\mathbf{x},\rho)\). Denote by \(\bar{\rho}(\mathbf{x})\) this unique value of \(\rho\) satisfying \(f_{\mathbf{\theta}}(\mathbf{x},\bar{\rho}(\mathbf{x}))=0\). From the monotonically increasing nature of \(f_{\mathbf{\theta}}(\mathbf{x},\rho)\) we have
\[f_{\mathbf{\theta}}(\mathbf{x},\rho)\geq 0\ \ \Leftrightarrow\ \ \rho\geq\bar{\rho}(\mathbf{x}).\]
Thus,
\[\phi_{\theta}(\mathbf{x},\rho)=-1\Leftrightarrow\rho\geq\bar{\rho}(\mathbf{x}).\]
### _Theorem II.1_
Proof.: Let us introduce the auxiliary function
\[\psi:\mathcal{X}\times\{-1,1\}\rightarrow[-1,1),\]
which is defined as
\[\psi(\mathbf{x},y)\doteq\left\{\begin{array}{cc}-1&\text{if}\ \ y=+1,\\ \frac{\bar{\rho}(\mathbf{x})}{1+|\bar{\rho}(\mathbf{x})|}&\text{otherwise.}\end{array}\right. \tag{24}\]
Denote now
\[\psi_{\varepsilon}=\max^{(r)}(\{\psi(\mathbf{x}_{i},y_{i})\}_{i=1}^{n_{c}}).\]
Since \(\delta\in(0,1)\), \(\varepsilon\in(0,1)\) and the integers \(n_{c}\geq r\geq 1\) satisfy
\[\sum_{i=0}^{r-1}\binom{n_{c}}{i}\varepsilon^{i}(1-\varepsilon)^{n_{c}-i}\leq\delta,\]
we have from Property 1 that, with a probability no smaller than \(1-\delta\),
\[\Pr\Bigl{\{}\psi(\mathbf{x},y)>\psi_{\varepsilon}\Bigr{\}}\leq\varepsilon. \tag{25}\]
The rest of the proof shows that the previous inequality is equivalent to the claim of the theorem. That is,
\[\Pr\Bigl{\{}\psi(\mathbf{x},y)>\psi_{\varepsilon}\Bigr{\}}\leq\varepsilon\ \ \Leftrightarrow\ \ \mathbb{P}\Bigl{\{}y=-1\text{ and }\mathbf{x}\in S_{\varepsilon}\Bigr{\}}\leq\varepsilon.\]
We consider two cases \(n_{U}<r\) and \(n_{U}\geq r\).
* **Case \(n_{U}<r\)**: By definition, \[-1=\psi(\mathbf{x},+1)<\psi(\mathbf{x},-1)\in(-1,1),\ \forall\mathbf{x}\in\mathcal{X}.\] This means that the smallest values for \(\psi(\mathbf{x},y)\) are attained at the safe samples. From \(n_{U}<r\) we have that at most \(r-1\) elements of the calibration set correspond to unsafe samples. Equivalently, no more than \(r-1\) elements of \(\{\psi(\mathbf{x}_{i},y_{i})\}_{i=1}^{n_{c}}\) are larger than \(-1\). This implies that the \(r\)-th largest value in \(\{\psi(\mathbf{x}_{i},y_{i})\}_{i=1}^{n_{c}}\) corresponds to a safe sample and is equal to \(-1\). That is, \[\psi_{\varepsilon}=\max^{(r)}(\{\psi(\mathbf{x}_{i},y_{i})\}_{i=1}^{n_{c}}))=-1.\] Thus, the inequality (25) is equivalent in this case to \[\Pr\Bigl{\{}\psi(\mathbf{x},y)>-1\Bigr{\}}\leq\varepsilon.\] By definition, for every \(\mathbf{x}\in\mathcal{X}\) we have \[\psi(\mathbf{x},y)>-1\ \Leftrightarrow\ y=-1.\] Thus, we obtain that in this case, \[\Pr\Bigl{\{}\psi(\mathbf{x},y)>\psi_{\varepsilon}\Bigr{\}}\leq\varepsilon\ \ \Leftrightarrow\ \ \Pr\Bigl{\{}y=-1\Big{\}}\leq\varepsilon\] \[\ \Leftrightarrow\ \ \Pr\Bigl{\{}y=-1\text{ and }\mathbf{x}\in\mathcal{X} \Bigr{\}}\leq\varepsilon.\] From the assumptions of the Theorem, we have that, by definition, \(n_{U}<r\) implies \(S_{\varepsilon}=\mathcal{X}\). Thus, we conclude \[\Pr\Bigl{\{}\psi(\mathbf{x},y)>\psi_{\varepsilon}\Bigr{\}}\leq\varepsilon\ \Leftrightarrow\Pr\Bigl{\{}y=-1\text{ and }\mathbf{x}\in S_{\varepsilon}\Bigr{\}}\leq\varepsilon.\]
* **Case \(n_{U}\geq r\)**: In this case, the \(r\)-largest value of \(\{\psi(\mathbf{x}_{i},y_{i})\}_{i=1}^{n_{c}}\) is attained at an element of the unsafe calibration set \(\mathcal{Z}_{c}^{\prime}=\bigl{\{}(\tilde{\mathbf{x}}_{j}^{U},-1)\bigr{\}}_{j=1}^{ n_{U}}\subseteq\mathcal{Z}_{c}\). That is, \[\psi_{\varepsilon} = \max^{(r)}(\{\psi(\mathbf{x}_{i},y_{i})\}_{i=1}^{n_{U}})\] \[= \max^{(r)}(\{\psi(\tilde{\mathbf{x}}_{j}^{U},-1)\}_{j=1}^{n_{U}})\in( -1,1).\] Define now \[\rho_{\varepsilon}=\max^{(r)}\left(\{\bar{\rho}(\tilde{\mathbf{x}}_{j}^{U})\}_{j=1} ^{n_{U}}\right).\] Since \(\frac{\bar{\rho}(\mathbf{x})}{1+|\bar{\rho}(\mathbf{x})|}\) is a monotonically increasing function on \(\bar{\rho}(\mathbf{x})\), we have that \(\psi_{\varepsilon}\) can be obtained by means of \(\rho_{\varepsilon}\). That is, \[\psi_{\varepsilon}=\max^{(r)}(\{\psi(\tilde{\mathbf{x}}_{j}^{U},-1)\}_{j=1}^{n_{U}} )=\frac{\rho_{\varepsilon}}{1+|\rho_{\varepsilon}|}.\] Thus, from \(\psi_{\varepsilon}>-1\) and the previous expression we obtain the equivalences \[\psi(\mathbf{x},y)>\psi_{\varepsilon} \Leftrightarrow y=-1\text{ and }\frac{\bar{\rho}(\mathbf{x})}{1+|\bar{\rho}(\mathbf{x})|}>\frac{\rho_{ \varepsilon}}{1+|\rho_{\varepsilon}|}\] \[\Leftrightarrow y=-1\text{ and }\bar{\rho}(\mathbf{x})\geq\rho_{\varepsilon}.\] Therefore, \(\Pr\Bigl{\{}\psi(x,y)>\psi_{\varepsilon}\Bigr{\}}\leq\varepsilon\) is equivalent to \[\Pr\Bigl{\{}y=-1\text{ and }\bar{\rho}(\mathbf{x})>\rho_{\varepsilon}\Bigr{\}}\leq\varepsilon.\] From the monotonicity of \(f_{\theta}(\mathbf{x},\rho)\) on \(\rho\) (Assumption 1) we obtain that the previous inequality can be rewritten as \[\Pr\Bigl{\{}y=-1\text{ and }f_{\theta}(\mathbf{x},\bar{\rho}(x))>f_{\theta}(\mathbf{x}, \rho_{\varepsilon})\Bigr{\}}\leq\varepsilon.\] Taking into consideration that \(f_{\theta}(\mathbf{x},\bar{\rho}(x))=0\) we obtain that \(\Pr\Bigl{\{}\psi(x,y)>\psi_{\varepsilon}\Bigr{\}}\leq\varepsilon\) is equivalent to \[\Pr\Bigl{\{}y=-1\text{ and }f_{\theta}(\mathbf{x},\rho_{\varepsilon})<0\Bigr{\}}\leq\varepsilon.\] From the assumptions of the Theorem we have that, by definition, \(n_{U}\geq r\) implies that \(S_{\varepsilon}\) is equal to \(\{\ \mathbf{x}\in\mathcal{X}\ :\ f_{\theta}(\mathbf{x},\rho_{\varepsilon})<0\ \}\). Thus, we conclude in this case that \[\Pr\Bigl{\{}\psi(\mathbf{x},y)>\psi_{\varepsilon}\Bigr{\}}\leq\varepsilon\ \Leftrightarrow\Pr\Bigl{\{}y=-1\text{ and }\mathbf{x}\in S_{\varepsilon}\Bigr{\}}\leq\varepsilon.\] |
2309.16223 | GInX-Eval: Towards In-Distribution Evaluation of Graph Neural Network
Explanations | Diverse explainability methods of graph neural networks (GNN) have recently
been developed to highlight the edges and nodes in the graph that contribute
the most to the model predictions. However, it is not clear yet how to evaluate
the correctness of those explanations, whether it is from a human or a model
perspective. One unaddressed bottleneck in the current evaluation procedure is
the problem of out-of-distribution explanations, whose distribution differs
from those of the training data. This important issue affects existing
evaluation metrics such as the popular faithfulness or fidelity score. In this
paper, we show the limitations of faithfulness metrics. We propose GInX-Eval
(Graph In-distribution eXplanation Evaluation), an evaluation procedure of
graph explanations that overcomes the pitfalls of faithfulness and offers new
insights on explainability methods. Using a fine-tuning strategy, the GInX
score measures how informative removed edges are for the model and the EdgeRank
score evaluates if explanatory edges are correctly ordered by their importance.
GInX-Eval verifies if ground-truth explanations are instructive to the GNN
model. In addition, it shows that many popular methods, including
gradient-based methods, produce explanations that are not better than a random
designation of edges as important subgraphs, challenging the findings of
current works in the area. Results with GInX-Eval are consistent across
multiple datasets and align with human evaluation. | Kenza Amara, Mennatallah El-Assady, Rex Ying | 2023-09-28T07:56:10Z | http://arxiv.org/abs/2309.16223v2 | # GInX-Eval: Towards In-Distribution Evaluation of Graph Neural Network Explanations
###### Abstract
Diverse explainability methods of graph neural networks (GNN) have recently been developed to highlight the edges and nodes in the graph that contribute the most to the model predictions. However, it is not clear yet how to evaluate the _correctness_ of those explanations, whether it is from a human or a model perspective. One unaddressed bottleneck in the current evaluation procedure is the problem of out-of-distribution explanations, whose distribution differs from those of the training data. This important issue affects existing evaluation metrics such as the popular faithfulness or fidelity score. In this paper, we show the limitations of faithfulness metrics. We propose **GInX-Eval** (Graph **In**-distribution **eX**planation **Evaluation**), an evaluation procedure of graph explanations that overcomes the pitfalls of faithfulness and offers new insights on explainability methods. Using a fine-tuning strategy, the GInX score measures how informative removed edges are for the model and the EdgeRank score evaluates if explanatory edges are correctly ordered by their importance. GInX-Eval verifies if ground-truth explanations are instructive to the GNN model. In addition, it shows that many popular methods, including gradient-based methods, produce explanations that are not better than a random designation of edges as important subgraphs, challenging the findings of current works in the area. Results with GInX-Eval are consistent across multiple datasets and align with human evaluation.
## 1 Introduction
While people in the field of explainable AI have long argued about the nature of good explanations, the community has not yet agreed on a robust collection of metrics to measure explanation _correctness_. Phenomenon-focused explanations should match the ground-truth defined by humans and are evaluated by the accuracy metric. Model-focused explanations contribute the most to the model predictions and are evaluated by the faithfulness metrics. Because ground-truth explanations are often unknown, faithfulness and its variants are the most common measures of quality. Faithfulness metrics remove or retrain only the important graph entities identified and observe the changes in model outputs. However, this edge masking strategy creates Out-Of-Distribution (OOD) graph inputs, so it is unclear if a high faithfulness score comes from the fact that the edge is important or from the distribution shift induced by the edge removal (Gunnemann, 2022).
We propose **GInX-Eval**, an evaluation procedure of in-distribution explanations that brings new perspectives on GNN explainability methods. Testing two edge removal strategies, we evaluate the impact of removing a fraction \(t\) of edges in the graph on the GNN model performance. To overcome the OOD problem of faithfulness metrics, the model is fine-tuned and tested on the reduced graphs at each degradation level. The best explainability methods can identify the graph entities whose removal triggers the sharpest model accuracy degradation. We compare generative and non-generative methods on their **GInX** score against a random baseline across four real-world graph datasets and two synthetic datasets, all used for graph classification tasks. With this strategy, we show that existing explainers are not better than random in most of the cases. In addition, we show the overall superiority of GNNExplainer, PGMExplainer, and most of the generative methods above
gradient-based methods and Occlusion. Our results lead to diverging conclusions from recent studies (Yuan et al., 2023; Agarwal et al., 2022) and again question the use of faithfulness as a standard evaluation metric in explainable AI (xAI). The GInX-Eval framework also proposes the **EdgeRank** score to assess the capacity of explainers to correctly order edges by their true importance for the model. Finally, GInX-Eval is a useful tool to validate ground-truth explanations provided with some datasets and discover both human- and model-based explanations. Because it relies on a fine-tuning strategy of black-box pre-trained models, GInX-Eval is also a useful evaluation procedure in real-world scenarios where models are not publicly available and can only be used via API calls. Due to the computational cost of re-training, GInX-Eval is not proposed as a systematic evaluation metric but as a tool to throw light on the true informative power of explainers and validate ground-truth explanations. To summarize our contributions:
* We first show that faithfulness evaluates OOD explanations. In addition, we observe that (1) it is inconsistent with the accuracy metric, (2) it leads to divergent conclusions across datasets, and (3) across edge removal strategies. Overcoming the OOD problem, we propose **GInX-Eval** (**G**raph **In**-distribution e**X**planation **Eval**uation), a new evaluation framework for GNN explainability methods. The **GInX** score evaluates how informative explanatory edges are to the model and the **EdgeRank** score assesses if those edges are correctly ordered by their importance.
* We propose a validation protocol of ground-truth explanations using the GInX score. This way, we can measure the degree of alignment between human-based and model-based explanations.
* With GInX-Eval, we now finally demonstrate the true informative power of well-known explainability methods, filter out bad methods, and choose methods that can correctly rank edges.
The rest of this article is organized as follows. Section 2 discusses the literature on graph neural network (GNN) explainability evaluation and the OOD problem. Section 3 presents the limitations of the current evaluation with faithfulness and introduces GInX-Eval, our new in-distribution evaluation procedure, and its two scores, GInX and EdgeRank. Section 4 presents the experiments that we conducted in detail. Section 5 summarizes the paper and discusses future opportunities.
## 2 Related work
**Evaluation in xAL** To measure the correctness of explanations, a few metrics have been developed. GNN explainability method should satisfy accuracy, faithfulness, stability (Sanchez-Lengeling et al.,
Figure 1: Summary of GInX-Eval procedure. (1) A GNN model is pre-trained to predict the class of the input graphs. An explainability method generates explanatory subgraphs. (2) For each \(t\in[0.1,...,0.9]\), a new train and test datasets are generated where the fraction \(t\) of the top explanatory edges is removed. At each \(t\), the pre-trained GNN model is fine-tuned on the new train dataset, evaluated on the new test set, and the GInX score is computed. If the model performance decreases, i.e., the GInX scores increase, the explanatory edges are informative to the model. The EdgeRank score is also computed to evaluate if explanatory edges are correctly ranked by the explainability method.
Yuan et al., 2023; Agarwal et al., 2021; 2022), consistency and contrastivity (Yuan et al., 2023), usefulness (Colin et al., 2021). The two most popular approaches are: (1) measuring accuracy using ground-truth annotations and (2) measuring faithfulness using objective metrics (Chan et al., 2022). However, the accuracy metric, also referred to as plausibility (Li et al., 2022; Longa et al., 2022; Nauta et al., 2022), needs ground-truth explanations and is therefore less universal and more subjective than faithfulness. Nauta et al. (2022) argue that evaluating how plausible an explanation is to humans is different from evaluating its correctness, and plausibility is not guaranteed to align with faithfulness (Jacovi & Goldberg, 2020). According to Petsiuk et al. (2018), it is preferable to keep humans out of the evaluation to better capture the model's understanding rather than representing the human's view. Faithfulness metrics are therefore the most popular evaluation metrics, but we show later in Section 3.2 that they have strong limitations including evaluating out-of-distribution explanations.
**Solving the OOD problem.** Recent works have proposed to adapt the GNN model or develop robust explainability methods to overcome the out-of-distribution (OOD) problem. Faber et al. (2020) argue that explanations should stay in the training data distribution and propose CoGe to produce Distribution Compliant Explanation (DCE). Li et al. (2021) propose a novel out-of-distribution generalized graph neural network. Hsieh et al. do not remove features but apply small adversarial changes to the feature values. Instead of developing robust methods, Hooker et al. (2018) evaluate interpretability methods by observing how the performance of a retrained model degrades when removing the features estimated as important. While this retraining strategy circumvents the OOD problem, it has only been developed for CNN models on images to evaluate feature importance. Building on this retraining strategy, we propose a new evaluation procedure for GNN models and introduce an alternative to faithfulness metrics.
## 3 Method
This section highlights the limitations of the popular xAI evaluation procedure using faithfulness metrics and proposes GInX-Eval to overcome those. We can assess the informativeness of explanations for the model with the GInX score and the capacity of methods to correctly order explanatory edges by their importance with the EdgeRank score.
### 3.1 Preliminaries
Given a well-trained GNN model \(f\) and an instance of the dataset, the objective of the explanation task is to identify concise graph substructures that contribute the most to the model's predictions. The given graph can be represented as a quadruplet \(G(\mathcal{V},\mathcal{E},\mathbf{X},\mathbf{E})\), where \(\mathcal{V}\) is the node set, \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is the edge set. \(\mathbf{X}\in\mathbb{R}^{|\mathcal{V}|\times d_{n}}\) and \(\mathbf{E}\in\mathbb{R}^{|\mathcal{E}|\times d_{e}}\) denote the feature matrices for nodes and edges, respectively, where \(d_{n}\) and \(d_{e}\) are the dimensions of node features and edge features. In this work, we focus on structural explanation, _i.e.,_ we keep the dimensions of node and edge features unchanged. Given a well-trained GNN \(f\) and an instance represented as \(G(\mathcal{V},\mathcal{E},\mathbf{X},\mathbf{E})\), an explainability method generates an explanatory edge mask \(M\in\mathbb{R}^{|\mathcal{E}|}\) that is normalized. Furthermore, to obtain a human-intelligible explanation, we transform the edge mask to a sparse matrix by forcing it to keep only the fraction \(t\in\mathcal{T}\) of the highest values and setting the rest of the matrix values to zero. Each explainability method can be expressed as a function \(h:\mathcal{G}\rightarrow\mathcal{G}\) that returns for each input graph \(G\) an explanatory subgraph \(h(G)\).
**Edge removal strategies.** There are two strategies to select edges in a graph: the _hard_ selection and the _soft_ selection. Hard selection picks edges from the graph so that the number of edges and nodes is reduced. This creates subgraphs that very likely do not lie in the initial data distribution. Soft selection sets edge weights to zero when edges are to be removed. Therefore it preserves the whole graph structure with all nodes and edge indices. Following these two definitions, we define _hard_ and _soft_ explanations. Note here that the hard removal strategy might break the connectivity of the input graphs, resulting in explanations represented by multiple disconnected subgraphs.
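To make the two strategies concrete, the following minimal Python sketch (our illustration, not code from any released implementation) removes the top-\(t\) fraction of explanatory edges from a graph stored as an edge list; the function name `remove_edges` and the array layout are assumptions made for this example.

```python
import numpy as np

def remove_edges(edge_index, edge_mask, t, strategy="hard"):
    """Remove the top-t fraction of explanatory edges from a graph.

    edge_index: (2, E) integer array of (source, target) node indices.
    edge_mask:  (E,) array of explanatory importance weights in [0, 1].
    t:          fraction of the most important edges to remove.
    """
    n_remove = int(round(t * edge_index.shape[1]))
    top = np.argsort(-edge_mask)[:n_remove]          # most important edges first
    if strategy == "hard":
        # Hard removal: drop the edges entirely; the remaining graph may be
        # disconnected and contain isolated nodes.
        keep = np.setdiff1d(np.arange(edge_index.shape[1]), top)
        return edge_index[:, keep], edge_mask[keep]
    # Soft removal: keep every edge but set the weights of removed edges to
    # zero, preserving the full graph structure.
    weights = np.ones_like(edge_mask, dtype=float)
    weights[top] = 0.0
    return edge_index, weights
```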
### 3.2 Faithfulness metrics
**Definition.** The faithfulness or fidelity scores are the most general quality metrics in the field of GNN explainability. To evaluate the correctness of the explanation, the explanatory subgraph or
weighted graph \(h(G)\) produced by the explainer \(h\) is given as input to the model to compute the fidelity score on the probabilities:
\[fid=|p(f(h(G))=y)-p(f(G)=y)|\in[0;1] \tag{1}\]
where \(y\) is the true label for graph \(G\) and \(f(G)\), \(f(h(G))\) the labels predicted by the GNN given \(G\) and \(h(G)\), respectively. The closer \(fid\) is to 0, the more faithful the explanation is. The faithfulness score is averaged over the \(N\) explanatory graphs \(h(G_{i}),i\leq N\) as:
\[\text{Faithfulness}=1-\frac{1}{N}\sum_{i=1}^{N}|p(f(h(G_{i}))=y_{i})-p(f(G_{i})=y_{i})|\in[0;1] \tag{2}\]
The metric is normalized and the closer it is to 1, the more faithful the evaluated \(N\) explanations are to the initial predictions. The above score corresponds to \(fid_{-}^{prob}\), one of the four forms of the fidelity scores (Yuan et al., 2023), described in Appendix A.1.
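For illustration, the score in Eq. (2) can be computed from the model's class probabilities as in the short sketch below; the arrays `p_full` and `p_expl`, holding the probability assigned to the true class for the original and explanatory graphs, are assumptions introduced here.

```python
import numpy as np

def faithfulness(p_full, p_expl):
    """Faithfulness score of Eq. (2).

    p_full: (N,) probabilities of the true class for the original graphs G_i.
    p_expl: (N,) probabilities of the true class for the explanations h(G_i).
    Returns a value in [0, 1]; values close to 1 indicate faithful explanations.
    """
    fid = np.abs(np.asarray(p_expl) - np.asarray(p_full))   # per-graph fid^prob
    return float(1.0 - fid.mean())
```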
**Prior work.** While faithfulness metrics are the most popular quality measures independent of ground-truth explanations, they have been recently criticized. Faithfulness is based on a "removal" strategy, i.e., the graph entities estimated as important are kept while the others are withdrawn by setting them to a baseline value, either removing them from the graph or setting their weight to zero. Hsieh et al. correctly observe that this evaluation procedure favors graph entities that are far away from the baseline. Consequently, methods that focus on highly weighted edges while maybe ignoring low-weight but still essential connections are favored. In addition, truncated graphs after edge removal can lie out of the data distribution used for training the GNN model (Hooker et al., 2018). In this case, model behavior might differ not because of removing important information but because of evaluating a sample outside the training distribution. The out-of-distribution risk is even larger with graphs because of their discrete nature (Faber et al., 2020).
### 3.3 GInX-Eval
GInX-Eval is an evaluation procedure of explainability methods that overcomes the faithfulness metrics' OOD problem and assesses the informativeness of explanatory edges towards the GNN model. Figure 1 gives an overview of the procedure. To evaluate the explainer \(h\), GInX-Eval first gathers the explanations produced by \(h\) for all graph instances. The explanatory edges can be ranked according to their respective weight in the subgraph: the most important edges have a weight close to 1 in the mask, while the least important ones have weights close to 0. At each degradation level \(t\), we remove the top \(t\) fraction of the ordered explanatory edge set from the input graphs. We generate new train and test sets at different degradation levels \(t\in[0.1,0.2,...,1]\). The pre-trained GNN model is then fine-tuned at each degradation level on the new train dataset and evaluated on the new test data. While being the most computationally expensive aspect of GInX-Eval, fine-tuning is scalable (see Appendix B.4) and we argue that it is a necessary step to decouple whether the model's degradation in performance is due to the loss of informative edges or due to the distribution shift. The objective here is not to provide a computationally efficient evaluation metric but to highlight the limitations of popular evaluation in xAI for GNN and question the superiority of gold standard methods. The pseudo-code to implement GInX-Eval is given in Appendix B.3.
A drop in test accuracy when removing edges indicates that those edges were important for the model to make correct predictions. These edges are therefore considered as important as they are the most informative to the model. It is worth noticing that edges might be correlated and those spurious correlations can lead to an absence of accuracy drop when removing the top important edges and then a sudden decrease in accuracy when all correlated edges are removed.
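The loop below is a minimal sketch of the procedure described above (the authors' pseudo-code is in Appendix B.3); `explainer`, `remove_top_edges`, `fine_tune`, and `test_accuracy` are hypothetical placeholders standing in for an explainability method, the edge removal of Section 3.1, model fine-tuning, and test-set evaluation.

```python
def ginx_eval(model, train_set, test_set, explainer,
              thresholds=(0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9)):
    # 1) Explain every graph once with the pre-trained model.
    train_masks = [explainer(model, g) for g in train_set]
    test_masks = [explainer(model, g) for g in test_set]
    ginx = {}
    for t in thresholds:
        # 2) Remove the top-t explanatory edges (hard or soft) from each graph.
        train_t = [remove_top_edges(g, m, t) for g, m in zip(train_set, train_masks)]
        test_t = [remove_top_edges(g, m, t) for g, m in zip(test_set, test_masks)]
        # 3) Fine-tune the pre-trained model on the degraded train set so that
        #    the evaluation stays in-distribution.
        model_t = fine_tune(model, train_t)
        # 4) GInX(t) = 1 - test accuracy on the degraded test set (Eq. 3).
        ginx[t] = 1.0 - test_accuracy(model_t, test_t)
    return ginx
```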
#### 3.3.1 GInX Score
Following this evaluation procedure, we define the GInX score at \(t\). It captures how low the test accuracy is after removing the top \(t\) edges. Let \(h(G)\) be the explanatory subgraph generated by the method \(h\), \(y\) the true label for graph \(G\) and \(\chi:\mathcal{G}\times\mathcal{T}\rightarrow\mathcal{G}\) the removal function that takes an explanation \(h(G)\) and returns the hard or soft explanatory graph containing the top \(t\in\mathcal{T}\) edges. We define GInX(\(t\)) as:
\[\text{GInX}(t)=1-\text{TestAcc}(t)=1-\frac{1}{N_{test}}\sum_{i=1}^{N_{test}}\mathds{1}(f(G_{i}\setminus\chi(h(G_{i}),t))=y_{i}) \tag{3}\]
The closer the GInX score is to one, the more informative the removed edges are to the model. Note here that the GInX score at \(t\) can be computed following the hard or soft edge removal strategy; however, we show in Appendix A.2 that the GInX score computed with hard edge removal has higher expressiveness.
#### 3.3.2 EdgeRank Score
Based on the GInX score, we can compute the power of explainability methods to rank edges, i.e., to correctly order edges based on their importance. The edge ranking power can be evaluated with the EdgeRank score defined as:
\[\text{EdgeRank}=\sum_{t=0,0.1,\ldots,0.8}(1-t)\times(\text{GInX}(t+0.1)-\text{GInX}(t)) \tag{4}\]
A high edge ranking score indicates methods that assign the highest importance weights to the most genuinely informative edges for the model. This is especially important when one tries to characterize an explanation and identify the fundamental entities within the explanatory substructure.
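Given GInX values at the successive degradation levels, Eq. (4) reduces to a weighted sum of consecutive differences; a small sketch is shown below, assuming the dictionary `ginx` also contains the value at \(t=0\) (no edges removed).

```python
def edgerank(ginx):
    """EdgeRank score of Eq. (4) from GInX values keyed by threshold t."""
    ts = [round(0.1 * k, 1) for k in range(9)]   # t = 0.0, 0.1, ..., 0.8
    return sum((1.0 - t) * (ginx[round(t + 0.1, 1)] - ginx[t]) for t in ts)
```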
## 4 Experimental results
In the following section, we propose to validate the GInX-Eval procedure and show its superiority over the widely used faithfulness metric. We support our claims with well-chosen experiments.
**Experimental setting.** Explainability methods were evaluated on two synthetic datasets, BA-2Motifs and BA-HouseGrid, three molecular datasets MUTAG, Benzene and BBBP, and MNISTbin. They all have ground-truth explanations available except for the BBBP dataset. We test two GNN models: GIN (Hu et al., 2020) and GAT (Velickovic et al., 2018) because they score high on the selected real-world datasets, with a reasonable training time and fast convergence. For the two synthetic datasets, we only use GIN since the GAT model does not give good accuracy. Further details on the datasets, GNN training parameters, and time are given in Appendix B. We compare non-generative methods, including the heuristic Occlusion (Zeiler and Fergus, 2014), gradient-based methods Saliency (Baldassarre and Azizpour, 2019), Integrated Gradient (Sundararajan et al., 2017), and Grad-CAM (Pope et al., 2019), and perturbation-based methods GNNExplainer (Ying et al., 2019), PGMExplainer (Vu and Thai, 2020) and SubgraphX (Yuan et al., 2021). We also consider generative methods: PGExplainer (Luo et al., 2020), GSAT (Miao et al., 2022), GraphCFE (CLEAR) (Ma et al., 2022), D4Explainer and RCExplainer (Bajaj et al., 2021). For more details on the differences between generative and non-generative explainers, we refer the reader to Appendix B.5. We compare those explainability methods to base estimators: Random, Truth, and Inverse. Random assigns random importance to edges following a uniform distribution. Truth estimates edge importance as the pre-defined ground-truth explanations of the datasets. The Inverse estimator corresponds to the worst-case scenario where edges are assigned the inverted ground-truth weights. If \(w_{i,j}\) is the ground-truth importance of the edge connecting nodes \(i\) and \(j\), the weight assigned by the Inverse estimator is equal to \(1-w_{i,j}\).
### 4.1 The Out-Of-Distribution Faithfulness Evaluation
The biggest limitation of the faithfulness metrics is the so-called OOD problem. The generated explanations are out-of-distribution, i.e. they lie outside the data distribution and "fool" the underlying predictor to change the original class, i.e., \(f(h(G))\neq f(G)\). Whereas, in factual explainability scenarios, we expect the explanatory graph \(h(G)\) to have the same class as the input graph \(G\), i.e., \(f(h(G))=f(G)\). Figure 2 illustrates the OOD problem: the extracted model embeddings of explanations of toxic molecules are more similar to the ones of non-toxic molecules. In this case, the model predicts the explanatory subgraphs to be non-toxic while they are valid toxic molecular
fragments. The model prediction is altered not necessarily because we keep only the important entities but also because the model lacks knowledge about these new explanatory graphs. Therefore, the faithfulness score, whose definition is based on the model predictions of explanations, does not entirely capture the quality of explanations and is ill-suited to evaluate explainability methods.
We compare the rankings of explainability methods according to the faithfulness evaluated on the two types of explanations, Hard Fidelity and Soft Fidelity respectively, and the accuracy score defined as the AUC score to stay consistent with previous work (Longa et al., 2022).

**Observation 1**_The faithfulness metric is not consistent with the accuracy score._ In Figure 3, there is a general misalignment in the rankings of explainers and base estimators on faithfulness or AUC score. For all datasets but Benzene, the Truth estimator, whose accuracy is maximal, has a small faithfulness score \(\sim 0.5\). For MNISTbin, Inverse is by far the best according to the faithfulness score while being the worst explainer by definition on the AUC score. For BA-2Motifs, Random has the highest faithfulness score but can only be 50% accurate by definition. Due to the OOD problem of faithfulness, we cannot decide if the model is fooled by the subgraphs induced by the most informative edges or if human-based and model-based evaluations disagree. Therefore, we cannot quantify the alignment between human and model explainability. As a result, we cannot rely on the evaluation with faithfulness to draw general conclusions on the explainability methods.

Figure 2: Illustration of the out-of-distribution problem: explanations of a toxic molecule lie closer to the non-toxic molecular representations. Graph embeddings were extracted after the readout layer of the pre-trained GIN model for the MUTAG dataset. We use both t-SNE and UMAP to project the embeddings into 2D representations. Both projection methods show the existence of out-of-distribution explanations.

Figure 3: Rankings of base estimators and non-generative explainability methods according to the faithfulness score computed on soft explanations, the faithfulness score on hard explanations, and the AUC score. The AUC ranking is only reported for datasets with ground-truth explanations. Baselines were evaluated on the full explanatory masks, while explainability methods were evaluated on the truncated explanations, keeping the top 10 important undirected edges.
**Observation 2**_The evaluation of the explainability methods with the faithfulness metric is not consistent across datasets._ Figure 3 shows no consensus on the top-3 methods across datasets on the soft faithfulness or hard faithfulness score. For instance, we observe that GradCAM and Occlusion have the highest Soft Fid scores for BA-HouseGrid, MUTAG, Benzene, and MNISTbin, but not for BA-2Motifs and BBBP where Truth, Random, and GNNExplainer outperform. For Hard Fid, the results are also very heterogeneous among the six datasets. Due to the OOD problem, we cannot conclude that those inconsistencies across datasets are related to differences inherent to the graph data itself, e.g., differences in graph topology, size, or density among the datasets.
**Observation 3**_The faithfulness metric is not consistent across edge removal strategies._ In Figure 3, the top-3 ranking for Soft Fid and Hard Fid is always different except for the Benzene dataset. This means that the edge removal strategy influences the model perception: the model does not predict labels only based on the information contained in the explanatory edges but also based on the structure of the given explanations. Because of the OOD problem, we cannot decide whether those inconsistencies come from the explainability methods themselves: methods that produce disconnected explanations are penalized by the hard removal strategy because the GNN model cannot propagate messages across the disconnected components.
### 4.2 Validation of GInX-Eval Procedure
We validate the GInX-Eval procedure on the BA-HouseGrid synthetic dataset because ground-truth explanations, i.e., house and grid motifs, are very well-defined and class-specific. In the binary classification setting, graphs are labeled 1 if they have grids and 0 if they have house motifs attached to the main Barabasi graph. We test three explainability baselines: the Random explainer that assigns values in \([0,1]\) following a uniform distribution, the Truth that assigns ground-truth explanations, and the Inverse estimator that returns the inverse ground-truth explanations and is, therefore, the worst estimator possible.
On Figure 4, GInX-Eval distinguishes the three methods because we observe a sharp decrease of the Truth explainer after 10% edge removal, while the Inverse estimator does not degrade the model performance, and the Random baseline starts decreasing after 20% of the edges are removed. Without re-training, all base importance estimators lead to a model performance degradation. Therefore, evaluating without retraining the model cannot reflect the true explainability power of the methods.
### 4.3 Evaluating with GInX-Eval
#### 4.3.1 Overview
GInX-Eval evaluates to what extent removing explanatory edges degrades the model accuracy. We adopt the hard selection strategy to remove edges. Even if conclusions are similar for both selection strategies (see Appendix A.3 and C.2), the degradation is of the order of \(10^{-1}\) with hard selection versus \(10^{-2}\) for soft selection. For visualization purposes, we prefer conveying here results with the hard selection. We refer the reader to Appendix C.2 for additional results with the soft selection.
Figure 4: Comparison between not fine-tuning the GNN model and GInX-Eval on the BA-HouseGrid dataset. Without fine-tuning, the model’s performance also decreases for the worst estimator Inverse where uninformative edges are removed first, preventing a correct evaluation of explainability methods. However, for GInX-Eval where the model is fine-tuned on modified datasets, we observe no test accuracy degradation for the worst-case estimator Inverse.

Figure 5 shows how the GInX score increases when we remove a fraction \(t\in[0.1,...,0.9]\) of the most important edges according to multiple explainability methods. For clarity, we choose to display a smaller set of explainability methods. For more methods, we refer the reader to Appendix C.1. We first observe that model performance is remarkably robust to graph modification for the four real-world datasets, with a maximum growth of the GInX score of 30% observed for the Benzene dataset. For the synthetic datasets, removing many edges leads to a random assignment of labels by the model. In real-world datasets, GNN models might be able to capture high-level information even with absent connections.
We note a particularly small increase of the GInX score for MNISTbin, i.e., in the order of \(10^{-2}\). For this dataset, the GNN model is robust to edge modification. After removing most of the edges from the input graph, the model retains most of the predictive power. The reason might be that node and edge features are more important for the prediction than the graph structure itself for those two datasets.
#### 4.3.2 GInX-Eval of Base Estimators
_Is the ground-truth explanation meaningful to the model?_ The Truth and the Inverse edge importance estimators are evaluated on all datasets except BBBP which has no ground-truth available. We observe in figure 5 that the GInX score stays constant for Inverse and drops significantly for Truth. We conclude that the explanations generated with Inverse have only uninformative edges for the model, while the ground-truth edges contain crucial information for making correct predictions. GInX-Eval is a useful tool to validate the quality of provided ground-truth explanations of published graph datasets.
_Is a random assignment of edge importance informative to the model?_ For all datasets except Benzene, the Random baseline leads to a similar degradation as the Truth estimator in Figure 5. There are two reasons for this. First, random explanations contain a few edges present in the ground-truth explanation. Removing just these few edges makes the GInX score increase sharply because of the strong correlations that exist among informative edges. Second, true informative edges might have correlations to some other random edges, so removing edges randomly affects the capacity of the model to correctly learn important patterns in the graph.
_Are explanations obtained with graph explainability methods better than a random guess?_ We observe that a random edge modification removes more informative edges than GradCAM, Integrated Gradient, Occlusion, RCExplainer, and PGExplainer. Therefore, those methods are not better than Random.
Figure 5: GInX scores of a fine-tuned GIN model on graphs with increasing fraction of removed edges. The removed edges are the most important based on explainability methods, and new input graphs are obtained with the _hard_ selection, i.e., explanatory edges are strictly removed from the graph so that the number of edges and nodes is reduced. For more methods, see Appendix C.1.
GInX-Eval identifies how informative ground-truth explanations are for the model, thus assessing the agreement between model- and human-based explanations, and draws attention to how meaningful random explanations are to the model.
#### 4.3.3 GInX-Eval of Explainability methods
_What fraction \(t\) of removed edges should I fix to compare the GInX scores of multiple explainability methods?_ Methods produce explanations of different sizes: some methods constrain their explanations to be sparse, while others assign importance weight to almost all edges in the graph. Figure 6 indicates the heterogeneity of masks generated by different explainability methods. While Truth, PGMExplainer, SubgraphX, and GraphCFE constrain their explanations to be sparse, the rest of the methods include most of the edges in the explanations, assigning a different importance weight to each edge.
The _critical threshold_\(t_{m}^{e}\) of a method \(m\) is the ratio of non-zero values in the masks. Beyond this critical threshold, we are not evaluating the method anymore but a random assignment of edge importance weights. Therefore, it is crucial to compare methods at a threshold \(t\) smaller than the minimum of the methods' critical thresholds. To compare methods, we propose to define the dataset's _optimal threshold_\(t^{*}\) such that \(t^{*}=\min_{m\in\mathcal{M}}\{t_{m}^{e}\}\), where \(\mathcal{M}\) denotes the set of explainability methods. The optimal threshold corresponds to the threshold closest to the average mask sparsity of ground-truth explanations. In other words, we take as reference the size of ground-truth explanations as the optimal number of informative edges in the graph and compare methods at this sparsity threshold. We compute the optimal thresholds for the six datasets and report them in Table 1. Only the BBBP dataset has no ground-truth explanation available, so we set \(t^{*}=0.3\) to have human-intelligible sparse explanations.
| Dataset | Truth sparsity | Optimal threshold |
| --- | --- | --- |
| BA-2Motifs | 0.216 | 0.3 |
| BA-HouseGrid | 0.065 | 0.1 |
| Benzene | 0.175 | 0.2 |
| MNISTbin | 0.235 | 0.3 |
| MUTAG | 0.039 | 0.1 |

Table 1: Truth mask sparsity values for each dataset and the deduced optimal thresholds.
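The critical and optimal thresholds defined above can be computed directly from the explanatory masks; the sketch below is an illustration only, and averaging critical thresholds over the graphs of a dataset is an assumption we make here.

```python
import numpy as np

def critical_threshold(mask, tol=1e-8):
    """Fraction of non-zero entries in one explanatory edge mask."""
    return float((np.abs(np.asarray(mask, dtype=float)) > tol).mean())

def optimal_threshold(masks_per_method):
    """t* = min over methods of their (average) critical thresholds.

    masks_per_method: dict mapping a method name to its list of edge masks.
    """
    crit = {m: np.mean([critical_threshold(mk) for mk in masks])
            for m, masks in masks_per_method.items()}
    return min(crit.values()), crit
```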
Figure 6: Mask sparsity of different explainability methods. A high sparsity indicates an explanatory mask with many zeros and small pre-processing explanatory subgraphs.
Figure 7: GInX scores at the optimal thresholds. For the BBBP dataset, we define an arbitrary optimal threshold \(t=0.3\). For the other datasets, the optimal threshold is estimated based on the explanatory mask sparsity generated by the Truth estimator.

Figure 7 displays the GInX scores of explainability methods at the optimal threshold defined for each dataset. Except for the Benzene dataset, we observe that gradient-based methods and Occlusion have the smallest GInX scores at the optimal thresholds. Gradient-based methods select less informative edges than GNNExplainer, PGMExplainer, and generative methods. This contradicts
observations made in figure 3 where gradient-based methods and Occlusion are always better than GNNExplainer and PGMExplainer. GInX-Eval unveils new insights on gradient-based methods that go against recent studies (Yuan et al., 2023; Agarwal et al., 2022). On the other hand, GNNExplainer, PGMExplainer, GSAT, and D4Explainer have competitive performance with Random and Truth baselines. This proves that generative methods are not necessarily better at capturing meaningful information than non-generative methods.
The GInX score at the optimal threshold helps filter out uninformative methods including gradient-based methods and Occlusion, and shows that methods can generate informative explanations independent of their generative nature.
#### 4.3.4 EdgeRank score of explainability methods
We use the EdgeRank score to evaluate the capacity of explainers to rank edges correctly according to their true informativeness for the model. In Figure 8, we observe that gradient-based methods and Occlusion are not good at correctly ordering edges by their importance. This is another reason why they should not be used to generate meaningful explanations. We also observe that RCExplainer and PGExplainer, which perform well on the GInX score, have a low edge ranking power, except for the BA-HouseGrid dataset. These two methods can capture the most informative edges but cannot decide what the relative importance of those important edges is. Finally, PGMExplainer, GNNExplainer, GraphCFE, and D4Explainer have both a high GInX score (see Figure 5) and a high EdgeRank score, making them the best choice when both informativeness and edge-ranking power are required.
## 5 Discussion
This work discusses the pitfalls of faithfulness, one of the most popular metrics in xAI, and the problem of out-of-distribution explanations. Overcoming these limitations, our evaluation procedure GInX-Eval measures the informativeness of explainability methods and their ability to accurately rank edges by their importance for the GNN model. Observing the prediction change, GInX-Eval assesses the impact of removing the generated explanations from the graphs. It gets around the issue of OOD explanations by fine-tuning the GNN model. GInX-Eval is a useful tool to validate the quality of the provided ground-truth explanations. It also demonstrates the poor informativeness of gradient-based methods, contradicting results from recent studies (Yuan et al., 2023; Agarwal et al., 2022) and reproduced in this paper. Combining the GInX and EdgeRank scores, we can filter out uninformative explainability methods and find the optimal ones. Because GInX-Eval relies on a fine-tuning strategy of pre-trained black-box models, our method can easily be used for models only accessible via API calls, including large language models. Due to the computation cost of retraining, GInX-Eval is not meant to be used systematically but is designed as a validation tool for new metrics. This work paves the way for developing approaches that conform with both human- and model-based explainability.
Figure 8: Edge ranking power of explainability methods. |
2309.09303 | On adaptive kernel intensity estimation on linear networks | In the analysis of spatial point patterns on linear networks, a critical
statistical objective is estimating the first-order intensity function,
representing the expected number of points within specific subsets of the
network. Typically, non-parametric approaches employing heating kernels are
used for this estimation. However, a significant challenge arises in selecting
appropriate bandwidths before conducting the estimation. We study an intensity
estimation mechanism that overcomes this limitation using adaptive estimators,
where bandwidths adapt to the data points in the pattern. While adaptive
estimators have been explored in other contexts, their application in linear
networks remains underexplored. We investigate the adaptive intensity estimator
within the linear network context and extend a partitioning technique based on
bandwidth quantiles to expedite the estimation process significantly. Through
simulations, we demonstrate the efficacy of this technique, showing that the
partition estimator closely approximates the direct estimator while drastically
reducing computation time. As a practical application, we employ our method to
estimate the intensity of traffic accidents in a neighbourhood in Medellin,
Colombia, showcasing its real-world relevance and efficiency. | Jonatan A. González, Paula Moraga | 2023-09-17T15:33:42Z | http://arxiv.org/abs/2309.09303v1 | # On adaptive kernel intensity estimation on linear networks
###### Abstract
In the analysis of spatial point patterns on linear networks, a critical statistical objective is estimating the first-order intensity function, representing the expected number of points within specific subsets of the network. Typically, non-parametric approaches employing heating kernels are used for this estimation. However, a significant challenge arises in selecting appropriate bandwidths before conducting the estimation. We study an intensity estimation mechanism that overcomes this limitation using adaptive estimators, where bandwidths adapt to the data points in the pattern. While adaptive estimators have been explored in other contexts, their application in linear networks remains underexplored. We investigate the adaptive intensity estimator within the linear network context and extend a partitioning technique based on bandwidth quantiles to expedite the estimation process significantly. Through simulations, we demonstrate the efficacy of this technique, showing that the partition estimator closely approximates the direct estimator while drastically reducing computation time. As a practical application, we employ our method to estimate the intensity of traffic accidents in a neighbourhood in Medellin, Colombia, showcasing its real-world relevance and efficiency.
**Keywords:** Intensity function; Spatial Point patterns; Linear networks; Variable bandwidth
## 1 Introduction
Spatial point processes on linear networks are a variation of the spatial point process framework that accounts for the geometry of a linear network in which events occur (Ang
et al., 2012). In this framework, events are characterised by their spatial location along the network. This spatial information can be used to understand the relationships between events and the network topology and to model how the linear network influences the occurrence of events.
For example, consider the analysis of traffic accidents on a highway. The occurrence of an accident is not only determined by factors such as driver behaviour and weather conditions but also by the spatial characteristics of the road itself, such as its curves, intersections, and lane configurations. Using a point process on linear spatial networks makes it possible to model the occurrence of accidents as a function of the location and investigate how specific road features affect the likelihood of an accident (Baddeley et al., 2015).
Point processes on linear spatial networks have numerous applications in transportation engineering, epidemiology, and ecology, where events occur on linear features such as roads, rivers, and pipelines (Baddeley et al., 2021). Examples of linear networks include maps of railways, rivers, electrical wires, nerve fibres, airline routes, irrigation canals, geological faults, or soil cracks. Data points can be associated with traffic accidents, street crimes, roadside trees, retail stores, roadside kiosks, insect nests, neuroanatomical features, or sample points along a stream. Even John Snow's pioneering work on cholera cases in London can be considered a point pattern on a linear network representing the district streets (Baddeley et al., 2021). However, the statistical analysis of network data presents significant challenges. The non-homogeneity of network data creates computational and geometrical complexities, leading to new methodological problems and potential errors.
Moreover, the spatial scales of network data can vary widely, further complicating the analysis. Such challenges pose a significant and far-reaching problem to the classical methodology of spatial statistics based on stationary processes, which is mainly inapplicable to network data. Nevertheless, analysing point patterns on linear networks has gained increased attention from GIS and spatial statistics communities in recent years. In spatial ecology and spatial statistics, geostatistical techniques have been focused on analysing spatial variables on a network of rivers or streams.
Candelaria is an emblematic neighbourhood within Medellin, Colombia. The traffic patterns within Candelaria address mobility demands, often manifesting as a rhythmic interweaving of vehicular and pedestrian activities. The data representing Candelaria's road accidents (a point pattern denoted by \(X_{L_{0}}\) hereinafter, where \(L_{0}\) represents Candelaria's road network), as depicted in Figure 1(a), demonstrate significant spatial disparities in the distribution of data points. In such situations, the conventional approach of employing fixed-bandwidth kernel estimation often proves inadequate. Let us define \(X_{L_{1}}\) and \(X_{L_{2}}\) as two subsets of \(X_{L_{0}}\), such that \(X_{L_{2}}\subset X_{L_{1}}\subset X_{L_{0}}\), where \(L_{1}\) and \(L_{2}\) are nested subsets of Candelaria's road network \(L_{0}\). Precisely, dense clusters of accidents occurring at certain intersections (see Figure 1(b), (c)) within the urban road network may be excessively smoothed out. In contrast, sparser accident occurrences in other areas of the road system may be insufficiently smoothed. To address this challenge, implementing adaptive kernel estimation, which allows for variable bandwidth, can yield considerably superior results (Abramson, 1982; Davies and Baddeley, 2018).
Furthermore, additional complexity arises from the fact that the road network itself
exhibits extensive spatial variation. This characteristic adds further difficulty to visually assessing the accident density per unit length of road. Moreover, this variation poses challenges from a computational standpoint due to the requirement of a much finer spatial resolution in certain intersections compared to straight-road segments. Consequently, accurately capturing and analyzing accident patterns necessitates a nuanced and adaptable approach that accounts for both the variability in accident concentrations and the spatial intricacies of the road network.
This paper's objectives are twofold: first, to expand the adaptive methodology for intensity estimation via kernels in the context of point patterns on networks, specifically focusing on the heat kernel; second, to extend a partition algorithm that makes the estimation feasible within reasonable timeframes. In addition, the method has been implemented in the R package kernstadapt (Gonzalez and Moraga, 2022).
Our paper is structured as follows: Section 2 lays down the foundational concepts of point patterns on linear networks. Section 3 delves into introducing and estimating the intensity function, employing various methodologies encompassing our central approach, the heat kernel. Moving forward, Section 4 introduces adaptive estimators for intensity estimation by kernels. Section 5 introduces the partition algorithm designed to expedite the computational process of adaptive estimators, accompanied by simulation demonstrations of our method's efficacy. In Section 6, we apply our developed technique to estimate the intensity of traffic accidents in Candelaria. Finally, in Section 7, we discuss our findings while presenting novel avenues for future research exploration.
Figure 1: (a): Traffic accidents (pink dots) in the road network of Candelaria’s neighbourhood of Medellin, Colombia (Comuna 10) (\(X_{L_{0}}\)) from 2014 to 2019, a total of 76610 data points (traffic accidents). (b): An enlargement from Candelaria’s east part (\(X_{L_{1}}\)). (c): A second enlargement of Candelaria’s zone (\(X_{L_{2}}\)).
## 2 Fundamentals: Point processes on linear networks
A _linear network_ can be defined as the union of a finite number of line segments, denoted by \(L\). Let \(l_{i}\) be a line segment; then, it takes the parametrised form
\[l_{i}=[\mathbf{u}_{i},\mathbf{v}_{i}]=\left\{w:w=t\mathbf{u}_{i}+(1-t)\mathbf{v }_{i},0\leq t\leq 1\right\},\]
where \(\mathbf{u}_{i}\) and \(\mathbf{v}_{i}\) are the endpoints. It is assumed that the intersection between two different segments is empty or an endpoint of both segments. The total length of a subset \(B\) of the linear network is denoted by \(|B|\).
A _point pattern_ on a linear network is a finite unordered set \(X=\left\{\mathbf{x}_{i}\right\}_{i=1}^{n}\), where each point represents a location on the linear network. An observed point pattern is a realisation of a random point process \(\mathbf{X}\) on the linear network. We assume that the number of points is finite, has finite mean and variance, and there are no multiple coincident points (Daley and Vere-Jones, 2003; Baddeley et al., 2021).
We can measure distances in a linear network through the _shortest path_. A _path_ between two points \(\mathbf{u}\) and \(\mathbf{v}\) in \(L\) is a sequence of points \(\left\{\mathbf{y}_{j}\right\}_{j=0}^{m}\), such that \(\mathbf{y}_{0}=\mathbf{u}\) and \(\mathbf{y}_{m}=\mathbf{v}\), and every line segment \(\left[\mathbf{y}_{j},\mathbf{y}_{j+1}\right]\subset L\), for \(j=0,\ldots,m-1\). If \(||\cdot||\) denotes the Euclidean distance, the length of the path \(\left\{\mathbf{y}_{j}\right\}_{j=0}^{m}\) is given by
\[\sum_{j=0}^{m-1}||\mathbf{y}_{j+1}-\mathbf{y}_{j}||.\]
The _shortest path distance_ \(d_{\ell}(\mathbf{u},\mathbf{v})\) between \(\mathbf{u}\) and \(\mathbf{v}\) in a linear network \(L\) is defined as the minimum of the lengths of all paths from \(\mathbf{u}\) to \(\mathbf{v}\). The distance is infinite if there is no path from \(\mathbf{u}\) to \(\mathbf{v}\).
In the shortest-path metric, a disc with radius \(r>0\) and centre \(\mathbf{u}\) in the network \(L\) is the set of all points lying no more than a distance \(r\) from the location \(\mathbf{u}\), in the shortest path distance; formally,
\[b_{L}(\mathbf{u},r)=\left\{\mathbf{v}\in L:d_{\ell}(\mathbf{u},\mathbf{v})\leq r\right\}.\]
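As an illustration of these definitions, the shortest-path distance can be computed by representing the network as a weighted graph and running a standard shortest-path algorithm; the sketch below uses the networkx library and treats segment endpoints as vertices, so data points lying in the interior of a segment would first have to be inserted as extra vertices (an assumption of this simplified example).

```python
import networkx as nx

def build_network(segments):
    """Build a weighted graph from a list of segments ((x1, y1), (x2, y2))."""
    G = nx.Graph()
    for u, v in segments:
        length = ((u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2) ** 0.5
        G.add_edge(u, v, length=length)
    return G

def shortest_path_distance(G, u, v):
    """d_ell(u, v): length of the shortest path, or infinity if none exists."""
    try:
        return nx.shortest_path_length(G, source=u, target=v, weight="length")
    except nx.NetworkXNoPath:
        return float("inf")

# Example: a Y-shaped network with three segments meeting at the origin.
net = build_network([((0, 0), (1, 0)), ((0, 0), (0, 1)), ((0, 0), (-1, -1))])
print(shortest_path_distance(net, (1, 0), (0, 1)))   # 2.0, going through (0, 0)
```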
## 3 Intensity function
Given the data points of a point pattern \(X\) in a network \(L\) conceived as a realisation of the point process \(\mathbf{X}\), we want to estimate the intensity function \(\lambda(\mathbf{u})\). Let \(N(B)\) denote the number of points of \(\mathbf{X}\cap B\) in a subset \(B\) of the linear network. Then the point process has intensity function \(\lambda(\mathbf{u}),\mathbf{u}\in L\) if, for all closed subsets \(B\subseteq L\),
\[\mathbb{E}\left[N(B)\right]=\int_{B}\lambda(\mathbf{u})\mathrm{d}_{\ell} \mathbf{u}, \tag{3.1}\]
where the integration is done with respect to arc length on the linear network. The intensity function may be interpreted as the expected number of points per unit length.
Campbell's formula (Daley and Vere-Jones, 2007, section 13.1, p. 269) applies to linear networks and is given by
\[\mathbb{E}\left[\sum_{\mathbf{u}_{i}\in\mathbf{X}}h(\mathbf{u}_{i})\right]=\int_ {L}h(\mathbf{u})\lambda(\mathbf{u})\mathrm{d}_{\ell}\mathbf{u},\]
where \(h\) is a real, measurable function.
### 3.1 Estimators
A straightforward first step in analysing spatial point patterns is to estimate the intensity function using kernels (Baddeley et al., 2015). This approach carries over to linear networks, but it requires several additional considerations (McSwiggan et al., 2017). We start by considering a kernel estimator of the form
\[\hat{\lambda}(\mathbf{u})=\sum_{i=1}^{n}K(\mathbf{u}|\mathbf{u}_{i}),\quad \mathbf{u}\in L, \tag{3.2}\]
where \(K\) is a smoothing kernel on the real line that must satisfy some properties. It should be non-negative, and it should have a total mass of 1, i.e., \(\int_{L}K(\mathbf{u}|v)\mathrm{d}_{\ell}\mathbf{v}=1\) for all \(\mathbf{u}\in L\). Then, the mass of the intensity estimator must be \(n\), the total number of points. A classical estimator of the type given in Eq.(3.2), where \(K(\mathbf{u}|\mathbf{u}_{i})=\kappa(d_{\ell}(\mathbf{u},\mathbf{u}_{i}))\), and where \(\kappa\) is a smoothing kernel on the real line will not preserve the mass (McSwiggan et al., 2017). Therefore, other options must be considered to reduce the bias.
#### 3.1.1 Edge-corrected estimators
One way to correct the bias of the "natural estimator" is considering _edge correction factors_ intended for preserving the mass (Moradi et al., 2018; Baddeley et al., 2021). For a point pattern \(X\) on a linear network, consider an edge correction function given by
\[c_{L}(\mathbf{u}):=\int_{L}\kappa(d_{\ell}(\mathbf{u},\mathbf{v}))\mathrm{d}_ {\ell}\mathbf{v}.\]
Then, an intensity estimator \(\hat{\lambda}^{\mathrm{U}}(\mathbf{u})\) based on the uniform edge correction (Diggle, 1985) and one based on Jones's (1993) edge correction may be defined as
\[\hat{\lambda}^{\mathrm{U}}(\mathbf{u})=\frac{1}{c_{L}(\mathbf{u})}\sum_{i=1}^ {n}\kappa(d_{\ell}(\mathbf{u},\mathbf{u}_{i})),\quad\text{and}\quad\hat{ \lambda}^{\mathrm{JD}}(\mathbf{u})=\sum_{i=1}^{n}\frac{\kappa(d_{\ell}( \mathbf{u},\mathbf{u}_{i}))}{c_{L}(\mathbf{u}_{i})}.\]
The computation of the edge-correction factors \(\hat{\lambda}^{\mathrm{U}}(\mathbf{u})\) and \(\hat{\lambda}^{\mathrm{JD}}(\mathbf{u})\) can be computationally burdensome (Baddeley et al., 2021). Given this significant computational cost, exploring alternative methods when seeking a rapid estimation is advisable.
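The following sketch illustrates how the two edge-corrected estimators could be approximated in practice; the Gaussian choice of \(\kappa\), the discretisation of the network into weighted sample points for the integral defining \(c_{L}\), and the `dist` shortest-path function are all assumptions of this illustration. It also makes the cost visible: every evaluation of \(c_{L}\) requires a pass over the whole network.

```python
import numpy as np

def gaussian_kernel(d, bw):
    return np.exp(-0.5 * (d / bw) ** 2) / (bw * np.sqrt(2.0 * np.pi))

def corrected_intensity(u, data, grid, weights, dist, bw, correction="uniform"):
    """Edge-corrected kernel intensity estimate at a location u on the network.

    data:    data-point locations on the network.
    grid:    fine set of sample locations covering the network.
    weights: length of network represented by each grid point (quadrature weights).
    dist:    function returning the shortest-path distance between two locations.
    """
    def c_L(x):   # numerical approximation of c_L(x) = int_L kappa(d(x, v)) dv
        return sum(w * gaussian_kernel(dist(x, v), bw) for v, w in zip(grid, weights))

    if correction == "uniform":      # Diggle's uniform correction
        return sum(gaussian_kernel(dist(u, xi), bw) for xi in data) / c_L(u)
    # Jones-Diggle correction: each data point carries its own correction factor.
    return sum(gaussian_kernel(dist(u, xi), bw) / c_L(xi) for xi in data)
```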
#### 3.1.2 Equal-split kernel estimators
Okabe and Sugihara (2012), Chap. 9, and references therein, summarise kernel density estimators on a general network. Their work investigated computational algorithms that
redistribute the mass of a kernel \(\kappa\) from the real line onto the network. They identified desirable properties for a kernel estimator and found that the so-called _equal-split discontinuous_ and _equal-split continuous_ rules satisfied many of them.
The rule known as _continuous_ possesses remarkable features such as symmetry, mass preservation, and unbiasedness under the assumption of true uniform intensity. On the other hand, the edge-corrected estimator \(\hat{\lambda}^{\mathrm{U}}(\mathbf{u})\) is unbiased but does not preserve mass, whereas \(\hat{\lambda}^{\mathrm{JD}}(\mathbf{u})\) preserves mass but is not unbiased, and neither of them is symmetric. Regrettably, implementing the "continuous" rule algorithm is exceedingly slow (McSwiggan et al., 2017). While the _discontinuous_ rule is faster, it is associated with less favourable characteristics as a non-continuous estimate of the intensity (Okabe and Sugihara, 2012, Sec. 9.3.2). Both methods utilise a kernel on the real line with bounded support on a network, which precludes using the Gaussian kernel. The computational burden increases exponentially with the bandwidth, making automatic bandwidth selection computationally impractical (Baddeley et al., 2021).
The algorithm _equal-split discontinuous_ generates a replica of the kernel \(\kappa\) for each point of \(X\). For locations \(\mathbf{u}\) that share a line segment with the origin \(\mathbf{u}_{i}\), the kernel estimate takes the value \(\kappa(d_{\ell}(\mathbf{u},\mathbf{u}_{i}))\). Whenever the network branches out, the residual tail mass of the kernel is distributed uniformly among the new line segments, ensuring that the total mass is conserved.
When a network lacks loops, let \(\mathbf{u}\) be a location of the network \(L\), \(\mathbf{u}_{i}\) a data point, and \(\{m_{j}\}_{j=1}^{p}\) be the degrees (the number of edges incident to a vertex) of the vertices along the shortest path from \(\mathbf{u}_{i}\) to \(\mathbf{u}\), excluding \(\mathbf{u}\) and \(\mathbf{u}_{i}\). Then, the equal-split discontinuous kernel is given by
\[K^{D}(\mathbf{u}|\mathbf{u}_{i})=\frac{\kappa(d_{\ell}(\mathbf{u},\mathbf{u}_{ i}))}{\prod_{j=1}^{p}(m_{j}-1)}.\]
When there are possible loops, let \(\pi^{*}=(\mathbf{u}_{i},\mathbf{v}_{1},\ldots,\mathbf{v}_{P-1},\mathbf{u})\) denote the paths, from \(\mathbf{u}_{i}\) to \(\mathbf{u}\), of length less than or equal to \(h\), the support radius (bandwidth) of the kernel \(\kappa\). These paths are _non-reflecting_, i.e., \(\mathbf{e}_{i}\neq\mathbf{e}_{i+1}\), with \(\mathbf{e}_{i}\) as the edge containing \(\mathbf{v}_{i-1}\) and \(\mathbf{v}_{i}\). Let \(\ell(\pi)\) denote the length of the path, and
\[a^{D}(\pi):=\frac{1}{\prod_{j=1}^{P-1}(m_{j}-1)},\]
where \(m_{j}\) is the degree of \(\mathbf{v}_{j}\). Thus, the equal-split discontinuous kernel is given by
\[K^{D}(\mathbf{u}|\mathbf{u}_{i})=\sum_{\pi^{*}}\kappa(\ell(\pi))a^{D}(\pi).\]
The _equal-split continuous_ kernel estimator corresponds to modifying the previous algorithm to generate a continuous function on the network (Okabe and Sugihara, 2012; McSwiggan et al., 2017; Baddeley et al., 2021). This modified approach extends to paths of length less than \(h\), including those that reflect at vertexes. When a path arrives at a vertex of degree \(m\), it encounters \(m-1\) outgoing branches and one incoming branch. A weight of \(2/m\) is assigned to each outgoing branch, and a weight of \(2/m-1\) to the incoming branch. The kernel achieves continuity, and due to the assumption of monotonicity of the kernel \(\kappa\), the resulting function maintains non-negative values.
The equal-split continuous kernel estimator can then be written by
\[K^{C}(\mathbf{u}|\mathbf{u}_{i})=\sum_{\pi}\kappa(\ell(\pi))a^{C}(\pi),\]
where
\[a^{C}(\pi):=\prod_{j=1}^{P-1}\left(\frac{2}{\deg(\mathbf{v}_{j})}-\delta_{j} \right),\]
and where \(\delta_{1}=\delta_{P}=0\), and \(\delta_{j}=\mathbf{1}\left\{\mathbf{e}_{j}=\mathbf{e}_{j-1}\right\},j=2,\ldots, P-1\).
#### 3.1.3 Heat kernel
McSwiggan et al. (2017) introduced a statistically rigorous kernel estimator for a linear network. They achieved this by leveraging the relationship between kernel smoothing and diffusion, as established by previous works such as Chaudhuri and Marron (2000) and Botev et al. (2010). When applied to a network, the heat kernel is equivalent to the Gaussian kernel, representing the function that describes the diffusion or spread of heat across the network, similar to its role in classical physics.
We first consider the case of the real line. Brownian motion on an infinite straight line can be described as a stochastic process, denoted as \(\{X(t)\}_{t\geq 0}\). In this process, the increments \(X(t_{j})-X(t_{j-1}),j=2,\ldots,k\), between consecutive time points \(0\leq t_{1}<t_{2}<\ldots<t_{k}\), are independent Gaussian random variables with mean zero and variances \(t_{j}-t_{j-1}\). If we consider a Brownian motion \(\{X(t)\}_{t\geq 0}\) that starts at a specific position \(x_{0}\), the probability density of \(X(t)\) later, say at time \(t\), can be characterised by a Gaussian distribution with mean \(x_{0}\) and a variance \(\sigma^{2}=t\). In the case of a Brownian motion that starts at a random position \(x_{0}\) with probability density function \(p(x)\), the probability density of \(X(t)\) is given by
\[f_{t}(x)=\int_{-\infty}^{\infty}p(u)\kappa_{t}(x-u)\mathrm{d}u,\]
where \(\kappa_{t}(\cdot)\) represents the Gaussian probability density function with mean zero and variance \(t\). The function \(f_{t}(x)\) also represents the solution to the traditional _heat equation_ given by
\[\frac{\partial f}{\partial t}=\beta\frac{\partial^{2}f}{\partial x^{2}},\]
with _thermal diffusivity constant_\(\beta\) and initial condition \(f_{0}(x)=p(x)\). The solution to the heat equation can be represented by a kernel operator with kernel \(\kappa_{t}\). In the case of the _heat kernel_ on the real line, it is given by \(\kappa_{t}\). Consequently, the standard Gaussian kernel estimator for a set of data points \(\{x_{i}\}\) on the real line can be obtained by summing the values of the heat kernel \(\kappa_{t}(x-x_{i})\).
On the other hand, Brownian motion on a linear network is a specific type of diffusion occurring on a graph. It is a continuous-time Markov process, denoted as \(\{X(t)\}_{t\geq 0}\), equivalent to one-dimensional Brownian motion on each network segment (see, e.g., McSwiggan et al., 2017, and references therein). Whenever the process reaches a vertex on the network with degree \(m\), it has an equal likelihood of continuing along any of the
edges connected to that vertex (including the edge it arrived from). Notably, if it reaches a terminal endpoint, it is instantaneously reflected (McSwiggan et al., 2017).
The probability density function of Brownian motion on a linear network at time \(t\), denoted as \(f_{t}(\mathbf{u}),\mathbf{u}\in L\), obeys the classical heat equation on the network. This means that the analogous form of the classical heat equation holds at any position \(\mathbf{u}\) that is not a vertex. It is important to note that the second spatial derivative is well-defined, irrespective of the chosen local coordinates on the line segment. Define the first spatial derivative of \(f\) at \(\mathbf{v}\) in the direction towards \(\mathbf{v}^{\prime}\) as
\[\left.\frac{\partial f}{\partial\mathbf{u}[\mathbf{v},\mathbf{v}^{\prime}]}\right|_{\mathbf{v}}:=\lim_{h\downarrow 0}\frac{f(\mathbf{v}+h(\mathbf{v}^{\prime}-\mathbf{v}))-f(\mathbf{v})}{h\,||\mathbf{v}^{\prime}-\mathbf{v}||}.\]
At any vertex \(\mathbf{v}\), the density \(f_{t}\) is continuous and holds a property akin to the conservation of heat flow that can be expressed as follows,
\[\sum_{\mathbf{v}^{\prime}\sim\mathbf{v}}\left.\frac{\partial f}{\partial \mathbf{u}[\mathbf{v},\mathbf{v}^{\prime}]}\right|_{\mathbf{v}}=0, \tag{3.3}\]
where the sum is indexed over all edges \([\mathbf{v},\mathbf{v}^{\prime}]\) incident at vertex \(\mathbf{v}\). Assume that we have a given initial condition \(f_{0}(\mathbf{u})=p(\mathbf{u})\) and the heat kernel \(\kappa_{t}(\mathbf{u}|\mathbf{s})\) on the network. A solution to the heat equation and the conservation condition for heat flow on the linear network given in Eq. (3.3) can be represented as a kernel operator as follows,
\[f_{t}(\mathbf{u})=\int_{L}p(\mathbf{s})\kappa_{t}(\mathbf{u}|\mathbf{s})\mathrm{ d}_{\ell}\mathbf{s}.\]
In intuitive terms, \(\kappa_{t}(\mathbf{u}|\mathbf{s})\mathrm{d}_{\ell}\mathbf{u}\) represents the probability that a Brownian motion on the network, initiated at position \(\mathbf{s}\) at time \(0\), will land within the tiny interval of length \(\mathrm{d}_{\ell}\mathbf{u}\) around the point \(\mathbf{u}\) at time \(t\). Alternatively, we can interpret \(\kappa_{t}(\mathbf{u}|\mathbf{s})\) as the transfer function that connects the temperature at location \(\mathbf{s}\) at time \(0\) to the temperature at location \(\mathbf{u}\) at time \(t\). It is worth noting that condition (3.3) implies that the first spatial derivative of \(f\) must be zero at all terminal endpoints. In a physical sense, this assumption indicates that the network is thermally isolated, preventing heat from escaping, including from the terminal endpoints (McSwiggan et al., 2017).
Estimating the intensity through a _diffusion intensity estimator_, \(\hat{\lambda}^{H}(\mathbf{u})\), can be achieved mathematically by employing a sum of heat kernels, i.e.,
\[\hat{\lambda}^{H}(\mathbf{u})=\sum_{i=1}^{n}\kappa_{t}(\mathbf{u}|\mathbf{u}_ {i}),\quad\mathbf{u}\in L,\]
where the time parameter is the squared bandwidth, i.e., \(t=\sigma^{2}\). Note that this mathematical expression should not be directly employed in computations. Instead, the diffusion estimator can be obtained by numerically solving the time-dependent heat equation up to the desired time. Numerical solutions of the heat equation are significantly faster (McSwiggan et al., 2017; Baddeley et al., 2021), often by several orders of magnitude, compared to path-enumeration algorithms. The computational time increases quadratically with the bandwidth.
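To illustrate the numerical route, the sketch below solves the heat equation on a single isolated segment with insulated endpoints using an explicit finite-difference scheme; it is only a one-segment illustration of the idea (taking \(\beta=1/2\) so that the solution at time \(t=\sigma^{2}\) matches a Gaussian kernel with bandwidth \(\sigma\)), not the full network solver of McSwiggan et al. (2017), which must additionally enforce the flow-conservation condition (3.3) at every vertex.

```python
import numpy as np

def diffusion_estimate_segment(data, seg_len, bandwidth, n_grid=200):
    """Heat-kernel (diffusion) intensity estimate on one isolated segment.

    Solves df/dt = (1/2) d^2 f / dx^2 up to time t = bandwidth**2 with an
    explicit finite-difference scheme and reflecting (insulated) endpoints.
    data: positions of the data points along the segment, in [0, seg_len].
    """
    dx = seg_len / n_grid
    x = (np.arange(n_grid) + 0.5) * dx                 # cell centres
    f = np.zeros(n_grid)
    for xi in data:                                    # initial condition: one atom per point
        f[min(int(xi / dx), n_grid - 1)] += 1.0 / dx
    t_end = bandwidth ** 2
    dt = 0.4 * dx ** 2                                 # satisfies the explicit-scheme stability condition
    n_steps = max(1, int(np.ceil(t_end / dt)))
    dt = t_end / n_steps
    for _ in range(n_steps):
        padded = np.concatenate(([f[0]], f, [f[-1]]))  # reflecting boundaries
        f = f + 0.5 * dt / dx ** 2 * (padded[2:] - 2.0 * f + padded[:-2])
    return x, f                                        # total mass len(data) is preserved
```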
## 4 Adaptive estimators
Adaptive estimators have been superficially studied in the case of linear networks (Baddeley et al., 2021). These have only been proposed in the case of corrected kernel sums (Rakshit et al., 2019); i.e., sums of kernel functions.
In this work, we want to go further and provide techniques for adaptive estimation that cover all cases, with particular emphasis on the heat kernel estimator. The general rationale is the following: a kernel estimator of the form given in Eq. (3.2) usually is equipped with a bandwidth \(\epsilon\). This bandwidth may be a positive constant representing the standard deviation of the kernel involved or a variance-covariance matrix in the anisotropic case (Baddeley et al., 2015). We consider the isotropic case, but instead of constant, the bandwidth will be a spatially varying function \(\epsilon(\mathbf{u}),\mathbf{u}\in L\). Therefore, the intensity estimator takes the general form
\[\hat{\lambda}_{\epsilon}(\mathbf{u})=\sum_{i=1}^{n}K_{\epsilon(\mathbf{u}_{i}) }(\mathbf{u}|\mathbf{u}_{i}),\quad\mathbf{u}\in L,\]
where \(\epsilon(\mathbf{u})\) is a bandwidth function defined as
\[\epsilon(\mathbf{u})=\frac{\epsilon^{\star}}{\gamma}\sqrt{\frac{n}{\tilde{ \lambda}(\mathbf{u})}},\quad\mathbf{u}\in L, \tag{4.1}\]
where \(\epsilon^{\star}\) is a spatial smoothing multiplier known as the _global bandwidth_, \(\tilde{\lambda}(\mathbf{u})\) is a fixed-bandwidth pilot estimate of the intensity function (estimated using the global bandwidth), and \(\gamma\) is a geometric-mean rescaling term computed from the pilot intensity evaluated at the points of the point pattern, i.e.,
\[\gamma:=\exp\left\{\frac{1}{n}\sum_{i=1}^{n}\log\left\{\tilde{\lambda}(\mathbf{u}_{i})^{-1/2}\right\}\right\}.\]
This approach was proposed originally by Abramson (1982); the inclusion of the geometric mean frees the bandwidth from the data scale (Silverman, 1986; Davies and Hazelton, 2010).
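A minimal sketch of Eq. (4.1) is given below; it only computes the variable bandwidths from a vector of pilot intensity values at the data points, while the actual estimation on a network is carried out by the R package kernstadapt.

```python
import numpy as np

def abramson_bandwidths(pilot_at_points, global_bw):
    """Abramson-type adaptive bandwidths at the data points, following Eq. (4.1).

    pilot_at_points: pilot intensity estimates tilde-lambda(u_i) at the n points.
    global_bw:       the global bandwidth epsilon*.
    """
    lam = np.asarray(pilot_at_points, dtype=float)
    n = lam.size
    gamma = np.exp(np.mean(np.log(lam ** (-0.5))))    # geometric-mean rescaling term
    return (global_bw / gamma) * np.sqrt(n / lam)
```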
## 5 Partition algorithm
The partition algorithm is a method used in point processes to determine adaptive bandwidths (see, e.g., Davies and Baddeley, 2018); it aims to find suitable bandwidth values for kernel smoothing. As the adaptive bandwidth approach recognizes that the density of points can vary across the spatial domain, it seeks to assign smaller bandwidths in more crowded regions and larger bandwidths in sparser areas.
By adapting the bandwidth values to the local density of points, the partition algorithm allows for a more accurate and flexible intensity estimation. It captures the spatial heterogeneity of the point pattern according to a prespecified number of bins.
The partition algorithm works as follows: Initially, a candidate bandwidth \(\hat{h}_{i}:=\epsilon(\mathbf{u}_{i})\) is computed for every point \(\mathbf{u}_{i}\) of the point pattern. The candidate is obtained through the Abramson (1982) formula by Eq. (4.1).
We then consider the empirical \(\zeta\)th quantiles, say \(\hat{h}^{(\zeta)}\), of the \(n\) bandwidths, together with a _quantile step_ \(0<\delta\leq 1\), such that \(D=1/\delta\) is an integer; in practice, \(\delta\ll 1\). We then define a set of bins employing the sequence of values \(\hat{h}^{(0)},\hat{h}^{(\delta)},\ldots,\hat{h}^{(1)}\) such that every observation \(\mathbf{u}_{i}\) belongs to one of the bins
\[\left[h^{(0)},h^{(\delta)}\right],\left(h^{(\delta)},h^{(2\delta)}\right], \ldots,\left(h^{((D-1)\delta)},h^{(1)}\right].\]
If \(X_{d}\) is the point pattern consisting of the points that belong to the \(d\)th bin, then
\[X=\bigcup_{d=1}^{D}X_{d}.\]
To approximate the adaptive smoother, we will substitute each desired bandwidth \(\hat{h}_{i}\) with the bin's midpoint. The approximation results from summing the \(D\) fixed-bandwidth estimates applied to the respective subsets of the original point pattern, i.e.,
\[\hat{\lambda}_{\epsilon}\left(\mathbf{u}\right)\approx\sum_{d=1}^{D}\hat{ \lambda}_{\bar{\epsilon}_{d}}^{*}\left(\mathbf{u}|X_{d}\right),\quad\mathbf{u }\in L, \tag{5.1}\]
where \(\bar{\epsilon}_{d}\) is the midpoint of the \(d\)th bin, and \(\hat{\lambda}_{\bar{\epsilon}_{d}}^{*}\left(\mathbf{u}|X_{d}\right)\) corresponds to a fixed-bandwidth intensity estimate of the sub-pattern \(X_{d}\).
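A minimal sketch of the partition algorithm follows. It assumes per-point bandwidths have already been computed (e.g., with the Abramson rule above) and takes any fixed-bandwidth estimator as a plug-in; for illustration we use a simple Gaussian kernel on the real line as a stand-in for a network heat-kernel estimator.

```python
import numpy as np

def partitioned_intensity(eval_x, data_x, h_i, delta, fixed_estimate):
    # Minimal sketch of the partition algorithm (Sec. 5): bin the per-point
    # bandwidths h_i by quantiles with step `delta`, replace each bandwidth by
    # its bin midpoint, and sum fixed-bandwidth estimates over the sub-patterns.
    h_i = np.asarray(h_i, dtype=float)
    D = int(round(1.0 / delta))
    edges = np.quantile(h_i, np.linspace(0.0, 1.0, D + 1))
    total = np.zeros(len(eval_x))
    for d in range(D):
        lo, hi = edges[d], edges[d + 1]
        in_bin = (h_i >= lo) & (h_i <= hi) if d == 0 else (h_i > lo) & (h_i <= hi)
        if not np.any(in_bin):
            continue
        mid = 0.5 * (lo + hi)                      # bin-midpoint bandwidth
        total += fixed_estimate(eval_x, np.asarray(data_x)[in_bin], mid)
    return total

def gaussian_fixed_estimate(eval_x, sub_data, bw):
    # Placeholder fixed-bandwidth estimator on the real line (stand-in for a
    # network heat-kernel estimator).
    eval_x = np.asarray(eval_x)[:, None]
    return np.exp(-0.5 * ((eval_x - sub_data) / bw) ** 2).sum(1) / (bw * np.sqrt(2 * np.pi))

lam = partitioned_intensity(np.linspace(0, 5, 50), [1.2, 1.5, 4.0],
                            h_i=[0.3, 0.25, 0.8], delta=0.5,
                            fixed_estimate=gaussian_fixed_estimate)
```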
Figure 2: Candelaria’s traffic accidents point patterns (enlargements \(X_{L_{1}}\) in (a) and \(X_{L_{2}}\) in (b)) with adaptive kernel bandwidth from Abramson’s rule. Smaller and darker dots represent shorter bandwidths, i.e., crowded regions in \(L_{1}\) and \(L_{2}\).
### Simulation study
To explore the performance of the partition algorithm in networks by using the heating kernel, we launch a series of simulations designed to assess and contrast the outcomes of explicit calculations for adaptive kernel intensities with various partitioned versions. We aim to evaluate each approach's efficiency and accuracy by varying the parameters and configurations. Through these simulations, we seek to gain valuable insights into the trade-offs involved in adaptively estimating kernel intensities and the impact of employing partitioned estimators. Additionally, we measure the computation time for each method to gauge their practical feasibility and potential scalability.
While it can be theoretically argued that any kernel-type estimator possesses an adaptive version by defining a variable bandwidth, the practical implementation of such adaptivity is far from straightforward. The computational procedures involved can often become exceedingly complex. For instance, in the case of the equal-split kernel estimator, even with optimised code, computing it for a simple network can be time-consuming. When considering adaptive computation, where each point requires a total computation of the estimator (or several sets of points when using the partition algorithm), the complexity becomes prohibitive, rendering the computation impossible or incredibly slow. Given these challenges, in this study, we deliberately decided to exclude the "Equally-split-kernel" estimator from practical use. Instead, we focused on the heating kernel as the most viable and effective option for adaptive estimation thus far. By doing so, we acknowledge the limitations in adapting some kernel-type estimators while highlighting the heating kernel's sufficiency for practical adaptive intensity estimation.
In our simulations, the domain \(L_{2}\) (a subset of Candelaria's road network as defined in Section 1) has been deliberately chosen as the basis for analysis, corresponding to an enlarged representation of Candelaria's neighbourhood. In contrast to planar point pattern methodology, linear networks lack a classical reference network akin to the unit square. Consequently, we must adopt an _ad hoc_ approach in selecting a suitable linear network for effectively demonstrating our methods. By opting for \(L_{2}\), we ensure that the simulations are conducted within a context that closely mimics the characteristics and complexities of Candelaria's neighbourhood. In addition, we adjusted our simulations to have realisations of roughly 520 points.
### Log-Gaussian restricted to linear networks
This is the restriction to the network of a log-Gaussian random field with zero mean and an exponential correlation function with some variances and scales. In this case, the point patterns are observations of point processes with stochastic intensity \(\Lambda(\mathbf{u})=\exp\left(Z(\mathbf{u})\right)\), where \(Z(\mathbf{u})\) is a Gaussian random field observation (see, e.g., Møller et al., 1998).
We set a stationary exponential covariance function of the form
\[\gamma(\mathbf{u})=\sigma^{2}\exp\left\{-\frac{||\mathbf{u}||}{\phi}\right\}, \quad\sigma^{2},\phi\in\mathbb{R}_{+},\]
where \(\sigma^{2}\) and \(\phi\) stand for the variance and scale, respectively. We select the following values for the variances and scales: \(\sigma^{2}\in\{0.9,2\}\) and \(\phi\in\{0.03,0.09\}\). So we consider four cases for this scenario: _logGaussian 1_: \(\sigma^{2}=0.9,\phi=0.03\), _logGaussian 2_: \(\sigma^{2}=0.9,\phi=0.09\), _logGaussian 3_: \(\sigma^{2}=2,\phi=0.03\) and _logGaussian 4_: \(\sigma^{2}=2,\phi=0.09\). Figure 3 shows an observation of this type of point pattern. The random field is generated on the unit square.
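For illustration, one way to generate such a realisation is to sample the Gaussian field on a coarse grid from its exponential covariance via a Cholesky factor and exponentiate it. The sketch below is ours, works on the unit square only, and omits the restriction to the network and the subsequent simulation of points from the resulting intensity.

```python
import numpy as np

def sample_loggaussian_intensity(n_grid=40, sigma2=0.9, phi=0.03, seed=0):
    # Minimal sketch: sample a zero-mean Gaussian field with covariance
    # sigma2 * exp(-||u - v|| / phi) on a coarse grid in the unit square and
    # exponentiate it to obtain the stochastic intensity Lambda = exp(Z).
    rng = np.random.default_rng(seed)
    xs = np.linspace(0, 1, n_grid)
    xx, yy = np.meshgrid(xs, xs)
    pts = np.column_stack([xx.ravel(), yy.ravel()])
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    cov = sigma2 * np.exp(-dists / phi) + 1e-8 * np.eye(len(pts))  # jitter
    z = np.linalg.cholesky(cov) @ rng.standard_normal(len(pts))
    return pts, np.exp(z)

pts, intensity = sample_loggaussian_intensity()
```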
### Gaussian mixture
Given a set of \(g\) spatial anisotropic Gaussian densities, each with mean vector \(\mu_{i}\) and covariance matrix \(\Sigma_{i}\), \(i=1,\ldots,g\), we consider a mixture of them. A mixture of distributions is a probabilistic model that combines several components, each with its own mean and covariance matrix, to represent complex data with varying patterns and clusters. If \(F_{i}(\mu_{i},\Sigma_{i})\) is the cumulative distribution function of every component, then the cumulative distribution function of the mixture is given by
\[F:=\sum_{i=1}^{g}w_{i}F_{i}(\mu_{i},\Sigma_{i}),\]
where \(\sum_{i}w_{i}=1\). In this case, we opt for five different components (\(g=5\)) and set uniform weights all equal to \(1/5\). The mean vectors are randomly assigned so that all their components are observations of the uniform distribution inside the unit square. Then, the surface is evaluated only at locations on the linear network. The covariance matrices are assigned manually as
\[\Sigma_{1}:=\begin{bmatrix}0.01&-0.01\\ -0.01&0.02\end{bmatrix},\quad\Sigma_{2}:=\begin{bmatrix}0.016&0.02\\ 0.02&0.05\end{bmatrix},\quad\Sigma_{3}:=\begin{bmatrix}0.01&0.01\\ 0.01&0.03\end{bmatrix},\]
\[\Sigma_{4}:=\begin{bmatrix}0.02&-0.01\\ -0.01&0.05\end{bmatrix},\quad\Sigma_{5}:=\begin{bmatrix}0.01&0.001\\ 0.001&0.005\end{bmatrix}.\]
Figure 3: Simulated realisation of a point pattern in a linear network from log-Gaussian random field observation. (a) The random field observation is based on an initial two-dimensional surface defined within the unit square. (b) The point locations (pink) and the intensity visualisation (grey lines with variable width) in the linear network.
In a mixture of Gaussian distributions, the covariance matrices represent the spread and orientation of data points within each Gaussian component. They describe the relationships between different variables in the data and influence the shape of the distribution's ellipsoidal clusters. Figure 4 showcases a simulated realisation of a point pattern in a linear network, adopting the Gaussian mixture setting. The Gaussian mixture is generated based on an initial two-dimensional surface delineated within the confines of the unit square. The point pattern unveils the precise locations of the generated points within the linear network.
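A short sketch of how such a mixture surface can be evaluated at (network) locations is given below; the rescaling to an expected number of points is our illustrative choice and not part of the definition above.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mixture_intensity(locations, means, covs, weights, total_points=520):
    # Minimal sketch: evaluate the Gaussian-mixture surface at the given
    # locations and rescale it so that it sums, roughly, to the desired
    # expected number of points.
    dens = np.zeros(len(locations))
    for mu, cov, w in zip(means, covs, weights):
        dens += w * multivariate_normal(mean=mu, cov=cov).pdf(locations)
    return total_points * dens / dens.sum()

rng = np.random.default_rng(1)
means = rng.uniform(0, 1, size=(5, 2))            # random means in the unit square
covs = [np.array([[0.01, -0.01], [-0.01, 0.02]]),
        np.array([[0.016, 0.02], [0.02, 0.05]]),
        np.array([[0.01, 0.01], [0.01, 0.03]]),
        np.array([[0.02, -0.01], [-0.01, 0.05]]),
        np.array([[0.01, 0.001], [0.001, 0.005]])]
locs = rng.uniform(0, 1, size=(200, 2))           # stand-in for network locations
lam = mixture_intensity(locs, means, covs, [0.2] * 5)
```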
### Integrated squared error
For a given point pattern on a linear network, the intensity estimation is carried out through a two-step process, starting with direct adaptive kernel estimation. Subsequently, partitioned estimates are executed, encompassing some possible values for the number of groups; indeed, we set \(\delta\in\{0.1,0.05,0.025,0.01\}\). A target resolution of \(128\times 128\) pixels is adhered to throughout this analytical scenario, ensuring a detailed and granular examination of the pattern's intensity distribution. The accuracy of this approximation is methodically evaluated using the _integrated squared error_ (ISE) as the key metric given by
\[\text{ISE}[\hat{\lambda}]:=\int_{L}\left(\hat{\lambda}(\mathbf{u})-\lambda( \mathbf{u})\right)^{2}\mathrm{d}_{\ell}\mathbf{u}.\]
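In practice the ISE integral is approximated by a sum over the network pixels at the chosen resolution; a minimal sketch, assuming both estimates are evaluated on the same set of pixels, is:

```python
import numpy as np

def integrated_squared_error(lam_hat, lam_ref, pixel_len):
    # Minimal sketch of the ISE: both intensity estimates are assumed to be
    # evaluated on the same network pixels, each of length `pixel_len`, so the
    # integral is approximated by a weighted sum over pixels.
    lam_hat, lam_ref = np.asarray(lam_hat), np.asarray(lam_ref)
    mask = np.isfinite(lam_hat) & np.isfinite(lam_ref)   # ignore off-network pixels
    return np.sum(pixel_len * (lam_hat[mask] - lam_ref[mask]) ** 2)

ise = integrated_squared_error([1.0, 2.0, 0.5], [1.1, 1.8, 0.6], pixel_len=0.02)
```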
Figure 4: Simulated realisation of a point pattern in a linear network in Gaussian mixture fashion. (a) The Gaussian mixture realisation is based on an initial two-dimensional surface defined within the unit square. (b) The point locations (pink) and the intensity visualisation (grey lines with variable width) in the linear network.
In this context, the reference value \(\lambda(\mathbf{u})\) is derived from the directly computed estimate, i.e., the one using each point's own bandwidth. The estimated value \(\hat{\lambda}(\mathbf{u})\), on the other hand, reflects the approximation derived from partitioned bandwidths, yielding a rigorous assessment of the intensity estimation's fidelity and reliability.
In Figure 5, the boxplots illustrate the integrated squared errors (ISE) of the estimates across different bandwidth partitions. Coarser bandwidth partitions tend to yield higher errors, which aligns with findings from Davies and Baddeley (2018). Notably, even with the coarsest partition configuration (\(\delta=0.1\)), involving ten spatial bins, the resulting estimate remains remarkably accurate in terms of ISE. As anticipated, as the bandwidth partitions become finer, the estimates converge towards the direct estimation, demonstrating the anticipated trend of improved accuracy with an increased number of bandwidth bins.
We consider total elapsed execution times relative to direct estimation, i.e., \(t_{\text{partition}}/t_{\text{direct}}\). In Figure 6, a clear correlation emerges between elapsed execution times and the number of bandwidth bins employed. Notably, the observed increase in execution times parallels the number of bins. These times consistently remain significantly lower than the time required for direct estimation. This means that the binning procedure offers a much more efficient solution regarding execution times. It is worth noting that when the bandwidth bin count reaches its maximum (\(\delta=0.01\)), there is a notable increase in processing time. The incremental gains in error reduction do not appear to justify the increase in execution time in this scenario.
Figure 5: Integrated square errors with respect to the direct estimation of intensity function. We consider several values for the parameter \(\delta\). The resolution of the intensity arrays is \(128\times 128\). We truncate the most extreme outliers for better visualisation.
## 6 Application: Traffic accidents in some Colombian cities
Medellín, Colombia, is known for its unique division by _comunas_. These comunas are administrative divisions that classify and organise the city into distinct neighbourhoods. Medellín is divided into 16 comunas, each with a distinctive character, history, and social dynamics. The bustling urban hub of _Comuna 10_ is popularly known as _La Candelaria_. The division by comunas allows for focused governance and tailored approaches to address the specific needs of each neighbourhood, fostering community engagement and empowerment.
A compilation of all traffic accidents transpiring between 2014 and 2019 in Medellín was undertaken, with Candelaria being selected as the focal subnetwork. This choice was deliberate, as Candelaria lies within the city's central region, where most accidents occur, and its structural simplicity renders it a fitting candidate for analysis. The dataset containing records of traffic accidents within Candelaria is publicly available on the OpenData portal maintained by the Medellín Town Hall. Additionally, the shapefile depicting the intricate road network of the area is also accessible for reference and analysis (Secretaría de Movilidad, 2022). In Medellín's Candelaria comuna, traffic accidents remain a pressing issue, as evidenced by recent reports and statistics. According to local news sources such as _El Colombiano_, this area has seen a notable increase in traffic accidents. The combination of high population density and heavy traffic flow contributes to the heightened risk of collisions. The narrow and congested streets and the presence of informal vendors and pedestrians further compound the problem. Notably, motorcycles, which are popular means of transportation in the city, have been involved in a significant number of accidents. The city authorities have taken steps to address the issue, including implementing stricter traffic regulations and improving infrastructure. However, sustained efforts are required to mitigate the frequency and severity of traffic accidents in Medellín's
Figure 6: Wall clock timings presented in relation to the direct estimation approach, presenting the efficiency gains achieved through the partitioning algorithm.
Candelaria zone.
To ascertain the expected number of traffic accidents within Candelaria through the heating kernel in its adaptive version, applying the partitioning algorithm introduced in Section 5 was necessary. A total of 276 distinct groups (\(\approx\sqrt{76610}\)) were selected to determine the bandwidth bins for the analysis, where 76610 is the total number of points in the point pattern.
Figure 7 displays the estimation result. The figure illustrates the spatial distribution of traffic accidents in the Candelaria neighbourhood. It becomes apparent that the highest concentration of traffic incidents predominantly occurs in the western region. Notably, the roadways hosting more roundabouts, indicative of larger and busier thoroughfares, exhibit a higher incidence of accidents. Intriguingly, an additional concentration of accidents emerges in the central-eastern portion of Candelaria, hinting at localised factors or traffic dynamics that warrant further investigation. This visualisation provides valuable insights into the geographic patterning of accidents within the neighbourhood, offering a foundation for targeted interventions and traffic safety improvements. The estimation procedure, a computationally complex task, took seven minutes when executed on a conventional computing platform.
Figure 7: Estimated adaptive heating kernel intensity using the partition algorithm for Candelaria’s traffic accidents. (a) The pixel intensity values are displayed as colours as a pixel image on the linear network. (b) Estimated intensity given as a perspective view.
## 7 Discussion
This paper uses variable-width kernels to address the non-parametric intensity estimation challenge of a point process within a linear network. The distinctive feature of these kernels lies in their adaptability--each point in the point pattern is endowed with a bandwidth that varies according to the density of surrounding points. This ensures a finer bandwidth for densely populated areas and a broader one for isolated points, effectively capturing the local heterogeneity of the process. While this technique has been discussed in the literature at a general level, our contribution lies in extending a partitioning algorithm, enabling efficient estimation with minimal information loss for the heating kernel in the linear network context.
We have employed the heat kernel method, a sophisticated approach. Nevertheless, other techniques, such as equal-split kernel estimators, have proven reliable in the fixed-bandwidth case. We explored the adaptive version of equal-split estimators but found them computationally infeasible. On the other hand, adaptive versions of edge-corrected estimators are plausible and conceivable, and indeed some researchers have already ventured into this domain.
In our simulations, we have chosen to model fields, whether random or not, within a planar region, namely the unit square, and subsequently constrained these fields to our linear network embedded within the unit square. This approach serves as a quick and straightforward means of simulation within the linear network. However, it is important to note that contemporary methods now enable simulations directly on the network of random fields (see, e.g., Bolin et al., 2023).
Our adaptive kernel-based technique can be extended to risk estimation. In our practical example, this could involve estimating the relative risk of accidents in Candelaria based on the known vehicle intensity. Identifying areas with elevated accident risk, even if not the most intense, could facilitate collaboration with authorities to establish surveillance systems.
Another avenue for expansion is the incorporation of time. In cases where each point's occurrence is linked to a temporal dimension, an adaptive estimation with an additional dimension becomes necessary. Certainly, there are various avenues for future exploration, such as addressing networks with curvature, which poses a challenge for adaptive estimation and traditional estimation methods.
Finally, our research has yielded a swift and effective technique for estimating the intensity of point processes within linear networks, particularly in scenarios marked by high heterogeneity. This paper contributes to the analysis of point patterns in linear networks by providing practical insights into adaptive estimation methods, underlining their significance in addressing real-world spatial phenomena.
|
2309.06030 | Federated Learning for Large-Scale Scene Modeling with Neural Radiance
Fields | We envision a system to continuously build and maintain a map based on
earth-scale neural radiance fields (NeRF) using data collected from vehicles
and drones in a lifelong learning manner. However, existing large-scale
modeling by NeRF has problems in terms of scalability and maintainability when
modeling earth-scale environments. Therefore, to address these problems, we
propose a federated learning pipeline for large-scale modeling with NeRF. We
tailor the model aggregation pipeline in federated learning for NeRF, thereby
allowing local updates of NeRF. In the aggregation step, the accuracy of the
clients' global pose is critical. Thus, we also propose global pose alignment
to align the noisy global pose of clients before the aggregation step. In
experiments, we show the effectiveness of the proposed pose alignment and the
federated learning pipeline on the large-scale scene dataset, Mill19. | Teppei Suzuki | 2023-09-12T08:04:56Z | http://arxiv.org/abs/2309.06030v4 | # Federated Learning for Large-Scale Scene Modeling
###### Abstract
We envision a system to continuously build and maintain a map based on _earth-scale_ neural radiance fields (NeRF) using data collected from vehicles and drones in a lifelong learning manner. However, existing large-scale modeling by NeRF has problems in terms of scalability and maintainability when modeling earth-scale environments. Therefore, to address these problems, we propose a federated learning pipeline for large-scale modeling with NeRF. We tailor the model aggregation pipeline in federated learning for NeRF, thereby allowing local updates of NeRF. In the aggregation step, the accuracy of the clients' global pose is critical. Thus, we also propose global pose alignment to align the noisy global pose of clients before the aggregation step. In experiments, we show the effectiveness of the proposed pose alignment and the federated learning pipeline on the large-scale scene dataset, Mill19.
## I Introduction
Neural radiance fields (NeRF) [1] have emerged to represent scenes for view synthesis and have been used as maps for many robotics applications, such as vision-based localization, navigation, and SLAM [2, 3, 4, 5, 6, 7]. Although these studies have been evaluated in relatively small-scale scenes, we believe that such NeRF-based robotics applications will be available to the systems operated in large-scale environments, such as self-driving cars and delivery drones. In addition, since NeRF can lift arbitrary 2D information to 3D (_e.g._, semantic labels [8], and feature vectors [9]), it will facilitate a higher-level environmental recognition and understanding in robotics applications.
Nowadays, a large number of vehicles are on the road, and in the near future, many unmanned aerial vehicles, such as delivery drones, will be in the sky. In such a world, we envision the system to continuously build and maintain an _earth-scale_ NeRF-based map using data collected from vehicles and drones in a life-long learning manner and to leverage the map created and maintained with such a system.
Large-scale scene modeling with NeRF has been studied in recent years [10, 11]. These methods divide a large-scale environment into many small areas and model the small areas with multiple NeRF models. Although this pipeline efficiently models a large-scale scene in a distributed training manner, it has three problems in realizing our envisioned system: (i) The existing pipelines need to collect data on a server once, but this incurs high communication costs to gather the data from clients and requires large memory budgets to save the earth-scale data; (ii) these pipelines require tremendously large computational resources for training a large number of models to cover an earth-scale environment; and (iii) they can only update the model for each pre-divided area for the maintenance of the NeRF-based map, and if the training data is collected only in a part of the area, the model may suffer from the forgetting problem [3].
To develop a more scalable and maintainable method, we propose the federated learning pipeline for large-scale modeling with NeRF, as depicted in Fig. 1. Federated learning (FL) [12] is a data-decentralizing training protocol that enables clients, such as mobile devices, to collaboratively train a model while keeping the training data decentralized. Since FL does not need to aggregate data collected by clients on the server, it can alleviate the communication cost problem and the data storage problem. FL also alleviates the computational resource problem because it can leverage the resources of clients, thereby enabling their overall FLOPs to outperform those of supercomputers.
To train NeRF in a federated learning manner, we tailor the model aggregation pipeline, which allows the global model to be locally updated based on the client data, avoiding the forgetting problem and improving maintainability. Specifically, we cache the outputs of each local model as the 3D voxel grid, and then the global model is updated by aggregating cached representation based on the clients' pose in the global coordinate. This pipeline does not affect the outputs of the global model beyond the region modeled by the local models. In addition, to align the inaccurate global pose of clients due to sensor noise, we propose the global pose alignment step before the aggregation step, which is inspired by the vision-based localization using NeRF [2, 7, 13].
Our contribution is summarized below:
* We propose the federated learning pipeline for large-scale scene modeling with NeRF. By caching the outputs of each local model as the voxel grid, we realize the local update of the global model.
Fig. 1: Overview of the proposed federated learning pipeline for neural radiance fields.
* We also propose the global pose alignment. To alleviate the sensor noise in the clients' pose in the global coordinate, we align the global pose of each client before the aggregation step by minimizing the difference between the images rendered by the global and local models.
* We assess the effectiveness of our training pipeline on the Mill19 dataset [11], which contains thousands of HD images collected from drone footage over 100,000m\({}^{2}\) of terrain near an industrial complex.
## II Related Work
### _Neural Radiance Fields_
3D reconstruction from image collections [14, 15, 16] is an important task in computer vision fields, and the reconstruction methods have been applied to many robotics applications, particularly SLAM systems [17, 18, 19]. In principle, objects or whole scenes are reconstructed as 3D points based on multiple-view geometry.
In recent years, NeRF [1] has emerged for novel view synthesis. Unlike 3D point-based reconstruction, NeRF models the target scene by a continuous function and its image rendering process is differentiable. Taking advantage of these properties, some vision-based robotics systems have been proposed. iNeRF [2], a vision-based localization method, estimates the camera pose through the minimization of the loss between the target image and the rendered image with the gradient method. Maggio _et al_. [7] and Lin _et al_. [13] improved the robustness of iNeRF by leveraging the advantage of Monte Carlo localization [20]. Adamkiewicz _et al_. [5] utilized this localization scheme for robot navigation. Kruzhkov _et al_. [21] incorporated NeRF into the SLAM system. They used NeRF as a mapping module and reduced the memory budget for the map. Moreover, pure NeRF-based SLAM systems, such as iMap [3] and Nice-SLAM [4], were studied.
As reviewed above, NeRF is applied to robotics applications that use NeRF as a map. When considering systems using NeRF as a map in a large environment, such as self-driving cars and delivery drones, we need to model large-scale scenes. There are certain studies for large-scale scene modeling by NeRF. Block-NeRF [10] and Mega-NeRF [11] have modeled urban-scale environments. Both methods divide the target scene into small areas and then model the small areas with multiple NeRF models. They can efficiently model a large-scale scene in a distributed training manner.
However, the existing large-scale modeling methods have some problems in terms of maintainability and computational costs. When part of the scene is changed, they need to update all models representing the changed scene, even if the changed area is a part of the modeled area. In addition, due to the forgetting problem [3], the model needs to be trained with all data in the corresponding area if a part of the area is changed. Therefore, maintaining the map created by existing methods is costly. Moreover, these methods require tremendously large memory budgets to save data and large computational resources to model the environment if one builds an earth-scale NeRF-based map. Hence, we approach these problems by adopting a federated learning scheme.
### _Federated Learning_
Federated learning (FL) [12] is a data-decentralizing training process for machine learning models. It distributes the model to clients such as mobile devices or organizations, trains the model on clients using their own data, and aggregates the models on the server. By repeating the above process, FL collaboratively trains a global model under the orchestration of a central server. FL has some advantages: preserving privacy since it does not collect user data on the server and leveraging the computational resources of many clients (millions or more in our scenario). Because of these advantages, FL has been studied for training machine learning models in various tasks, such as image classification [12, 22], image segmentation [23, 24], and self-supervised learning [25, 26].
Many FL approaches update models using a synchronous protocol, which needs to wait for all updates on clients before aggregation and thus delays training. The existence of lagging devices (i.e., stragglers, stale workers) is inevitable due to device heterogeneity and network unreliability.
To mitigate this problem, asynchronous federated learning (AFL) has been proposed [27], which aggregates local models asynchronously. Xie _et al_. [27] aggregates models by exponential moving averaging. Chen _et al_. [28] proposed a method to aggregate the model with a decay coefficient to balance the previous and current gradients on clients. Since AFL improves the communication efficiency of FL, our pipeline also adopts the AFL strategy.
## III Federated Learning for NeRF
We present the overall pipeline of our method in Fig. 1, which consists of three steps, training local models, global pose alignment, and an aggregation step. We consider the AFL setting; namely, the server updates the global model as soon as it receives the local model.
We first describe the assumptions of this study in Sec. III-A. Then, in Secs. III-B and III-C, we describe the training pipeline for the local models and the aggregation step. Finally, we describe the global pose alignment in Sec. III-D.
### _Assumptions_
We assume each client knows the relative camera poses between collected images. This is a realistic assumption because relative poses can be obtained from some sensors, SLAM, or SfM. In the experiments, we use the camera poses given by Mill19 [11] that are estimated by PixSfM [29]. Note that if the obtained poses are noisy, we would be able to correct them by using the methods that simultaneously optimize poses and the model (_e.g._, BARF [30] and NoPeNeRF [31]).
We also assume that the global pose of each client is known but _noisy_. Since the goal of this study is to train the large-scale NeRF model as a map, there is no map to localize the position1. In this situation, we need to use sensors such as GPS and IMU to obtain the global pose, but the pose obtained from the sensors basically contains noise.
Footnote 1: Once the map (_i.e._, the global model) is initialized, we can use it to localize each client in the same manner as that for iNeRF [2].
In addition, we assume that images collected by clients are captured under the same lighting and have no dynamic and/or transient objects. This assumption is not realistic, particularly for outdoor scenes; we will study more realistic scenarios in future work.
### _Training Local Models_
For simplifying the pipeline, we assume that client models are trained from scratch because NeRF basically "overfits" training data and it does not care how the model is initialized. Of course, we can consider the standard pipeline; namely, clients download the global model and then train it with the local data, which may make convergence faster but increase communication costs.
The local model consists of two models: the density model \(f_{\sigma}\) and the color model \(f_{c}\). Let \(\mathbf{r}=\{\mathbf{r}_{i}\in\mathbb{R}^{3}|\mathbf{r}_{i}=\mathbf{o}+s_{i}\mathbf{d},\ s_{i}\in\mathbb{R}\}_{i}\) be a set of sampled points along the ray passing through a pixel, where \(\mathbf{o}\) and \(\mathbf{d}\) are a camera origin and a ray direction. Following [11], we sample 512 points per ray for rendering images in our experiments. The density at \(\mathbf{r}_{i}\) is computed as \(\sigma_{i}=\phi\circ f_{\sigma}(\mathbf{r}_{i})\), where \(\phi\) denotes the softplus function. For representing RGB, we use the spherical harmonics (SH) as in PlenOctrees [32], which enables caching outputs of the model on the grid voxels in the aggregation step described in Sec. III-C. Specifically, \(f_{c}\) outputs SH coefficients \(\mathbf{k}(\mathbf{r}_{i})=f_{c}(\mathbf{r}_{i})\), where \(\mathbf{k}(\mathbf{r}_{i})=\{k_{\ell}^{m}\in\mathbb{R}^{3}\}_{\ell:0\leq\ell\leq\ell_{\max}}^{m:-\ell\leq m\leq\ell}\), and \(\ell\) and \(m\) are a degree and an order of the SH function, respectively. Each \(k_{\ell}^{m}\) is a set of three coefficients corresponding to RGB. Then, the view-dependent color at \(\mathbf{r}_{i}\) is computed by
\[\mathbf{c}_{i}=S\left(\sum_{\ell=0}^{\ell_{\max}}\sum_{m=-\ell}^{\ell}k_{\ell} ^{m}Y_{\ell}^{m}(\mathbf{d})\right), \tag{1}\]
where \(S(\cdot)\) is the sigmoid function and \(Y_{l}^{m}(\mathbf{d}):\mathbb{S}^{2}\rightarrow\mathbb{R}\) is the SH function at the viewing angle \(\mathbf{d}\). Following [33], we use spherical harmonics of degree 2, which has 27 harmonic coefficients.
The rendering and training procedure is the same as in the original NeRF [1]. Specifically, the pixel color corresponding to the ray \(\mathbf{r}\) is computed as follows:
\[\hat{\mathbf{C}}(\mathbf{r})=\sum_{i}\exp(-\sum_{j=1}^{i-1}\sigma_{j}\delta_{ j})(1-\exp(-\sigma_{i}\delta_{i}))c_{i}, \tag{2}\]
where \(\delta_{i}=s_{i+1}-s_{i}\) is the distance between adjacent samples on the ray. Then, the NeRF model is trained by minimizing the following loss:
\[\mathcal{L}=\mathop{\mathbb{E}}_{\mathbf{r}\in\mathcal{R}}\left\|\hat{C}(\mathbf{r})-C(\mathbf{r})\right\|_{2}^{2}, \tag{3}\]
where \(C(\mathbf{r})\in\mathbb{R}^{3}\) denotes the ground-truth pixel color corresponding to \(\mathbf{r}\), and \(\mathcal{R}\) is the set of rays in the training data. Note that we do not consider the coarse-to-fine rendering pipeline [1] in this study, but it can be made available with a larger memory budget and more computational resources.
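To fix ideas, the sketch below evaluates the spherical-harmonic colour of Eq. (1) and the volume-rendering sum of Eq. (2) for a single ray. It is our illustration only: the SH constants follow the common real-SH convention rather than being taken from the paper, and the actual local model (hash encoding and MLP heads) is not reproduced.

```python
import numpy as np

def sh_basis_deg2(d):
    # Real spherical-harmonic basis up to degree 2 (9 terms); constants follow
    # the common real-SH convention used by PlenOctrees/Plenoxels-style code.
    x, y, z = d / np.linalg.norm(d)
    return np.array([0.28209479, 0.48860251 * y, 0.48860251 * z, 0.48860251 * x,
                     1.09254843 * x * y, 1.09254843 * y * z,
                     0.31539157 * (3 * z ** 2 - 1),
                     1.09254843 * x * z, 0.54627421 * (x ** 2 - y ** 2)])

def render_ray(sigma, sh_coeffs, deltas, view_dir):
    # Eq. (1)-(2): per-sample RGB from SH coefficients, then volume rendering.
    # sigma: (S,), sh_coeffs: (S, 9, 3), deltas: (S,), view_dir: (3,)
    basis = sh_basis_deg2(view_dir)                                  # (9,)
    rgb = 1.0 / (1.0 + np.exp(-np.einsum('k,skc->sc', basis, sh_coeffs)))  # sigmoid
    alpha = 1.0 - np.exp(-sigma * deltas)                            # (S,)
    trans = np.exp(-np.cumsum(np.concatenate([[0.0], sigma[:-1] * deltas[:-1]])))
    return (trans * alpha) @ rgb                                     # (3,) pixel colour

S = 512
pix = render_ray(np.random.rand(S), np.random.randn(S, 9, 3) * 0.1,
                 np.full(S, 0.01), np.array([0.0, 0.0, 1.0]))
```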
### _Updating Global Model_
The aggregation step in federated learning often averages the local models. However, this averaging strategy cannot work for NeRF due to its properties; NeRF represents only the scene in the training data and assigns random color and density to the outside of the scene. In addition, the 3D position \((x,y,z)\) may indicate a different location for each client because clients' coordinates may not be aligned due to the sensor noise. For these reasons, the simple averaging strategy will degrade the rendering quality of the global model.
To avoid the aforementioned problem, we tailor the aggregation step for NeRF. We represent the global model as the voxel grid, as in Plenoxels [33]. We cache the output of the local models on the voxel grid, as in FastNeRF [34], and then aggregate the cached voxel grid and the global model, as shown in Fig. 2. Since we only cache the outputs in the area modeled by the local model and aggregate them, this procedure does not affect the output outside the area. In addition, the voxel grid representation allows us to align the global pose of the clients before aggregation (Sec. III-D).
Fig. 2: Overview of the proposed aggregation step. We first cache outputs of the local model on the grid voxels, \(\tilde{V}_{\sigma}^{i}\) and \(\tilde{V}_{c}^{i}\), and then add them to the global voxel grid, \(V_{\sigma}\) and \(V_{c}\).
Let \(\tilde{V}_{\sigma}^{n}\in\mathbb{R}^{X\times Y\times Z\times 1}\) and \(\tilde{V}_{c}^{n}\in\mathbb{R}^{X\times Y\times Z\times 27}\) be a cached density grid voxel and a cached SH coefficient grid voxel of the \(n\)-th client, where \(X\), \(Y\), and \(Z\) are the voxel resolution and \(27\) in \(\tilde{V}_{c}^{n}\) corresponds to the number of the coefficients. Also, we define the global model's voxel grids as \(V_{\sigma}\in\mathbb{R}^{X_{g}\times Y_{g}\times Z_{g}\times 1}\) and \(V_{c}\in\mathbb{R}^{X_{g}\times Y_{g}\times Z_{g}\times 27}\), where \(X_{g}\), \(Y_{g}\), and \(Z_{g}\) are the global model's voxel resolution. Note that the global model is defined on the global coordinate and the local voxel grid is defined on the local coordinate, respectively; hence, the position on the local coordinate, \(P\in\mathbb{R}^{3}\), is mapped to the global coordinate using the global pose, as in \(P_{g}=R_{n}P+t_{n}\), where \(R_{n}\in\mathrm{SO}(3)\) and \(t_{n}\in\mathbb{R}^{3}\) denote the relative rotation and translation between the \(n\)-th client's local and global coordinates, respectively, which are obtained from sensors such as GPS and IMU. Then, we update the density grid voxel of the global model with exponential moving averaging in the asynchronous federated learning manner as
\[V_{\sigma}[P_{g}]\leftarrow\eta V_{\sigma}[P_{g}]+(1-\eta)\tilde{V}_{\sigma}^{n} [P], \tag{4}\]
where \(V_{\sigma}[P]\) denotes an element of \(V_{\sigma}\) at \(P\); \(\eta\) denotes a mixing value that is set to 0.9 in the experiments. The SH coefficient grid is also updated in the same manner.
The rendering process of the global model is the same as Plenoxels [33]; namely, the density and SH coefficient are sampled from the voxel grids by trilinear interpolation and then the RGB color is computed following eq. (1) with the sampled coefficients. Finally, pixel colors are rendered by eq. (2) with the sampled density and the color. We define the rendered pixel color through the global model as \(\hat{C}_{g}(\mathbf{r})\).
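A minimal sketch of the aggregation rule in Eq. (4) is shown below; the nearest-voxel assignment, the occupancy test, and all variable names are our simplifications of the described procedure, not the authors' implementation.

```python
import numpy as np

def aggregate_local_grid(V_global, V_local, R_n, t_n, voxel_size, origin, eta=0.9):
    # Minimal sketch of Eq. (4): map each occupied local voxel centre into the
    # global frame with (R_n, t_n), find the enclosing global voxel, and blend
    # with an exponential moving average. `origin` is the world position of
    # global voxel (0, 0, 0).
    idx = np.argwhere(V_local[..., 0] > 0)                      # occupied local voxels
    for i, j, k in idx:
        p_local = np.array([i, j, k]) * voxel_size              # local-frame position
        p_global = R_n @ p_local + t_n
        g = np.round((p_global - origin) / voxel_size).astype(int)
        if np.all(g >= 0) and np.all(g < V_global.shape[:3]):
            V_global[tuple(g)] = eta * V_global[tuple(g)] + (1 - eta) * V_local[i, j, k]
    return V_global

Vg = np.zeros((64, 64, 64, 1))
Vl = np.zeros((16, 16, 16, 1)); Vl[3, 4, 5, 0] = 2.0
Vg = aggregate_local_grid(Vg, Vl, np.eye(3), np.array([1.0, 2.0, 0.5]), 0.25, np.zeros(3))
```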
### _Global Pose Alignment_
As we mentioned in Sec. III-A, the global pose of the clients contains noise. Thus, we need to correct it before aggregation because an inconsistency between the local and global models occurs in eq. (4) if the global pose is incorrect.
To align the pose, we optimize it to minimize the difference between the reference and target images, which are rendered by the local and global models, respectively. Let \(t^{*}\in\mathbb{R}^{3}\) and \(\{R_{j}^{*}\in\mathrm{SO}(3)\}_{j}\) be a translation vector and rotation matrices for the target views, which are obtained from the area modeled by both the global and local models. Then, we align the client's global pose by solving the following minimization problem:
\[(\hat{t},\ \hat{R})=\operatorname*{arg\,min}_{\{t,R\}}\sum_{j} \underset{\mathbf{r}_{t^{*},R_{j}^{*}}}{\mathbb{E}}\lambda|\hat{C}_{g}( \mathbf{r}_{t^{*},R_{j}^{*}})-\hat{C}(\mathbf{r}_{t^{*}+t,RR_{j}^{*}})|\] \[+(1-\lambda)|\hat{D}_{g}(\mathbf{r}_{t^{*},R_{j}^{*}})-\hat{D}( \mathbf{r}_{t^{*}+t,RR_{j}^{*}})|, \tag{5}\]
where \(\mathbf{r}_{t,R}\) denotes the ray depending on the camera pose \((t,R)\), and \(\hat{D}_{g}\) and \(\hat{D}\) denote the rendered depth through the global and local models, respectively, which are computed by replacing color values, \(c_{i}\), in eq. (2) with the sampled point position, \(s_{i}\), as in [3]; \(\lambda\in[0,1]\) is a weight parameter. After optimization, we align the global pose of the local model based on \(\hat{t}\) and \(\hat{R}\). Since the rendering process can be differentiable with respect to \(t\) and \(R\), we optimize them using the gradient method, as in the existing NeRF-based localization methods [2, 7, 13]. In our experiments, we used the same optimization pipeline as that in Lin _et al_. [13] to optimize the pose, which is a Monte Carlo-based method. Note that, unlike existing methods [2, 7, 13], we leverage multiple views and rendered depth to make optimization stable. We assess the effectiveness of the multiple target views and the depth loss in the experiment section.
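The objective in Eq. (5) can be written compactly as follows. This is only a sketch of the loss; the Monte Carlo particle optimisation of Lin _et al_. [13] is not reproduced, and the renderer handles are assumed to be differentiable functions supplied by the global and local models.

```python
import torch

def pose_alignment_loss(render_global, render_local, t_star, R_stars, t, R, lam=0.75):
    # Minimal sketch of Eq. (5): photometric + depth discrepancy between views
    # rendered by the global model at the reference poses and by the local
    # model at the perturbed poses.  `render_*` are assumed to be callables
    # (translation, rotation) -> (rgb, depth) returning torch tensors.
    loss = 0.0
    for R_star in R_stars:
        rgb_g, depth_g = render_global(t_star, R_star)
        rgb_l, depth_l = render_local(t_star + t, R @ R_star)
        loss = loss + lam * (rgb_g - rgb_l).abs().mean() \
                    + (1 - lam) * (depth_g - depth_l).abs().mean()
    return loss

# Gradient-based refinement of the translation offset only, for illustration;
# the renderer handles would have to be provided before running these lines.
t = torch.zeros(3, requires_grad=True)
# opt = torch.optim.Adam([t], lr=5e-4)
# loss = pose_alignment_loss(render_global, render_local, t_star, R_stars, t, torch.eye(3))
# loss.backward(); opt.step()
```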
## IV Experiments
### _Experimental Setup_
To simulate our method, we use the Mill19 dataset [11] that includes two scenes, building and rubble, which have thousands of HD images collected from drone footage over 100,000m2 of terrain close to an industrial complex. We first generate the data owned by clients in the following procedure: first, we randomly select one image from training data and compute the Euclidean distance between the camera position of the selected image and the camera positions of the other images, and then we collect \(k\)-nearest neighbors as the data owned by a client2. We repeat this procedure for the number of clients, \(N\), that is set to 100 in the experiments. In this experiment, we randomly set \(k\) for each client in a range of 100 to 200. After training \(N\) local models, we update the global model with the pose alignment and evaluate it with the validation data of Mill19. We set the voxel size for caching the output of the local model to 0.25 m. Following [11], we use the appearance vector to model the color of each image and optimize it in test time. The experiments were conducted using the NVIDIA V100 GPU.
Footnote 2: In our envisioned scenario, the clients are basically drones and vehicles, and the collected data should be sequential frames. Thus, we generate the client data from images that are physically close to each other.
We use InstantNGP [35] as the local model because it converges faster and has fewer FLOPs than the original NeRF architecture, which is suitable if the computational resources of the clients are limited. Each local model is trained for one epoch3 with a batch size of 8192. We optimize the local models with Adam [36], whose hyperparameters are the same as those of InstantNGP[35], except for the learning rate; it is set to 5e-3 only for the hash encoding and 5e-4 for the other parameters.
Footnote 3: One epoch corresponds to the iterations of #pixels / #batchsize.
### _Effect of Pose Alignment_
To evaluate our global pose alignment framework, we randomly select one local model as the global model. We also randomly select another model as a local model from the models whose training data partially overlap the area modeled by the global model. After selecting the models, we compute a center position of the overlapped area as a translation vector, \(t^{*}\in\mathbb{R}^{3}\), and obtain rotation matrices, \(\{R_{j}^{*}\in\mathrm{SO}(3)\}_{j}\), from camera poses of the training data close to \(t^{*}\). We render the images at \(\{(t^{*},R_{j}^{*})\}_{j}\) as the target values through the global model. Finally, we randomly sample a translation vector, \(t\in\mathbb{R}^{3}\), and a rotation matrix, \(R\in\mathrm{SO}(3)\), which can be regarded as the sensor noises, and optimize them by solving eq. (5). Thus, the optimal values will be \(t=(0,0,0)\) and \(R=I\), and we report the gap from them. Note that since we use the appearance vector, as described in IV-A, we also optimize it in addition to the pose. For optimization, we use the Adam optimizer [36] with a batch size of 4096. We set an initial learning rate to 5e-4 and decay it to 5e-5.
As an ablation study, we show the rotation errors and translation errors with various \(\lambda\) in eq. (5) and a various number of target views in Tab. I. We randomly sample noises in a range of [-20 m, 20 m] and [-20\({}^{\circ}\), 20\({}^{\circ}\)]. As presented on the left-hand side of Tab. I, RGB information is critical for the pose alignment, but the depth information
helps marginally with error reduction. The multiple target views also contribute to reducing errors, as depicted on the right-hand side of Tab. I.
We also show the evaluation results of our alignment framework with various magnitudes of initial errors in Fig. 3. We randomly sample the pose noises and then align them. We plot the results for 100 trials in Fig. 3. Note that a few trials with various translation errors have failed, and the failure rate is 5%. Our method can align the translation error up to 75 m, which is comparable to a typical GPS error. Also, it can correct the rotation error that is larger than errors in a typical IMU. A few plots in Fig. 3 indicate relatively large errors after alignment, but we can decrease the errors by increasing the number of particles and resampling rounds in the Monte Carlo-based optimization; in fact, our method can reduce errors even when the initial errors are larger than that of the failure cases. In our method, the increase in computation time is acceptable because alignment will be performed offline and does not require real-time processing.
### _Comparison with Other Training Protocols_
To evaluate the rendered image quality of our method, we compare the proposed training pipeline with two data-centralized training protocols, baseline and distributed training. The baseline indicates training one model with all data, which corresponds to the ordinary NeRF training pipeline. The distributed training indicates the Mega-NeRF's training pipeline [11]; namely, the scene is divided into grids, and then models are trained with data corresponding to each grid. We divide the scene into 4\(\times\)4 grids for the distributed training; that is, 16 models are trained, and the number of data for training each model is approximately 150, which is the same as the expected number of data used to train each model of our pipeline. For a fair comparison, we evaluate these training pipelines with the same model and the same hyperparameters as ours, except for the model size in the baseline. Since the baseline models the scene using one model, while other training protocols use multiple models, we double the grid size of the hash encoding for the baseline to be fair in terms of the model capacity. Note that we assume that the global position is correctly aligned by our position alignment algorithm in this experiment.
We show the evaluation results in Tab. II. Note that since the models can be trained in parallel, the longest training time out of all local models is reported as the training time in the distributed learning and ours. The training time of distributed training and ours is much shorter than that of the baseline because of the distributed training protocol. Since the scale of Mill19 is relatively small compared to our envisioned scenario, the training time of the baseline is acceptable. However, if we scale it to an earth-scale, it is impossible to train a model with the standard training pipeline. In addition to the reduction in training time, our method can alleviate bandwidth consumption because the size of the local model is 0.1 GB while that of the local data is up to 1 GB. The rendering speed for the distributed training is slower than the others because the distributed training combines the outputs of the multiple models to render one image. Rendering by ours is the fastest because it only samples cached outputs from the voxel grids for rendering, unlike the others that consist of hash encoding and MLP.
The PSNR for our method is worse than that for the baseline for two reasons: each client trains a local model with a relatively small number of data, while the baseline trains the model with a sufficiently large number of data. Basically, NeRF requires a sufficiently large number of viewpoints to correctly represent the scene in 3D. However, some clients do not satisfy this requirement around the test view and the local models trained by such clients cannot represent the scene correctly, as shown in Fig. 4. Consequently, such local models degrade the global model performance. This disadvantage is also found in distributed training; in fact, PSNR for distributed training is worse than that for the baseline. The other reason is the quantization error in the caching process. We cache the outputs of the local models on the voxel grid in the aggregation step, and such a quantization
\begin{table}
\begin{tabular}{c|c c} \(\lambda\) & Rot. (deg) & Trans. (m) \\ \hline
0.2 & 43\(\pm\)15.5 & 32\(\pm\)18.7 \\
0.25 & 19.9\(\pm\)16.7 & 17.7\(\pm\)11.2 \\
0.5 & 12.9\(\pm\)16.3 & 9.62\(\pm\)11.7 \\
0.75 & **0.53\(\pm\)0.96** & **0.19\(\pm\)0.33** \\
1.0 & 0.92\(\pm\)1.63 & 0.43\(\pm\)0.78 \\ \end{tabular}
\begin{tabular}{c|c c} \#views & Rot. (deg) & Trans. (m) \\ \hline
1 & 0.34\(\pm\)0.24 & 0.97\(\pm\)0.68 \\
2 & 0.33\(\pm\)0.24 & 0.49\(\pm\)1.18 \\
4 & 0.53\(\pm\)0.96 & 0.19\(\pm\)0.33 \\
8 & **0.12\(\pm\)0.09** & **0.02\(\pm\)0.01** \\ \end{tabular}
\end{table} TABLE I: Rotation and translation errors with various \(\lambda\) and a various number of target views. We report the mean and standard deviation over 10 trials. We use four target views on the left-hand side of the table and set \(\lambda\) to 0.75 on the right-hand side of the table.
Fig. 3: The alignment results for various initial errors. The gray dots denote failure cases that increase the initial errors.
operation approximates the continuous function by the piecewise linear function. Thus, there are approximation errors, which cause the gap between PSNR for distributed training and ours, especially for the building scene that includes high-frequency components, as shown in Fig. 5. This error can be reduced by increasing the cached grid size, but it requires larger memory budgets. Improving local model training and alleviating the quantization errors are possible directions for future work.
The PSNR for ours on the rubble scene is better than that for distributed training. The quantization errors in the rubble scene are smaller than those in the building scene because images in the rubble scenes are basically composed of low-frequency components. Therefore, in scenes where the effect of the errors is small, our training protocol would be better than distributed training in this experimental setting.
## V Limitations and Future Work
As we mentioned in Sec. III-A, this work does not consider dynamic and/or transient objects and lighting changes. This may raise privacy concerns because, if a person in the scene is modeled together with the lighting, the model may reveal where that person was and what they were doing at a particular time. However, there are some approaches to modeling the scene while ignoring such objects and lighting effects (_e.g._, NeRF-W [37] and Block-NeRF [10]); by exploiting the advantages of such methods, the proposed pipeline will leverage the privacy-preserving aspect of federated learning. Therefore, we believe that considering transient objects and lighting changes in the federated learning pipeline is an important research direction.
Moreover, as we mentioned in Sec. IV-C, the quality of the rendered images is worse than the images rendered by the baseline. Fortunately, there are many studies on training NeRF with limited viewpoints [38, 39, 40]. We believe that we can improve the performance of the local models by leveraging the training protocol of such methods; consequently, the global model can be improved. Another approach is to consider the number of viewpoints for aggregating models. Specifically, by adjusting the mixing coefficient in the exponential moving average according to the number of viewpoints, we may be able to ignore the low-quality outputs of the local model in the aggregation step.
## VI Conclusion
In this study, we proposed the federated learning pipeline for large-scale scene modeling with NeRF. We designed the aggregation step for training NeRF in a federated learning manner: The global model is defined as the voxel grids and the local models are aggregated by caching the outputs of the local model on the voxel grids, which allows us to update the global model locally. In addition, we proposed the global pose alignment step to alleviate the sensor noise in the global pose of each client. We assessed the proposed method on the Mill19 dataset containing thousands of images collected from drone footage over 100,000m\({}^{2}\) of terrain near an industrial complex and verified its effectiveness.
We believe our work opens new avenues for large-scale scene modeling using NeRF and is an important step towards collaboratively training for NeRF in a federated learning manner.
\begin{table}
\begin{tabular}{c|c c|c c|c} & \multicolumn{2}{c|}{Building} & \multicolumn{2}{c|}{Rubble} & \multicolumn{2}{c}{Rendering Speed} \\ & PSNR & Training Time (h) & PSNR & Training Time (h) & (Pixel/Second) \\ \hline Baseline & 22.29 & 32.4 & 24.93 & 28.0 & 80.4K \\ Distributed Training & 18.93 & 0.75 & 21.24 & 0.67 & 69.2K \\ Ours & 18.06 & 0.97 & 23.01 & 0.95 & 342.6K \\ \end{tabular}
\end{table} TABLE II: Performance for different training protocols on building and rubble scenes of Mill19.
Fig. 4: From left to right, the ground-truth image of the test view, the image rendered by the local model trained with a sufficiently large number of viewpoints around the test view, and the image rendered by the local model trained with a relatively small number of viewpoints.
Fig. 5: An image rendered by the client model (left) and an image rendered by the cached voxel grids (right). The lattice pattern composed of high-frequency components is broken. |
2309.08009 | Measuring the Quality of Text-to-Video Model Outputs: Metrics and
Dataset | Evaluating the quality of videos generated from text-to-video (T2V) models is
important if they are to produce plausible outputs that convince a viewer of
their authenticity. We examine some of the metrics used in this area and
highlight their limitations. The paper presents a dataset of more than 1,000
generated videos from 5 very recent T2V models on which some of those commonly
used quality metrics are applied. We also include extensive human quality
evaluations on those videos, allowing the relative strengths and weaknesses of
metrics, including human assessment, to be compared. The contribution is an
assessment of commonly used quality metrics, and a comparison of their
performances and the performance of human evaluations on an open dataset of T2V
videos. Our conclusion is that naturalness and semantic matching with the text
prompt used to generate the T2V output are important but there is no single
measure to capture these subtleties in assessing T2V model output. | Iya Chivileva, Philip Lynch, Tomas E. Ward, Alan F. Smeaton | 2023-09-14T19:35:53Z | http://arxiv.org/abs/2309.08009v1 | # Measuring the Quality of Text-to-Video Model Outputs: Metrics and Dataset
###### Abstract
Evaluating the quality of videos generated from text-to-video (T2V) models is important if they are to produce plausible outputs that convince a viewer of their authenticity. We examine some of the metrics used in this area and highlight their limitations. The paper presents a dataset of more than 1,000 generated videos from 5 very recent T2V models on which some of those commonly used quality metrics are applied. We also include extensive human quality evaluations on those videos, allowing the relative strengths and weaknesses of metrics, including human assessment, to be compared. The contribution is an assessment of commonly used quality metrics, and a comparison of their performances and the performance of human evaluations on an open dataset of T2V videos. Our conclusion is that naturalness and semantic matching with the text prompt used to generate the T2V output are important but there is no single measure to capture these subtleties in assessing T2V model output.
Keywords: Text-to-Video Models, Video Synthesis, Evaluation.
## 1 Introduction
Recent rapid advances in text-to-video (T2V) generation algorithms have concentrated on improving the state of the art while producing a reliable metric to measure the quality of generated videos is often an afterthought. It is common for a model to be evaluated on 3 or 4 different metrics and in some cases to also include human assessment. These evaluations show a large variety in the quality of videos generated from T2V models. Considering this variance, how to fairly evaluate the quality of generated videos is important but the development of quality metrics has not received much attention.
In the field of AI-generated images, recent work by Li _et al._[9] has compared the quality of images generated from a collection of text-to-image (T2I) models. This used automatically-computed quality metrics and using human evaluations of perception and alignment of the image to the prompt used to generate it. That work also included the release of a database of 2,982 T2I images and the prompts and model parameters used to generate them as well as a comparison
between human evaluations and automatically created metrics. The work in this paper follows a similar sequence to [9] except that we address evaluating the quality of text-to-video instead of text-to-image models.
Our aim is assess how the number of metrics needed to produce a reliable evaluation of the output from a T2V model could be reduced. This includes comparing the outputs of automatic metrics with human evaluations. We begin with an examination of open-source state-of-the-art T2V models and address the limitations with current evaluation metrics including major pitfalls. We then present the output of human assessment of the authenticity and realism of videos which we refer to as video naturalness, plus the degree to which a generated video aligns with the input prompt which we refer to as text similarity. We also include the semantic matching between the original text prompt and the content inside the generated video. In Section 4, we compare the results of human evaluations with commonly used T2V metrics. Our findings suggest that human evaluations mostly align with commonly used metrics, but not always so.
## 2 Related Work
### Text-to-Video Models
In 2022 the first open-source T2V model called Tune-a-Video was released by Wu _et al._[25] introducing a mechanism that uses the Stable Diffusion model [17] for video generation. This model is built on state-of-the-art T2I diffusion models and involves a tailored spatio-temporal attention mechanism and an efficient one-shot tuning strategy. It served as an inspiration for the rapid development of other open-source models including the following, which are used in this paper.
1. VideoFusion [12] in 2023 uses a decomposed diffusion process to resolve per-frame noise as a base noise that is shared among all frames leading to smoother video output.
2. Text-to-Video Synthesis, also based on the work described in [12] in 2023 and openly available is also a multi-stage text-to-video generation diffusion model which consists of text feature extraction, a text feature-to-video latent space diffusion model, and video latent space to video visual space. This model adopts the Unet3D structure, and performs video generation through an iterative denoising process from pure Gaussian noise video.
3. Text2Video-Zero [8] in 2023 takes a low-cost zero-shot text-to-video generation approach, leveraging the power of Stable Diffusion and tailoring it for video. It achieves this by enriching the latent codes of frames with motion dynamics and using a cross-frame attention of each frame on the first frame to preserve the context, appearance, and identity of foreground objects.
4. Aphantasia [4] also from 2023, is a collection of text-to-image tools, evolved from the artwork of the same name. It is based on the CLIP model and Lucent library, with FFT/DWT/RGB parameterizers and no-GAN generation.
### Evaluation Metrics
The 3 most commonly used metrics for evaluating video quality are as follows.
**1. Inception Score (IS)** [19] was developed as an alternative to human evaluation and aims to measure both image quality and diversity. It relies on the "inception network" [22] to generate a class probability distribution for images. Higher-quality images should have a low entropy probability \(P(y|x)\), while diversity is measured with the marginal distribution of all images, which should have high entropy.
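As a rough illustration of this computation (not the evaluation code used in this paper), the score can be estimated from per-image class probabilities produced by any Inception-style classifier; the `probs` array below is an assumed input and the toy data at the end is only for demonstration.

```python
import numpy as np

def inception_score(probs: np.ndarray, eps: float = 1e-12) -> float:
    """probs: (N, C) array of per-image class probabilities P(y|x).
    Returns exp of the mean KL divergence between P(y|x) and the marginal P(y)."""
    p_y = probs.mean(axis=0, keepdims=True)                       # marginal distribution P(y)
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))

# Confident, diverse predictions give a high score; uniform predictions give a score near 1.
rng = np.random.default_rng(0)
confident = np.eye(10)[rng.integers(0, 10, 100)] + 1e-3
confident /= confident.sum(axis=1, keepdims=True)
print(inception_score(confident))                 # close to the number of classes (10)
print(inception_score(np.full((100, 10), 0.1)))   # close to 1.0
```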
**2. Frechet Video Distance (FVD)** [23] measures the distance between feature activations of real and generated videos in the feature space of a pre-trained video classifier, similar to the approach in the Frechet Inception Distance (FID) [6] which was developed for images. A lower FVD score indicates better-quality video generation, meaning that the reference and generated videos have similar distributions of feature activations.
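Once feature activations are available, the Frechet distance itself has a closed form between the two Gaussians fitted to them. The sketch below assumes `feat_real` and `feat_gen` are (N, D) feature arrays already extracted by a pre-trained video classifier, which is the part not shown here.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feat_real: np.ndarray, feat_gen: np.ndarray) -> float:
    """Frechet distance between Gaussians fitted to two (N, D) feature sets."""
    mu_r, mu_g = feat_real.mean(axis=0), feat_gen.mean(axis=0)
    cov_r = np.cov(feat_real, rowvar=False)
    cov_g = np.cov(feat_gen, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)   # matrix square root of the product
    if np.iscomplexobj(covmean):                           # discard tiny imaginary parts from numerics
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```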
**3. CLIPSim** [24] uses the CLIP [16] model to evaluate semantic matching between an initial text prompt and a generated video. CLIP is a contrastive learning-based model that creates a joint embedding space for images and text, allowing the model to understand relationships between them. CLIPSim extends CLIP to evaluate videos by finding the CLIP score of each frame in a video and returning the average frame score.
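Given CLIP embeddings, the per-video score reduces to an average of per-frame cosine similarities. A minimal sketch is shown below; the embedding extraction (`frame_embs`, `text_emb`) is assumed to have been produced by a CLIP model and is not part of the sketch.

```python
import numpy as np

def clipsim(frame_embs: np.ndarray, text_emb: np.ndarray) -> float:
    """frame_embs: assumed (N, D) CLIP image embeddings, one per frame;
    text_emb: assumed (D,) CLIP embedding of the prompt.
    Returns the mean cosine similarity over frames."""
    f = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb)
    return float((f @ t).mean())
```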
Although these metrics are widely used, there are concerns about each of them. **IS** has been criticised for its tendency to overfit on models trained using ImageNet and its inability to distinguish between poor and high-quality images [2], as demonstrated in Figure 1(a) where IS assigned an almost perfect score to the examples shown. **FVD** requires reference videos in order to generate a score, which is challenging when comparing T2V models trained on different datasets and hinders the broader applicability of this metric. The underlying model used in **CLIPSim**, CLIP, has been criticised by the authors of BLIP [10] for its reliance on noisy web image-text pairs, arguing that a smaller, filtered dataset should be used. Finally, the ability of Image-to-Text models such as BLIP and CLIP to generate semantically similar captions for images/frames that do not appear to match visually is another concern, as illustrated in Figure 1(b). Here the video was generated using the Aphantasia T2V model [4] with the text prompt "A blue unicorn flying over a mystical land", and using CLIPSim we obtained a similarity score of over 70%.
In summary, the metrics are used widely but have limitations which motivates our interest in assessing them and how they compare to human assessments.
### Image Naturalness
Image naturalness refers to how realistic and free of distortions or artefacts an image appears. Naturalness is related to image quality, which encompasses aspects such as sharpness, contrast, and colour accuracy, but image naturalness specifically focuses on the realism of an image. Non-natural images lack recognisable and interpretable real-world objects or scenes and may include computer-generated graphics, abstract art, or heavily manipulated photographs. They often serve artistic or functional purposes but do not necessarily reflect the properties of natural images that are easily interpreted by human vision.
The Naturalness Image Quality Evaluator (NIQE) is a no-reference image quality assessment metric [13] based on the observation that natural images tend to exhibit a unit-normal Gaussian characteristic in their luminance values. NIQE uses a set of natural scene statistics (NSS) that captures the statistical regularities present in natural scenes that are not present in unnatural or distorted images. NIQE was trained on the LIVE image quality assessment database [20], with 29 reference and 779 distorted images, each with 5 subjective quality scores.
BRISQUE (Blind/Referenceless Image Spatial QUality Evaluator) is an image quality assessment measure [14] that also uses NSS to evaluate the quality of a distorted image without requiring a reference image. BRISQUE extracts 36-dimensional feature vectors from 96 non-overlapping blocks of the distorted image and maps these onto a reduced-dimensional space using principal component analysis. The quality score is calculated using a support vector regression (SVR) model trained on the LIVE IQA database [20].
The performance of NIQE and BRISQUE on a real photo (a) and on frames extracted from various T2V models is shown in Figure 2, where the metrics evaluate images on a scale of 0 to 100 and higher scores indicate lower naturalness. We see that images (b) and (c) received the highest scores, indicating poor naturalness, and that the non-natural images (d) and (e) received better scores than the low-quality image of a dog in (b) and the image of an oil painting of a couple in (c), both of which still depict recognisable objects. Although NIQE scores showed slightly better results than BRISQUE, they were still unable to fully differentiate between natural and non-natural images. Based on these properties we opted to develop a new classifier to detect the naturalness of an image, acknowledging that metrics like NIQE and BRISQUE are primarily concerned with the visual quality of generated videos rather than their naturalness.
## 3 An Ensemble Video Quality Metric
To address the limitations of image naturalness and modal biases in T2V video quality metrics we propose an ensemble with the workflow shown in Figure 3. The first of the two parts involves data generation, depicted in blue and yellow
Figure 1: Example limitations of existing T2V quality metrics.
boxes on the left side of the figure. Starting with an initial set of text prompts, we generate a video for each using a T2V model under evaluation. The videos are used to produce a set of captions for each using BLIP-2 [10]. The second part involves an ensemble of two metrics, the Text Similarity Metric which calculates a similarity score between the original text prompt and BLIP-generated captions. Next, we use the Naturalness Metric, a customised XGBoost classifier that takes the generated video as input and outputs a score. Both metric outputs are in the range \([0,1]\).
A weighted average based on a linear regression model trained using manually rated videos described later is used to combine the individual metrics.
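A minimal sketch of this combination step is shown below, assuming the per-video metric outputs and the manual human ratings have already been collected; the function and variable names are illustrative rather than the exact implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_ensemble(naturalness: np.ndarray, text_sim: np.ndarray,
                 human_score: np.ndarray) -> LinearRegression:
    """Learn the weights combining the two metric outputs (both in [0, 1])
    into one quality score, using human ratings as the regression target."""
    X = np.column_stack([naturalness, text_sim])
    return LinearRegression().fit(X, human_score)

def ensemble_score(model: LinearRegression, naturalness: float, text_sim: float) -> float:
    """Weighted combination of the two metrics for a single video."""
    return float(model.predict(np.array([[naturalness, text_sim]]))[0])
```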
### Evaluating Image Naturalness
We developed a classifier for video naturalness for which we collected and analysed several statistical measures from each video, including the following:
1. **Texture score** measures the degree of uniformity in an image's texture, since natural images, such as landscapes or animal fur, tend to have more complex textures than synthetic images. After converting to grayscale and applying a Gaussian blur to reduce noise we apply Sobel edge detection in the x and y directions and calculate the magnitude of the gradient. The variance of this magnitude is the texture score (a sketch of this and a few of the other scores follows this list).
Figure 3: T2V-CL metric ensemble
Figure 2: Image naturalness assessment with NIQE (N) and BRISQUE (B) scores.
2. **The sharpness score** measures the amount of high-frequency content in an image, indicative of the image's level of detail. It is calculated by applying a sharpening filter to the image and then taking the RMS difference between the original and the filtered image.
3. **The colour distribution score** is a measure of the uniformity of colour in an image, exploiting the characteristic of a uniform or artificial colour distribution in a non-natural image. It is calculated by applying K-means clustering with K=2 to the A and B channels of the image's LAB representation. This score is the proportion of pixels in the cluster with the lowest A channel value.
4. **The spectral score** measures the extent to which an image differs from the natural image statistics in the Fourier domain. The function calculates the mean and standard deviation of each colour channel and computes the spectral score as the sum of standard deviations divided by the sum of means.
5. **The entropy score** uses the Shannon entropy formula [3], which measures the level of randomness or disorder in pixel values. Natural images tend to have a higher degree of order and lower entropy than non-natural ones.
6. **The contrast score** measures differences between the lightest and darkest parts of an image by dividing the standard deviation of pixel intensities by the mean intensity.
7. **Oriented FAST and Rotated BRIEF (ORB)** is a feature detection algorithm [18] to compute statistics about the key points in an image including the mean and standard deviations of the distances between key points and of the lengths of the descriptors associated with those key points.
8. **The number and sizes of blobs** is detected using the Laplacian of Gaussian (LoG) method [11]. Blobs are regions in an image with a relatively uniform intensity that stand out compared to the surrounding area.
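A minimal OpenCV/NumPy sketch of three of these statistics (items 1, 5 and 6) is given below; the blur kernel size and other small details are assumptions rather than the exact settings used for our classifier.

```python
import cv2
import numpy as np

def texture_score(img_bgr: np.ndarray) -> float:
    """Item 1: variance of the Sobel gradient magnitude of a blurred grayscale image."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)            # noise reduction (kernel size assumed)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    return float(np.var(np.sqrt(gx ** 2 + gy ** 2)))

def entropy_score(img_bgr: np.ndarray) -> float:
    """Item 5: Shannon entropy of the grayscale intensity histogram."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    p = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p /= p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def contrast_score(img_bgr: np.ndarray) -> float:
    """Item 6: standard deviation of pixel intensities divided by the mean intensity."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    return float(gray.std() / (gray.mean() + 1e-12))
```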
Despite poor performance on some images, NIQE and BRISQUE scores are useful in filtering very noisy and disordered images thus they are included in our metric comparisons. To facilitate processing, a YUV444 video frame is reshaped from planar to interleaved format, which represents colour information in terms of brightness (Y) and colour (U and V), with 8 bits allocated to each channel. NIQE scores are calculated for the grayscale frame and for the Y, U and V channels in the YUV444 video frame separately as this provides a better visual representation of the image [15] and a more accurate evaluation of quality metrics like NIQE, which are more sensitive to variations in the chrominance channels.
To train a classifier for image naturalness we also calculated a Modified Inception Score (MIS) for each video which operates on a similar principle to Inception Score outlined in Section 2.2 by calculating the mean probability distributions of all frames in a generated video. We modified the IS metric to return a larger value if the mean probability distribution in a video has low entropy. Essentially, if the Inception model assigns a greater probability to one particular class throughout the frames in a video, MIS will produce a larger value. We achieved this by setting the marginal distribution to the uniform distribution.
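The following sketch shows one plausible reading of this modification (per-frame class probabilities are an assumed input; the exact formulation used in our experiments may differ in detail):

```python
import numpy as np

def modified_inception_score(frame_probs: np.ndarray, eps: float = 1e-12) -> float:
    """frame_probs: (N_frames, C) Inception class probabilities for one video.
    The marginal of the standard IS is replaced by the uniform distribution, so a
    video whose mean class distribution has low entropy (one class dominating
    across frames) yields a larger value."""
    mean_p = frame_probs.mean(axis=0)
    uniform = np.full_like(mean_p, 1.0 / mean_p.size)
    kl = np.sum(mean_p * (np.log(mean_p + eps) - np.log(uniform)))
    return float(np.exp(kl))
```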
We collected all the video feature data described above from 187 videos comprising 92 natural and 95 non-natural. We approached the naturalness classifier
task as a binary classification problem and manually assigned each video a label indicating natural or not. We trained three classifiers, AdaBoost, a Bagging classifier with a DecisionTree base and XGBoost. To optimise the performance of each classifier, we employed GridSearch. We evaluated the classifiers' performance using F1 on training, validation, and test sets. The XGBoost classifier performed the best on unseen data and was used in the rest of the paper.
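A condensed sketch of this training procedure is given below; the feature matrix `X` and labels `y` are assumed to have been assembled from the statistics above, and the grid values shown are illustrative rather than the exact grids we searched.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import f1_score
from xgboost import XGBClassifier

def train_naturalness_classifiers(X: np.ndarray, y: np.ndarray) -> dict:
    """Grid-search the three candidate classifiers and report their test F1 scores."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
    candidates = {
        "adaboost": (AdaBoostClassifier(), {"n_estimators": [50, 100, 200]}),
        "bagging": (BaggingClassifier(DecisionTreeClassifier()), {"n_estimators": [10, 50, 100]}),
        "xgboost": (XGBClassifier(), {"n_estimators": [100, 200], "max_depth": [3, 5]}),
    }
    results = {}
    for name, (clf, grid) in candidates.items():
        search = GridSearchCV(clf, grid, scoring="f1", cv=5).fit(X_tr, y_tr)
        results[name] = (search.best_estimator_,
                         f1_score(y_te, search.best_estimator_.predict(X_te)))
    return results   # XGBoost was the best-performing model on unseen data in our experiments
```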
### Evaluating Text Similarity
In the second part of our metric comparison we measure the semantic similarity between the generated video captions and the original text prompt. The process involves generating captions for each video frame using BLIP-2 and measuring the similarity between each caption.
In our approach we combine BERT and Cosine similarity. Using the example illustrated in Table 1 we see that BERT tends to over-perform, as it is designed to capture more nuanced and complex semantic relationships between sentences or captions, whereas Cosine only considers surface-level similarity based on word overlap. By penalising the BERT similarity score with the Cosine similarity score, we ensure that the combined similarity shown in Equation 1 reflects both surface-level and deeper semantic similarities between two captions, thus providing a more accurate representation of their overall similarity. After conducting an analysis and running multiple experiments, we determined that the optimal ratio between BERT and Cosine similarities is \(0.75:0.25\).
\[\text{Combined sim}=\begin{cases}0.25\ (\text{Cos sim})+0.75\ (\text{BERT sim}),&if\ \text{Cos sim}\neq 0\\ 0.5\ (\text{BERT sim})&otherwise\end{cases} \tag{1}\]
Given that some frames in generated videos may exhibit significant distortions or omissions or not contain recognisable objects such as in Figure 4 where two frames do not include a dog, we calculate the weighted textual similarity for a generated video of \(n\) frames as \(\frac{1}{n}\sum_{i=1}^{n}w_{i}\cdot\text{sim}_{i}\). The weights are assigned based on the frequency of each caption in the overall list of generated captions.
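A small sketch of Equation 1 and of the frequency-weighted aggregation is given below; the per-frame BERT and Cosine scores are assumed to have been computed already, and the exact normalisation of the weights is our reading of the description above.

```python
import numpy as np
from collections import Counter

def combined_similarity(bert_sim: float, cos_sim: float) -> float:
    """Equation (1): penalise the BERT score with the surface-level Cosine score."""
    return 0.25 * cos_sim + 0.75 * bert_sim if cos_sim != 0 else 0.5 * bert_sim

def weighted_text_similarity(captions: list, sims: list) -> float:
    """Frequency-weighted mean of per-frame similarities: frames whose caption occurs
    often in the video count more than frames with one-off (e.g. distorted) captions."""
    counts = Counter(captions)
    weights = np.array([counts[c] / len(captions) for c in captions])
    return float(np.mean(weights * np.array(sims)))
```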
## 4 Evaluation
We now present the comparison of quality metrics for videos generated from T2V models. We used 201 prompts and 5 T2V models outlined earlier in Section 2.1
\begin{table}
\begin{tabular}{l|c c c}
 & Cosine & BERT & Combined \\ \hline
The sunrise was beautiful over the ocean & \multirow{2}{*}{0.00} & \multirow{2}{*}{0.45} & \multirow{2}{*}{0.22} \\
The bulldozer was loud and destroyed the building & & & \\ \hline
A hot air balloon in the sky & & & \\
Balloon full of water exploding in extreme slow motion & & & \\ \hline
\end{tabular}
\end{table}
Table 1: Similarity scores for two pairs of sentences
to create 1,005 T2V model videos. We carefully selected the 201 prompts by combining content generated by ChatGPT with manual curation. The compilation covers a broad range of topics including influential figures, notable places, and cultural events like Easter and the Brazilian Carnival. 87 of the prompts are short (4 to 8 words), 43 are of average length (9 to 13 words) and 71 are longer than 13 words. The collection of prompts offers a diverse range of videos, spanning from practical scenarios to creative concepts. The videos encompass a variety of actions, relationships, and visual styles. Example frames from the collection of generated videos are shown in Figure 5. For each of the 1,005 generated videos we computed two measures, naturalness and text similarity score as described earlier.
To obtain human quality scores for the generated videos we recruited volunteers to rate videos remotely and in their own time with each person given
Figure 4: Frames from a generated video with the prompt “A golden retriever eating ice cream on a beautiful tropical beach at sunset”. Note that 2 of the frames are missing the dog.
Figure 5: Samples from our generated videos – rows show frames generated by Text2Video-Zero, Text-to-Video Synthesis, Tune-a-Video, Aphantasia and Video Fusion respectively while columns are frames from the same text prompts.
10 days to complete the task across up to 10 sessions. The annotators rated each video on a scale of 1 (low) to 10 (high) for two aspects, alignment and perception. Alignment reflects the compatibility between the generated video and the text of the original prompt which was provided, while perception rates the overall perceptual quality of the video considering issues such as clarity and sharpness, the presence of visual artefacts, blurriness, or pixelation and whether colours represent real-world scenes and objects. 24 (16 male, 8 female) adult annotators, mostly graduate students, completed two ratings of each video giving \(1,005\ videos\times 24\ annotators\times 2\ aspects\) = 48,240 quality ratings. Annotators were rewarded with a gift token when they completed annotating the videos.
In assessing the quality of still images the "de facto" metric is mean opinion score (MOS) [21] which is the mean of the opinions and ratings of human evaluators gathered according to some numeric or qualitative scale. Despite its popularity across various media including speech, audio, images, video, multimedia, etc. it does have issues with its acceptability [21] because a single number cannot capture a diversity of opinion scores. In [7] the authors proposed that the standard deviation of opinion scores reflects the subjective diversity while more recently [5] proposed that as well as the mean of the opinion scores, researchers should assess quality in terms of the distribution of opinion scores, not just the standard deviation.
Following the approach taken by Li _et al._[9] where the authors developed a quality metric for AI-generated text-to-image (T2I), we calculated a "pid_delta" by computing the difference between the average rating (5) and the mean of scores. We did this to see if the work in [9] for T2I images could be applied to T2V videos and we refer to this as T2V-EC. Z-scores were computed by subtracting the mean from each score and dividing by the standard deviation and outliers were determined by comparing scores to a range defined by the mean and standard deviation. Re-scaling then restored the original data range by reversing the z-score transformation though negative values can result from the initial shift towards 5 during normalisation, especially where original scores were below 5. Figure 6 shows the distribution of adjusted MOS scores for alignment and perception. An overall human evaluation score for each video is the average adjusted MOS scores for alignment and perception. It is clear that correlation between alignment and perception is not contingent upon the rating values.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline
\multirow{2}{*}{Model} & \multicolumn{2}{c}{Alignment} & \multicolumn{2}{c}{Perception} & \multicolumn{1}{c}{Combined} \\
 & Mean & Std.Dev & Mean & Std.Dev & Human Score \\ \hline
Aphantasia & 4.016 & 0.841 & 3.221 & 0.692 & 0.362 \\
Text2Video-Zero & 5.985 & 1.139 & 6.393 & 0.886 & 0.619 \\
T2V Synthesis & 5.333 & 1.622 & 5.485 & 1.366 & 0.541 \\
Tune-a-Video & 5.053 & 1.340 & 5.070 & 1.196 & 0.506 \\
Video\_Fusion & 4.995 & 1.686 & 5.139 & 1.507 & 0.507 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Human evaluation scores for five T2V models.
We now examine how human evaluation differentiated among the 5 models for alignment, perception and the combined human score (average); these are shown as means and standard deviations in Table 2 and as distributions in Figure 7. What these show is that Aphantasia is the worst-performing across perception and alignment, Text2Video-Zero is best, and Tune-a-Video, Video Fusion, and Text2Video Synthesis appear to be about the same. To support this interpretation we conducted a Tukey HSD (honestly significant difference) test on the adjusted combined human evaluation scores to identify differences among the 5 models. Tukey's HSD [1] is a significance test to simultaneously compare multiple means and distributions, MOS scores for 5 T2V models in
Figure 6: Distribution of adjusted MOS Scores.
Figure 7: MOS score distributions across different models.
our case, in a single step and to find means whose differences are greater than the expected standard error. The results of the test are shown in Figure 8 and show that Aphantasia exhibits notably lower performance than the others while Text2Video-Zero demonstrates significantly higher performance. The other three models -- Tune-a-Video, Video_Fusion, and Text2Video Synthesis -- exhibit relatively similar levels of performance.
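This test is available off the shelf; the sketch below assumes a long-format table with one adjusted combined score per (video, model) pair, with illustrative column names.

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def tukey_model_comparison(df: pd.DataFrame):
    """df: one row per rated video with columns 'model' and 'human_score'
    (the adjusted combined human evaluation score)."""
    result = pairwise_tukeyhsd(endog=df["human_score"], groups=df["model"], alpha=0.05)
    print(result.summary())   # pairwise mean differences, confidence intervals, reject flags
    return result
```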
The combined human evaluation scores were compared against T2V-CL (replicating the work of Li _et al._[9] for images), BLIPSim and Inception scores in Figure 9. The replicated T2V-CL metric shows a consistency in ranking with the other metrics except for the Tune-a-Video model, which it ranks higher than human evaluation does. This discrepancy arises because Tune-a-Video predominantly produces cartoon-style videos, as shown in Figure 5, and our naturalness classifier was trained on example videos from a range of models that did not include enough from the Tune-a-Video model.
In a final analysis we examined how prompt length influences model performance. Figure 10 shows boxplots of adjusted MOS scores
Figure 8: Tukey HSD of adjusted MOS Scores across different models.
Figure 9: Comparison of model ranks by different metrics.
of model performance with Text2Video-Zero best and Aphantasia worst in both boxplots and for all prompt lengths. They also show that almost always the shorter the prompt length, the better the quality of the video. This is explained by the fact that alignment to a longer prompt is more difficult for a T2V model.
Figure 10: Boxplots of adjusted alignment (top) and perception (bottom) scores across different prompt lengths
## 5 Conclusions
We investigated techniques for evaluating the quality of text-to-video (T2V) model outputs including a critical analysis of commonly-employed metrics. We examined the limitations of existing methods for evaluating video quality with an emphasis on assessing the naturalness of T2V content as well as the degree of semantic correspondence between videos and text prompts used to generate them. We provide an open dataset of T2V videos from 5 models, with human annotations of their quality1. In summary we can say that there is some consistency across the metrics and with human evaluations but not yet at the level where we can eliminate the need for expensive and time-consuming human assessment, though automatic metrics are a good proxy.
Footnote 1: Code, prompts, examples and the 1,005 generated videos from the 5 models used in this paper are available for public access at [https://tinyurl.com/4eufrek8](https://tinyurl.com/4eufrek8)
|
2309.10646 | Self-Supervised Super-Resolution Approach for Isotropic Reconstruction
of 3D Electron Microscopy Images from Anisotropic Acquisition | Three-dimensional electron microscopy (3DEM) is an essential technique to
investigate volumetric tissue ultra-structure. Due to technical limitations and
high imaging costs, samples are often imaged anisotropically, where resolution
in the axial direction ($z$) is lower than in the lateral directions $(x,y)$.
This anisotropy 3DEM can hamper subsequent analysis and visualization tasks. To
overcome this limitation, we propose a novel deep-learning (DL)-based
self-supervised super-resolution approach that computationally reconstructs
isotropic 3DEM from the anisotropic acquisition. The proposed DL-based
framework is built upon the U-shape architecture incorporating
vision-transformer (ViT) blocks, enabling high-capability learning of local and
global multi-scale image dependencies. To train the tailored network, we employ
a self-supervised approach. Specifically, we generate pairs of anisotropic and
isotropic training datasets from the given anisotropic 3DEM data. By feeding
the given anisotropic 3DEM dataset in the trained network through our proposed
framework, the isotropic 3DEM is obtained. Importantly, this isotropic
reconstruction approach relies solely on the given anisotropic 3DEM dataset and
does not require pairs of co-registered anisotropic and isotropic 3DEM training
datasets. To evaluate the effectiveness of the proposed method, we conducted
experiments using three 3DEM datasets acquired from brain. The experimental
results demonstrated that our proposed framework could successfully reconstruct
isotropic 3DEM from the anisotropic acquisition. | Mohammad Khateri, Morteza Ghahremani, Alejandra Sierra, Jussi Tohka | 2023-09-19T14:28:23Z | http://arxiv.org/abs/2309.10646v1 | Self-Supervised Super-Resolution Approach for Isotropic Reconstruction of 3D Electron Microscopy Images from Anisotropic Acquisition
###### Abstract
Three-dimensional electron microscopy (3DEM) is an essential technique to investigate volumetric tissue ultrastructure. Due to technical limitations and high imaging costs, samples are often imaged anisotropically, where resolution in the axial direction (\(z\)) is lower than in the lateral directions \((x,y)\). This anisotropy in 3DEM can hamper subsequent analysis and visualization tasks. To overcome this limitation, we propose a novel deep-learning (DL)-based self-supervised super-resolution approach that computationally reconstructs isotropic 3DEM from the anisotropic acquisition. The proposed DL-based framework is built upon the U-shape architecture incorporating vision-transformer (ViT) blocks, enabling high-capability learning of local and global multi-scale image dependencies. To train the tailored network, we employ a self-supervised approach. Specifically, we generate pairs of anisotropic and isotropic training datasets from the given anisotropic 3DEM data. By feeding the given anisotropic 3DEM dataset into the trained network through our proposed framework, the isotropic 3DEM is obtained. Importantly, this isotropic reconstruction approach relies solely on the given anisotropic 3DEM dataset and does not require pairs of co-registered anisotropic and isotropic 3DEM training datasets. To evaluate the effectiveness of the proposed method, we conducted experiments using three 3DEM datasets acquired from the brain. The experimental results demonstrated that our proposed framework could successfully reconstruct isotropic 3DEM from the anisotropic acquisition.
self-supervised, super-resolution, electron microscopy, isotropic reconstruction, deep learning.
Footnote †: This work was in part supported by the Academy of Finland (#323385), the Erkko Foundation, and the Doctoral Programme in Molecular Medicine at the University of Eastern Finland.
## I Introduction
Three-dimensional electron microscopy (3DEM) enables the visualization and analysis of volumetric tissue ultrastructure at nanometer resolution. Achieving isotropic acquisition, where resolution is consistent in all dimensions, can assist downstream image analysis and visualization tasks. However, practical limitations, such as the constraints of EM techniques and imaging time and costs, often lead to a resolution in the axial \((z)\) direction that is lower than in the lateral \((x,y)\) directions. Focused ion beam scanning EM (FIB-SEM) is one EM technique that can obtain isotropic 3DEM images with sub-10nm resolution in all directions; however, FIB-SEM is low-throughput. On the other hand, serial section transmission EM (ssTEM) or serial block-face scanning EM (SBEM) offers higher throughput and cost-effectiveness compared to FIB-SEM but cannot achieve the required axial resolution [1]. Image super-resolution (SR) is a computational approach that can increase the axial resolution to match lateral resolutions, enabling the reconstruction of isotropic 3DEM from anisotropic acquisitions.
Traditional SR approaches rely on interpolation methods, which can increase axial resolution. However, these methods have limitations in recovering fine missing details in low-resolution (LR) axial planes \((xz/yz)\). To overcome these limitations, learning-based methods have been proposed that leverage prior knowledge about the latent data to the interpolation. One such method is sparse representation over learned dictionaries, which has been used in various SR applications [2, 3]. However, since dictionaries are learned from small image patches, they may not reconstruct high-quality EM images with large field-of-view. Authors in [4] proposed a dictionary-learning-based approach to reconstruct isotropic 3DEM by combining anisotropic 3DEM with sparse tomographic views of the same sample acquired at a finer axial resolution. While this approach offered a promising solution for isotropic reconstruction of 3DEM, it relies on the availability of both anisotropic and sparse tomographic views, which may not always be feasible.
Deep learning (DL) has emerged as a promising approach for SR in computer vision [7], medical [8], and biomedical [9] applications. DL-based methods follow an end-to-end learning procedure, enabling them to effectively learn the mappings from LR to high-resolution (HR) spaces when abundant LR and HR training datasets are available. The DL-based approach for isotropic 3DEM reconstruction from the anisotropic acquisition was introduced in [10], in which the authors adopted a 3D convolutional neural network (CNN) architecture, then trained it using pairs of down-sampled isotropic 3DEM (synthetic anisotropic) and isotropic 3DEM acquired from FIB-SEM and tested on images obtained from the same technology. However, this approach has some limitations. Importantly, it requires the availability of isotropic 3DEM images at the desired resolution, which is often not feasible, especially in ssTEM and SBEM techniques. Additionally, when the network is fed with anisotropic 3DEM images acquired from a different technology, severe performance drops and artifacts may occur due to the domain gap between EM imaging techniques.
Self-supervised super-resolution learning is a powerful technique that can eliminate the need for training datasets and address the domain gap between training and test datasets. It involves training super-resolution algorithms solely on the given LR image, using synthetically generated LR-HR training pairs derived from the LR image itself. Authors in [11] introduced the concept of self-supervised super-resolution learning, where they harnessed the internal recurrence of information inside a given LR natural image across different resolution scales to generate synthetic pairs of LR and HR image datasets. When the network is trained, the given LR image is fed to the network to produce the corresponding HR image. This approach has been employed in the biomedical [12] and medical [13] domains to produce 3D isotropic images from anisotropic acquisitions, focusing on optical fluorescence microscopy and magnetic resonance imaging, respectively.
Motivated by the remarkable self-similarity observed in ultra-structures within brain 3DEM datasets, we present an efficient self-supervised super-resolution framework specifically designed to transform anisotropic 3DEM data into isotropic 3DEM, named A2I-3DEM. The key contributions of our work are as follows:
* We propose a framework for reconstructing isotropic 3DEM data from anisotropic acquisition while mitigating the inherent noise-like artifacts present in electron microscopy.
* We introduce a novel DL architecture based on the vision transformer, which effectively captures multi-scale local and global image dependencies, helping in enhanced reconstruction.
* We employ an efficient training strategy by simulating the distortions commonly observed in 3DEM imaging.
## II Method
Let \(\mathbf{x}\in\mathbb{R}^{W\times W\times W}\) and \(\mathbf{y}\in\mathbb{R}^{W\times W\times C}\) denote respectively isotropic and anisotropic 3DEM, where \(\rho=W/C\) indicates the resolution ratio between isotropic and anisotropic acquisitions in the axial direction (\(z\)), i.e., super-resolution ratio. In this section, we introduce our ViT-empowered self-supervised super-resolution approach to reconstruct isotropic 3DEM, \(\mathbf{x}\), from the anisotropic acquisition \(\mathbf{y}\).
### _Self-Supervised Super-Resolution_
Self-similarity of ultra-structures between lateral and axial planes in 3DEM data, especially in the brain gray matter, allows for self-supervised learning upon the anisotropic 3DEM data, see Fig. 1. Leveraging such a structural self-similarity, we can synthesize training image pairs from the isotropic \(xy-\)lateral plane. To synthesize the training pairs, large patches that adequately represent the ultrastructural features of interest are extracted from the lateral plane \(P^{i}_{xy}\in\mathbb{R}^{M\times M}\). These patches are then subjected to various degradations such as noise, artifacts, distortions, and anisotropic undersampling with ratio \(1/\rho\) to generate corresponding synthesized axial patches \(P^{i}_{xz/yz}\in\mathbb{R}^{(M/\rho)\times M}\). The synthesized pairs \(\{(P^{i}_{xz/yz},P^{i}_{xy})\}_{i=1}^{N}\) are then used to train the network \(f_{\theta}(\cdot):\mathbb{R}^{(M/\rho)\times M}\rightarrow\mathbb{R}^{M\times M}\) parameterized with \(\theta\) to learn the mapping from axial to lateral planes. In practice, \(\rho\) may not always be an integer, which poses challenges to determining the mapping. To overcome this issue, we first employ interpolation to resize the anisotropic data to match the desired isotropic data size. This interpolated data is then utilized as the LR image. The network's parameters \(\theta\) are obtained by optimizing the following empirical loss:
\[\hat{\theta}=\arg\min_{\theta}\sum_{i=1}^{N}\mathcal{L}(f_{\theta}(P^{i}_{xz/ yz}),P^{i}_{xy}), \tag{1}\]
where \(\mathcal{L}\) is the loss function between network prediction \(f_{\theta}(P_{xz/yz})\) and ground truth \(P_{xy}\). The trained network \(f_{\theta}(\cdot)\) is then used to super-resolve the real axial planes to the desired resolution. Finally, by stacking the super-resolved axial planes in the perpendicular direction, the isotropic 3DEM is reconstructed. The proposed self-supervised super-resolution framework is illustrated in Fig.2.
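A minimal sketch of this pair-synthesis step is given below (NumPy/SciPy, with axes ordered \((z,y,x)\)); the blur and noise levels are placeholders and \(\rho\) is taken as an integer dividing \(M\) for simplicity, whereas in practice the interpolation step above handles non-integer ratios.

```python
import numpy as np
from scipy import ndimage

def synthesize_pair(volume: np.ndarray, z: int, top: int, left: int,
                    M: int = 128, rho: int = 4, noise_sigma: float = 0.02):
    """Extract an isotropic lateral (xy) patch and synthesize its degraded,
    anisotropically undersampled counterpart; returns (network input, ground truth)."""
    hr = volume[z, top:top + M, left:left + M].astype(np.float32)   # P_xy, high-resolution target
    lr = ndimage.gaussian_filter(hr, sigma=(rho / 2.0, 0.0))        # blur along the undersampled axis
    lr = ndimage.zoom(lr, (1.0 / rho, 1.0), order=1)                # (M/rho, M) synthetic axial patch
    lr = lr + np.random.normal(0.0, noise_sigma, lr.shape)          # simulated imaging noise
    lr_up = ndimage.zoom(lr, (float(rho), 1.0), order=3)            # interpolate back to (M, M) as LR input
    return lr_up, hr
```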
### _Network Architecture_
#### Ii-B1 Overall Pipeline
The proposed network architecture is a hierarchical U-shaped design of the encoder-decoder equipped with ViT blocks, as illustrated in Fig.3. The input is a low-resolution axial plane image, \(\mathbf{I}\in\mathbb{R}^{1\times H\times W}\), which is first
Fig. 1: Ultra-structural self-similarity in 3DEM datasets from the rat brain. A) Gray matter [5] demonstrates the ultra-structural self-similarity across a wide range of sizes, while B) white matter [6] predominantly exhibits this self-similarity in smaller structures.
fed through convolutional layers to extract low-level features, \(\mathbf{X_{0}}\in\mathbb{R}^{C\times H\times W}\), where \(C\), \(H\), and \(W\) respectively indicate the number of channels, height, and width. Afterward, the feature map is passed through a symmetric encoder-decoder with \(K\) levels. Starting from the first level, the encoder hierarchically reduces the spatial resolution (\(H\times W\)) while increasing the channel size, leading to the bottleneck feature map \(\mathbf{F}_{\ell}\in\mathbb{R}^{2^{K-1}C\times\frac{H}{2^{K-1}}\times\frac{W}{2^{K-1}}}\). The feature maps from the bottleneck and encoders are then passed to the decoders to progressively produce the high-resolution representation. Finally, the low-level features are added to the output from the last decoder and fed to the feature projection block, producing the super-resolved image.
#### Iii-B2 Vision Transformer
The ViTs partition an image into a sequence of small patches, i.e., local windows, and learn relationships between them. By learning these relationships, the ViT can learn a wide range of image dependencies, which is crucial for achieving high performance in low-level vision tasks like image super-resolution. To capture both global and local image dependencies while keeping computational costs low, we employ the window-based multi-head attention (W-MSA) approach [14, 15]. The extracted attention maps using W-MSA are then passed through the novel gating mechanism, called the gated locally-enhanced feed-forward network (GLEN), to enhance the important features while suppressing the less important ones. These W-MSA and GLEN are embedded into a ViT block illustrated in Fig.3, and the corresponding computation is as follows:
\[\begin{split}\mathbf{X}^{\prime}&=\mathbf{W}\text{- MSA}(\mathbf{LN}(\mathbf{X})),\\ \mathbf{X}^{\prime\prime}&=\mathbf{GLEN}(\mathbf{LN} (\mathbf{X}^{\prime}))+\mathbf{X}^{\prime},\end{split} \tag{2}\]
where, \(\mathbf{LN}\) is layer normalization and \(\mathbf{X}\) is the input feature map.
**W-MSA.** The input feature map \(\mathbf{X}\in\mathbb{R}^{C\times H\times W}\) is first partitioned into \(N=HW/M^{2}\) non-overlapping \(M\times M\) local windows, leading to the local feature maps \(\mathbf{X}^{i}\in\mathbb{R}^{M^{2}\times C}\). The standard self-attention mechanism is then applied to each local feature map. With \(k\) heads, each of dimension \(d_{k}=C/k\), the W-MSA is obtained by concatenating the attention heads \(\mathbf{\hat{X}}_{k}=\{\mathbf{Y}_{k}^{i}\}_{i=1}^{N}\), where \(\mathbf{Y}_{k}^{i}\) is the \(k\)-th head attention for the \(i\)-th local window, calculated as below:
\[\mathbf{Y}_{k}^{i}=\mathbf{Attention}(\mathbf{X}^{i}\mathbf{W}_{k}^{Q},\mathbf{ X}^{i}\mathbf{W}_{k}^{K},\mathbf{X}^{i}\mathbf{W}_{k}^{V}),i=1,\dots,N, \tag{3}\]
where \(\mathbf{W}_{k}^{Q}\), \(\mathbf{W}_{k}^{K}\), \(\mathbf{W}_{k}^{V}\in\mathbb{R}^{C\times d_{k}}\) are projection matrices of queries (\(\mathbf{Q}\)), keys (\(\mathbf{K}\)), and values (\(\mathbf{V}\)) for the \(k\)-th head, respectively. The attention is obtained as follows:
\[\mathbf{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V})=\mathbf{SoftMax}(\frac{ \mathbf{Q}\mathbf{K}^{T}}{\sqrt{d_{k}}}+\mathbf{B})\mathbf{V}, \tag{4}\]
where \(\mathbf{B}\) is the relative position bias [16].
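To make Eqs. (3)-(4) concrete, a single-head NumPy sketch of the window-partitioned attention is given below (the relative position bias \(\mathbf{B}\) and the multi-head concatenation are omitted for brevity; this is an illustration, not the PyTorch layer used in the network).

```python
import numpy as np

def window_attention(x: np.ndarray, Wq: np.ndarray, Wk: np.ndarray, Wv: np.ndarray,
                     M: int = 8) -> np.ndarray:
    """Single-head W-MSA over non-overlapping M x M windows of a (H, W, C) feature map.
    H and W are assumed to be multiples of M."""
    H, W, C = x.shape
    d = Wq.shape[1]
    out = np.zeros((H, W, d))
    for r in range(0, H, M):
        for c in range(0, W, M):
            win = x[r:r + M, c:c + M].reshape(M * M, C)     # one local window X^i
            q, k, v = win @ Wq, win @ Wk, win @ Wv          # Eq. (3) projections
            logits = q @ k.T / np.sqrt(d)                   # Eq. (4), bias B omitted
            attn = np.exp(logits - logits.max(axis=1, keepdims=True))
            attn /= attn.sum(axis=1, keepdims=True)         # SoftMax
            out[r:r + M, c:c + M] = (attn @ v).reshape(M, M, d)
    return out
```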
Fig. 2: The workflow of the proposed self-supervised super-resolution framework for the isotropic reconstruction of 3DEM from the anisotropic acquisition. A) Input 3DEM data is anisotropic, with high resolution in the lateral \((x,y)\) directions and inferior resolution in the axial direction (\(z\)). B) Training pairs are synthesized from the anisotropic 3DEM data. The isotropic \(xy-\)lateral plane undergoes under-sampling and distortions to synthesize the \(xz-\) and \(yz-\)anisotropic axial planes. C) The proposed network is trained using synthesized training pairs, where the interpolated synthesized axial plane is employed as LR input, while the isotropic lateral plane is regarded as GT. D) The trained network sequentially takes each axial plane as input, and the resultant outputs are stacked together to obtain isotropic 3DEM, involving two steps: Initially, 3D interpolation is employed to resize the anisotropic 3DEM data, aligning it with the size of the desire 3D isotropic data. Subsequently, the trained network is consecutively fed with each slice from the interpolated data’s axial plane, and the resultant outputs are stacked together to generate isotropic 3DEM with an improved resolution in the axial direction. E) The output is an isotropic 3DEM with the improved resolution ratio \(\rho\) in the axial direction.
**GLEN.** This block processes attention maps through two components: a depth-wise convolution, which learns contextual image dependencies required for SR, and a gating mechanism, which highlights informative features while suppressing non-informative ones. As shown in Fig.3, the gating mechanism is implemented as the element-wise product of two parallel paths of linear transformation layers.
#### Ii-A3 Loss Function
To optimize the network's parameters, we utilize the \(\mathcal{L}_{\ell_{1}}=\frac{1}{N}\sum_{i=1}^{N}\left\|x_{i}-\hat{x}_{i}\right\|_{1}\) loss and the projected distribution loss (PDL) [17], which respectively penalize pixel-value and distribution mismatches between the restored image \(\hat{x}\) and the ground truth \(x\), ensuring both pixel-level accuracy and distribution-level fidelity. The total loss is given by:
\[\mathcal{L}_{Total}=\mathcal{L}_{\ell 1}+\alpha\mathcal{L}_{PDL}, \tag{5}\]
where \(\alpha\) is a hyperparameter governing the trade-off between loss functions, which was empirically set to \(0.01\). For optimization, we employed the Adam algorithm [18] with an initial learning rate of \(10^{-4}\). The implementation was done using PyTorch framework.
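A compact PyTorch sketch of this objective is shown below. The distribution term is a deliberately simplified stand-in for the PDL of [17] (which compares sorted one-dimensional projections of deep features); here only sorted pixel intensities are compared, so it should be read as an illustration of the idea rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def distribution_loss(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Simplified distribution-matching term: L1 between sorted pixel intensities."""
    ps = torch.sort(pred.reshape(pred.shape[0], -1), dim=1).values
    gs = torch.sort(gt.reshape(gt.shape[0], -1), dim=1).values
    return F.l1_loss(ps, gs)

def total_loss(pred: torch.Tensor, gt: torch.Tensor, alpha: float = 0.01) -> torch.Tensor:
    """Eq. (5): pixel-wise L1 plus the weighted distribution term (alpha = 0.01)."""
    return F.l1_loss(pred, gt) + alpha * distribution_loss(pred, gt)

# Training uses Adam with the learning rate stated in the text:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```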
## III Experiments and Results
### _Datasets_
#### Iii-A1 Synthetic Data
We synthesized an anisotropic 3DEM dataset by under-sampling an isotropic FIB-SEM dataset [19]. In the first step, to reduce noise and artifacts in the data, we isotropically downscaled the original data (voxel resolution \(5\times 5\times 5\)\(nm^{3}\), image size \(1530\times 1530\times 1053\)) by a factor of three, resulting in a voxel resolution of \(15\times 15\times 15\)\(nm^{3}\). Subsequently, we applied anisotropic downsampling to achieve a voxel resolution of \(15\times 15\times 45\)\(nm^{3}\). These synthetic pairs of anisotropic and isotropic 3DEM datasets were utilized in our experiments.
#### Iii-A2 Real Data
We used two anisotropic 3DEM datasets acquired from rat brains through the SBEM technique. The first dataset was acquired from the gray matter in the visual cortex [5] with the size of \(1024\times 1024\times 540\), while the second was acquired from the white matter at the corpus callosum [6], with the size of \(1024\times 1024\times 490\). Both datasets had a voxel resolution of \(15\times 15\times 50\)\(nm^{3}\).
### _Results_
We compared the proposed super-resolution method, A2I-3DEM, with several established techniques, including the standard cubic interpolation approach as well as two CNN-based methods: SRMD [20] and PSSR [9]. Additionally, we considered a transformer-based method, SwinIR [15]. For synthetic data, we utilized PSNR and SSIM [21] for quantitative assessments and visually compared the super-resolved
Fig. 3: The proposed U-shaped architecture based on the vision transformer. Training and testing are illustrated in the upper part of the figure, marked respectively in red and green. The bottom part of the figure visualizes the component of the proposed architecture.
volumes with the reference. For real data, lacking a reference, we visually compared the results with the Cubic interpolated data, the initial point for all competitors, to assess resolution enhancement and consistency of details.
For the synthetic dataset, where we have the reference, a visual comparison with competitors is drawn in Fig. 4, and corresponding quantitative results are tabulated in Table I. In Fig. 4, orange restricted areas show that cubic and SRMD led to severely blurred results. Among other methods, A2I-3DEM and SwinIR could produce images with better contrast and distinguishable membranes. Notably, as pointed out by the arrows, A2I-3DEM outperforms SwinIR by producing outputs with reduced blurriness. The superiority of A2I-3DEM is in agreement with the PSNR value reported in Table I. However, SSIM values contradict the visual outcomes, as the cubic interpolation method appears to outperform all other competitors according to SSIM. This discrepancy calls for an alternative image quality assessment metric.
Visual comparison of the first real dataset, pertaining to brain gray matter, is presented in Fig. 5. Consistent with expectations, DL-based methods demonstrate enhanced detail compared to cubic interpolation. Zooming in on specific regions in Fig. 5 (B-E), artifacts such as black point artifacts in white areas or white point artifacts in black areas are evident in the results of SRMD, PSSR, and SwinIR. In contrast, A2I-3DEM not only avoids these artifacts but also successfully reduces noise compared to the other methods.
Fig. 4: Visual comparison of isotropic 3DEM reconstruction results using various methods on the synthetic dataset: \(xz-\)axial plane perspective.
Fig. 5: Visual comparison of isotropic 3DEM reconstruction results using various methods on the real dataset from gray matter: \(xz-\)axial plane perspective.
A subset of visual results from the second real dataset, related to brain white matter, is depicted in Fig. 6. These results highlight the success of our proposed self-supervised method in enhancing the resolution of the given LR image while effectively mitigating noise.
## IV Conclusion
This paper introduced a deep-learning-based self-supervised super-resolution framework to overcome the challenge of acquiring isotropic 3DEM. The framework's ability to generate training datasets directly from the provided anisotropic 3DEM data makes it a practical preprocessing tool for downstream visualization and processing tasks. The incorporation of simulated distortions within the efficient training strategy not only improved the model's generalizability but also enabled the network to learn to mitigate noise that exists in the given LR EM image. Furthermore, the proposed U-shaped architecture, equipped with ViT blocks, effectively captures multi-scale local and global image dependencies, leading to enhanced reconstruction performance. Experimental evaluations conducted on 3DEM datasets of brain tissue demonstrated the network's proficiency in recovering fine details while effectively mitigating noise.
## Acknowledgment
We thank CVLab at Ecole Polytechnique Federale de Lausanne for sharing their 3DEM dataset, Electron Microscopy Unit of the Institute of Biotechnology at University of Helsinki for rat datasets, and the Bioinformatics Center at University of Eastern Finland, for providing computational resources.
|
2309.17129 | Micromagnetics of ferromagnetic/antiferromagnetic nanocomposite
materials. Part II: Mesoscopic modeling | In the second part of this publication, we present simulation results for two
three-dimensional models of Heusler-type alloys obtained by the mesoscopic
micromagnetic approach. In the first model, we simulate the magnetization
reversal of a single ferromagnetic (FM) inclusion within a monocrystalline
antiferromagnetic (AFM) matrix, revealing the evolution of the complex
magnetization distribution within this inclusion when the external field is
changed. The main result of this ``monocrystalline'' model is the absence of
any hysteretic behavior by the magnetization reversal of the FM inclusion.
Hence, this model is unable to reproduce the basic experimental result for the
corresponding nanocomposite -- hysteresis in the magnetization reversal of FM
inclusions with a vertical shift of the corresponding loops. To explain this
latter feature, in the second model we introduce a polycrystalline AFM matrix,
with exchange interactions between AFM crystallites and between the FM
inclusion and these crystallites. We show that within this model we can not
only reproduce the hysteretic character of the remagnetization process, but
also achieve a semi-quantitative agreement with the experimentally observed
hysteresis loop assuming that the concentration of FM inclusions strongly
fluctuates. These findings demonstrate the reliability of our enhanced
micromagnetic model and set the basis for its applications in future studies of
Heusler alloys and FM/AFM nanocomposites. | Sergey Erokhin, Dmitry Berkov, Andreas Michels | 2023-09-29T10:48:57Z | http://arxiv.org/abs/2309.17129v1 | # Micromagnetics of ferromagnetic/antiferromagnetic nanocomposite materials.
###### Abstract
In the second part of this publication, we present simulation results for two three-dimensional models of Heusler-type alloys obtained by the mesoscopic micromagnetic approach. In the first model, we simulate the magnetization reversal of a single ferromagnetic (FM) inclusion within a _monocrystalline_ antiferromagnetic (AFM) matrix, revealing the evolution of the complex magnetization distribution within this inclusion when the external field is changed. The main result of this "monocrystalline" model is the absence of any hysteretic behavior by the magnetization reversal of the FM inclusion. Hence, this model is unable to reproduce the basic experimental result for the corresponding nanocomposite--hysteresis in the magnetization reversal of FM inclusions with a vertical shift of the corresponding loops. To explain this latter feature, in the second model we introduce a _polycrystalline_ AFM matrix, with exchange interactions between AFM crystallites and between the FM inclusion and these crystallites. We show that within this model we can not only reproduce the hysteretic character of the remagnetization process, but also achieve a semi-quantitative agreement with the experimentally observed hysteresis loop assuming that the concentration of FM inclusions strongly fluctuates. These findings demonstrate the reliability of our enhanced micromagnetic model and set the basis for its applications in future studies of Heusler alloys and FM/AFM nanocomposites.
micromagnetics, Heusler alloys, magnetic nanocomposites, antiferromagnets, neutron scattering pacs: 05.40.-a
## I Introduction
In the previous paper [1], starting with the atomistic modeling of quasi one-dimensional (1D) systems, we have developed a novel mesoscopic micromagnetic approach for simulating materials composed of ferromagnetic (FM) inclusions in an antiferromagnetic (AFM) matrix. The need for this development is based on the discovery of strong ferromagnetism of Ni\({}_{2}\)MnIn Heusler-type precipitates that are embedded in an AFM NiMn matrix [2; 3; 4]. The sizes of the FM inclusions are in the range from 5 to 50 nm and the magnetization curve exhibits a number of interesting features: a vertical shift of the extracted hysteresis loop of the FM precipitates suggests a strong exchange coupling to the AFM matrix and the shape of the loop, especially its abrupt jump near zero field followed by a smooth magnetization change at much higher fields, suggests that there exist at least two different subsystems of FM inclusions.
In the following, we present a mesoscopic micromagnetic analysis of the above described Heusler system with the aim of obtaining a detailed and quantitative understanding of its remagnetization processes. We recall that in the first part [1] we presented atomistic and mesoscopic approaches to the micromagnetic modeling of Heusler alloys, providing all necessary prerequisites for 3D mesoscopic calculations. In the present (second) part, we discuss simulation results of the full 3D models and provide a quantitative comparison of these results to experimental data.
More specifically, in Sec. II, we simulate a single FM inclusion in a monocrystalline AFM matrix. This model, which does not reproduce the experimentally observed hysteresis, is then extended in Sec. III to include FM inclusions in a polycrystalline AFM matrix. Based on these results, we provide in Sec. IV a quantitative comparison between the experimentally observed magnetization loop of Heusler-type precipitates and our simulation results, demonstrating the validity of our model.
## II 3D mesoscopic model: a single FM inclusion in a monocrystalline AFM matrix
The results obtained using the quasi 1D model presented in the first part [1] provide the framework for the next step of our study--application of our mesoscopic model to a 3D system. The need for mesoscopic simulations is based on two factors: (i) the typical grain sizes in a nanocomposite (up to tens of nanometers) result in system sizes that are too large to be simulated using the atomistic approach, and, as we demonstrate further, (ii) the collective nature of the magnetization-reversal process of precipitates in the polycrystalline AFM also requires to simulate systems with large sizes. Our simulations of the magnetization reversal in 3D FM/AFM structures rely on a polyhedron-based finite element micromagnetic algorithm, which we have designed specifically for the modeling of magnetic nanocomposites. The detailed description of this approach can be found in Refs. [5; 6].
The primary structure of interest in this section is a spherical FM inclusion surrounded by a _monocrystalline_ AFM matrix (see Fig. 1, middle panel). In these simulations, a cubic modeling volume with a side length of \(150\,\mathrm{nm}\) was discretized into approximately \(1.5\times 10^{5}\) mesh elements, each about \(3\,\mathrm{nm}\) in size. This discretization allows us to study the details of the magnetization distribution of relatively small particles. Parameters of magnetic materials for both phases were presented in the first part [1]. We conducted simulations by varying the particle diameter \(d_{\mathrm{FM}}\) of the FM inclusion between 10 and \(50\,\mathrm{nm}\) and by adjusting the exchange coupling on the FM/AFM interphase boundary by varying the corresponding exchange weakening coefficient \(\kappa\) between 0 (complete decoupling) and 1 (perfect exchange coupling).
The absence of magnetodipolar fields produced by the AFM phase (on a mesoscopic scale) and the negligibly small influence of the magnetodipolar field produced by the FM crystallites on the AFM matrix (in contrast to the exchange interaction) prompt us to propose a hybrid environment for micromagnetic simulations of this system. We simulate the AFM phase utilizing _periodic_ boundary conditions to account for the only nonlocal interaction relevant for a mesoscopic AFM--the exchange interaction. By contrast, the long-range magnetodipolar interaction inside the FM crystallite is calculated using _open_ boundary conditions. In this procedure the magnetodipolar interaction between the FM inclusions is neglected, which can be justified by its minor role compared to the magnetodipolar interaction of mesh elements inside the same FM inclusion. Hence, we have to compute the magnetodipolar field only within the FM inclusion, which largely reduces the computation time for the most time-consuming part of any micromagnetic simulation--the calculation of the magnetodipolar energy (the FM inclusion occupies a relatively small volume fraction of our system). This acceleration allows us to significantly extend the parameter set for our study.
Simulation results for this model presented in Fig. 1 show magnetization-reversal curves for three FM inclusion diameters \(d_{\mathrm{FM}}=10\), \(30\), and \(50\,\mathrm{nm}\) and various exchange weakening coefficients \(\kappa\). As earlier demonstrated with the quasi 1D system, a large exchange coupling at the FM/AFM interface results in a high coercivity of the FM phase, reaching \(100\,\mathrm{kOe}\) for a 10-nm-sized spherical inclusion. The exchange interaction at the FM/AFM boundary is the only mechanism connecting the phases in this one-particle model, so that when the coupling is reduced, the coercivity decreases dramatically, becoming negligible as expected for a soft FM material.
Details of the remagnetization process in such a system are demonstrated in Fig. 2 for the example of a 50-nm-sized particle with a perfect exchange coupling (\(\kappa=1\)) with the AFM phase. Figure 2(a) shows the magnetization-reversal curve of the system, and Fig. 2(b) displays the \(z\) component of the magnetic moments (i.e., the component parallel to the external field direction) as a function of the distance \(d\) from the center of the FM particle in several external fields. For the AFM phase, only the spin-direction components of one sublattice are displayed. A notable feature of this process is a relatively swift magnetization rotation of the central part of the FM particle, attributable to the weak anisotropy of the FM material. At \(H_{z}=-20\,\mathrm{kOe}\), the majority of magnetic moments are already reversed, while the remaining ones, particularly those located in the vicinity of the interface region, form a 3D "shell" around the reversed inclusion kernel. The field evolution of the magnetization distribution within the FM inclusion is represented in more detail in Fig. 2(c), which shows a vertical cut through the particle. Here, the formation of a shell region (where the magnetic moments are strongly coupled to the AFM matrix) and the rotation of the central part can be clearly observed.
Strong exchange coupling at the interface also results in the deviation of the AFM moments from their initial
Figure 1: Simulated magnetization-reversal curves of a spherical FM inclusion (with varying size \(d_{\mathrm{FM}}\)) as a function of the exchange coupling \(\kappa\) with the monocrystalline AFM matrix (see insets). Curves of the same color represent results for different finite-element polyhedron discretizations of the system.
direction, as depicted in Fig. 2(b). Therefore, the so-called AFM domain wall is formed in the matrix [Fig. 2(c) illustrates only the AFM layer closest to the FM inclusion]. We note that the experimental observation of AFM domain structures presents a significant experimental difficulty due to the absence of magnetodipolar fields of this material at the mesoscopic scale. Thus, for AFM materials containing also FM particles, there emerges an opportunity to draw conclusions about the state of the magnetic moments of the AFM based on the magnetization distribution in the FM inclusions. Such spatial variations in the orientation of the AFM can be experimentally studied by means of the small-angle neutron scattering technique [7; 8; 9].
The key feature of these simulation results is that _hysteretic_ behavior does not occur in this model, irrespective of the size of the FM inclusion or the degree of exchange weakening at the FM/AFM boundary. As it can be seen both in Fig. 1 and 2(a), the magnetization of the FM inclusion follows exactly the same path for an external field (\(H_{z}\)) sweeping from \(+\infty\) to \(-\infty\) and from \(-\infty\) to \(+\infty\), indicating that the remagnetization process is fully reversible. This feature is due to the inability of a relatively small FM inclusion to irreversibly switch the monocrystalline AFM matrix, which has a relatively high magnetocrystalline anisotropy (see [1]). We emphasize that neither our relatively simple model nor any more sophisticated core-shell model of a FM precipitate coupled to the AFM matrix could explain the hysteretic behavior and the nearly perfect symmetry of experimental hysteresis loops observed in [3]. The reason for this discrepancy is the fact (see above) that in such models the orientation of the AFM matrix remains nearly the same in negative and positive external fields, whereas the orientations of the magnetization within the FM inclusion are nearly opposite. This feature renders the energy minima of the complete system in high positive and high negative fields nonequivalent. Therefore, there is a need for the further improvement of our model in order to explain the experimentally observed hysteresis.
## III Extended model: FM inclusions in a polycrystalline AFM matrix
The reason for the absence of any hysteretic behavior for a FM inclusion embedded into a monocrystalline AFM matrix is the inability of such an inclusion to reverse the whole AFM matrix, even when the surface of
Figure 2: (a) Magnetization-reversal curve of a 50-nm-sized spherical FM inclusion that is perfectly exchange coupled (\(\kappa=1.0\)) to a monocrystalline AFM matrix. (b) Corresponding \(z\) components of magnetic moments (for one sublattice in the case of the AFM) as a function of the distance \(d\) from the center of the FM inclusion. (c) Field evolution of the magnetization distribution in the FM inclusion (embedded in the AFM matrix). The outermost red and blue shells of arrows belong to the matrix.
the FM is perfectly exchange-coupled to the surrounding AFM. This inability is due to two system features: (i) the small concentration of FM inclusions (and only the FM fraction of the system responds to the external field), and (ii) the very large anisotropy of the AFM material. The concentration of FM inclusions can be a subject of debate, especially when the distribution of these inclusions is strongly inhomogeneous. However, we have found that for any reasonable local concentration of the FM phase, these inclusions are not able to reverse the orientation of the entire _monocrystalline_ AFM matrix with the nominal anisotropy of NiMn (discussed in the first part [1]). Taking into account that a clear hysteresis is observed in experiment, we should look for a physical explanation of a much smaller effective anisotropy of the AFM.
In order to suggest a corresponding explanation, we recall that an analogous phenomenon is well known in the physics of ferromagnetism: the effective (volume-averaged) anisotropy \(K_{\rm eff}\) of a _polycrystalline_ ferromagnet is usually much smaller than the magnetic anisotropy of the same material in its _monocrystalline_ state. The reason for this behavior is explained quantitatively by the Herzer model [10]. The model takes into account that the anisotropy axes of the constituting grains in a polycrystal are usually randomly oriented, leading to random directions of the anisotropy field in each crystallite (grain). For exchange-coupled grains (the normal case for a high-quality FM material), the exchange interactions between them lead to the self-averaging of the anisotropy field, resulting in a strong decrease of the volume-averaged anisotropy constant. The effect is obviously stronger in materials with a smaller average grain size \(\langle d\rangle\). In fact, the effective anisotropy constant decreases rapidly with decreasing grain size, as \(K_{\rm eff}\sim\langle d\rangle^{6}\) [10].
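To make this scaling concrete, the following minimal sketch evaluates the standard self-consistent random-anisotropy (Herzer) estimate, \(K_{\rm eff}\approx K_{1}^{4}\langle d\rangle^{6}/A^{3}\), valid for grain sizes below the exchange length; the material constants below are illustrative placeholders and are not taken from this work.

```python
# Minimal sketch of the Herzer random-anisotropy scaling K_eff ~ K1^4 <d>^6 / A^3
# (valid for grain sizes below the exchange length); parameter values are
# illustrative placeholders, not material constants from this work.
import numpy as np

K1 = 5.0e4                      # local anisotropy constant [J/m^3] (placeholder)
A = 1.0e-11                     # exchange stiffness [J/m] (placeholder)
L_ex = np.sqrt(A / K1)          # exchange length, ~14 nm for these numbers

for d in (5e-9, 10e-9):         # average grain sizes below L_ex
    K_eff = K1**4 * d**6 / A**3
    print(f"<d> = {d*1e9:4.1f} nm : K_eff / K1 = {K_eff / K1:.3g}")
```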
Following this paradigm, we have assumed that the AFM matrix of the system studied in Ref. [2] is polycrystalline, and that the FM inclusions that are embedded between different AFM grains exhibit a random orientation of their anisotropy planes and anisotropy axes within these planes. We have implemented the corresponding model using a cubic simulation volume with a side length of 500 nm divided into \(\sim\)\(1.5\times 10^{5}\) crystallites, each with a size of \(\sim\)10 nm (for both phases). The crystallites possess a polyhedral shape, the simulation volume contains no porosity, and periodic boundary conditions are implemented. Details of the microstructure generation, the discrete realization of the energy contributions, and the energy-minimization procedure can be found in Refs. [5; 6; 11].
Magnetization reversal is simulated within the framework of the Stoner-Wohlfarth model, i.e., under the assumption of a uniform magnetization of individual crystallites. The exchange coupling between the crystallites is governed by site-dependent exchange weakening coefficients \(0\leq\kappa\leq 1\), similarly to the previous model. There are three kinds of couplings between the crystallites in such a two-phase system: (i) FM/AFM coupling with \(\kappa=\kappa_{\rm FM-AFM}\) (as in the previous model), (ii) coupling between different AFM crystallites with \(\kappa_{\rm AFM}\), and (iii) coupling between FM inclusions with \(\kappa_{\rm FM-FM}\). In all of the following simulations, we have set \(\kappa_{\rm FM-AFM}=\kappa_{\rm FM-FM}=\kappa_{\rm AFM}=1.0\), implying a strong exchange coupling between the corresponding phases. No magnetodipolar interaction between the crystallites is considered at the current stage.
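To illustrate the Stoner-Wohlfarth ingredient of the model (a single uniformly magnetized crystallite; the actual simulations couple \(\sim\)\(1.5\times 10^{5}\) such crystallites through the \(\kappa\) coefficients), the sketch below follows the metastable energy minimum of one grain while the field is swept; all parameter values are placeholders, not those of the studied materials.

```python
# Minimal Stoner-Wohlfarth sketch: one uniformly magnetized crystallite with a
# uniaxial easy axis at angle psi to the field (illustrative parameters only).
import numpy as np

K, Ms, mu0 = 4.0e5, 1.2e6, 4e-7 * np.pi   # anisotropy [J/m^3], Ms [A/m]
psi = np.deg2rad(45.0)                    # easy-axis angle w.r.t. the field

def dE(theta, H):
    # derivative of E(theta) = K sin^2(theta - psi) - mu0 Ms H cos(theta)
    return K * np.sin(2 * (theta - psi)) + mu0 * Ms * H * np.sin(theta)

def relax(theta, H, steps=4000, lr=2e-7):
    # damped relaxation into the nearest local minimum (follows the hysteretic branch)
    for _ in range(steps):
        theta -= lr * dE(theta, H)
    return theta

theta, loop = psi, []
for H in np.r_[np.linspace(8e5, -8e5, 200), np.linspace(-8e5, 8e5, 200)]:
    theta = relax(theta, H)
    loop.append((H, np.cos(theta)))       # reduced magnetization along the field
```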
According to measurements [3], the total volume fraction of the precipitates in the sample is about 0.24 %, which initially motivated us to develop the one-particle model described in Sec. II. However, recent experimental observations of this class of Heusler alloys [12] have revealed strong spatial variations in the volume fraction of the FM phase. This finding prompted us to study the dependence of the magnetization reversal in a nanocomposite system on the volume fraction \(c_{\rm FM}\) of the FM phase.
Figure 3 illustrates this dependence over the range of \(c_{\rm FM}\) between 2 and 50 %. As stated above, the exchange coupling between all the different crystallites is assumed to be strong. The magnetization reversal of a nanocomposite with a low FM volume fraction (\(c_{\rm FM}=2\) %) is essentially the same as for the single-particle model with the monocrystalline AFM matrix: the magnetization rotation in the system is entirely reversible (Fig. 3, leftmost panel) because such a low number of FM crystallites can only influence a small portion of AFM crystallites. This influence leads only to a slight deviation of the AFM magnetization curve from its maximum value. The situation changes qualitatively already at \(c_{\rm FM}=3\) %, where hystereses are observed for both FM and AFM phases, indicating the complete magnetization rotation of the nanocomposite at an external field of \(\sim\)\(-\)30 kOe. Further increase of the FM volume fraction results in a decreasing coercivity, owing to the intensified interaction between the phases, until the coercivity vanishes when the volume fractions of AFM and FM are equal.
The results shown in Fig. 3 demonstrate that the integration of spatial fluctuations of the density of FM inclusions and the polycrystallinity of the AFM matrix into our model has solved the problem of the missing hysteretic behavior, even for small FM crystallites with a uniform magnetization. Collective interactions within the polycrystalline FM/AFM system lead to magnetization hysteresis over a broad range of FM volume fractions. Similar simulations involving larger crystallites, which include the influence of the magnetodipolar interaction that becomes significant beyond a certain FM inclusion size [compare Fig. 2(c)], are beyond the scope of this paper and will be reported in a separate publication.
## IV Model with several fractions of FM inclusions: explanation of experimental hysteresis loops
The results of the previous section form the basis to explain the experimentally observed hysteresis loop for a Heusler-type alloy (Ni\({}_{2}\)MnIn precipitates embedded in
an AFM NiMn matrix [3], see the red curve in the inset of Fig. 4). This magnetization curve clearly exhibits the characteristics of a multiphase FM subsystem, i.e., a subsystem consisting of several distinct fractions of FM inclusions: (i) a vertical shift of the loop, (ii) a significant drop in the magnetization at zero field, and (iii) a broad hysteresis itself, which is typical for a system of FM particles. Here, we note that all of these features are present in the simulated magnetization-reversal curves of nanocomposites with various FM volume fractions (Fig. 3). Therefore, in order to show that all these features can be explained by our model, we have constructed a hysteresis curve from the already obtained results. An alternative way would be to conduct simulations by generating multiple systems with different spatial fluctuations of the FM inclusions density and then collect statistically significant characteristics of such ensembles. However, this approach would require much more computational efforts, leading to essentially the same result as described below.
An example of a hysteresis loop obtained from a superposition of the loops shown in Fig. 3 is displayed in Fig. 4. Here, we demonstrate a composite loop (FM phase, blue curve), which was obtained from simulation results for systems with \(c_{\mathrm{FM}}=2\), \(3\) and \(50\,\%\) and with the assigned weights of, respectively, \(0.1\!:\!0.5\!:\!0.4\). In this way, we have successfully modeled the experimental behavior of the FM phase under the assumption of a narrow distribution of FM grain sizes (all particles have a diameter of \(\sim\)10 nm) and a multimodal FM density distribution. This superposition not only allows us to explain all qualitative aspects of the experimental magnetization curve mentioned above, but also to semi-quantitatively replicate the almost horizontal plateau from \(-5\) T to \(0\) T and the gradual magnetization reversal up to the maximum available field of \(15\,\mathrm{T}\).
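For clarity, the construction of the composite loop is just a weighted sum of the three simulated reversal curves; the helper below is a schematic illustration (the individual curves themselves come from the simulations shown in Fig. 3).

```python
# Schematic construction of the composite loop of Fig. 4 as a weighted sum of
# the simulated reversal curves for c_FM = 2, 3 and 50 % (weights 0.1:0.5:0.4).
def composite_loop(H, m_2pct, m_3pct, m_50pct, weights=(0.1, 0.5, 0.4)):
    """Each m_* is a callable returning the reduced FM magnetization at field H."""
    w2, w3, w50 = weights
    return w2 * m_2pct(H) + w3 * m_3pct(H) + w50 * m_50pct(H)
```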
The fine details of the remagnetization process in such polycrystalline nanocomposites can be revealed by analyzing the evolution of the magnetization distribution. This evolution is shown in Fig. 5 for a system with \(\kappa_{\mathrm{AFM}}=1\) and \(c_{\mathrm{FM}}=3\,\%\). By relating the displayed im
Figure 3: Simulation results for a polycrystalline FM/AFM many-particle system. Shown are magnetization-reversal curves for various volume concentrations \(c_{\mathrm{FM}}\) of the FM phase (blue lines—FM phase; red dashed lines—one sublattice of the AFM phase). Both exchange couplings (FM/AFM and AFM/AFM) are perfect. The sizes of the FM inclusions and the AFM crystallites are both \(10\,\mathrm{nm}\).
Figure 4: Composite hysteresis loop (blue line—FM phase) combined from the corresponding dependencies of systems with \(c_{\mathrm{FM}}=2,3,50\,\%\) (see Fig. 3). The figure in the subpanel is adapted from Ref. [3] and shows the experimentally obtained magnetization loop of the FM phase in a Heusler-type alloy.
ages to the corresponding magnetization-reversal curve, we observe that both FM and AFM phases are fully magnetized at \(-10.5\) kOe, as can be seen from Fig. 3 for \(c_{\rm FM}=3\) % (in the case of the AFM phase, we mean the direction of the magnetic moments of one sublattice, with the moments of the other sublattice being antiparallel). At an intermediate field of \(-17.5\) kOe, a wide range of magnetization directions within the FM crystallites can be seen, while the state of the AFM matrix can be described by the onset of domain formation. A large negative field of \(-30\) kOe nearly completely rotates all FM inclusions, but a clear AFM domain structure is visible. These domains are still predominantly oriented in the initial direction, i.e., opposite to the negative external magnetic field. At the final stage, the magnetic moments of all crystallites are completely reversed.
## V Conclusion
We have implemented a micromagnetic simulation methodology that allows us to compute the magnetic response of a polycrystalline system containing ferromagnetic (FM) inclusions in an antiferromagnetic (AFM) matrix. Using this new mesoscopic simulation technique, the details of the remagnetization process in a system composed of FM crystallites embedded in an AFM matrix were revealed by simulating hysteresis curves as functions of the ferromagnetic grain size and the exchange weakening at the FM/AFM boundary. It was explicitly shown that a one-particle model employing a monocrystalline AFM matrix is incapable of explaining the experimental hysteresis results obtained on Heusler alloys. Only the inclusion of (i) the polycrystallinity of the AFM matrix and (ii) strong spatial fluctuations of the density of FM particles into the model explains all the qualitative features of experimental observations on these highly nontrivial systems.
## Acknowledgment
We would like to thank Nicolas Josten (University of Duisburg-Essen) for insightful discussions that greatly contributed to the research presented in this paper.
|
2309.11558 | Estimating major merger rates and spin parameters ab initio via the
clustering of critical events | We build a model to predict from first principles the properties of major
mergers. We predict these from the coalescence of peaks and saddle points in
the vicinity of a given larger peak, as one increases the smoothing scale in
the initial linear density field as a proxy for cosmic time. To refine our
results, we also ensure, using a suite of $\sim 400$ power-law Gaussian random
fields smoothed at $\sim 30$ different scales, that the relevant peaks and
saddles are topologically connected: they should belong to a persistent pair
before coalescence. Our model allows us to (a) compute the probability
distribution function of the satellite-merger separation in Lagrangian space:
they peak at three times the smoothing scale; (b) predict the distribution of
the number of mergers as a function of peak rarity: haloes typically undergo
two major mergers ($>$1:10) per decade of mass growth; (c) recover that the
typical spin brought by mergers: it is of the order of a few tens of percent. | Corentin Cadiou, Eric Pichon-Pharabod, Christophe Pichon, Dmitri Pogosyan | 2023-09-20T18:00:11Z | http://arxiv.org/abs/2309.11558v3 | # Estimating major merger rates and spin parameters ab initio via the clustering of critical events
###### Abstract
We build a model to predict from first principles the properties of major mergers. We predict these from the coalescence of peaks and saddle points in the vicinity of a given larger peak, as one increases the smoothing scale in the initial linear density field as a proxy for cosmic time. To refine our results, we also ensure, using a suite of \(\sim 400\) power-law Gaussian random fields smoothed at \(\sim 30\) different scales, that the relevant peaks and saddles are topologically connected: they should belong to a persistent pair before coalescence. Our model allows us to (a) compute the probability distribution function of the satellite-merger separation in Lagrangian space: it peaks at three times the smoothing scale; (b) predict the distribution of the number of mergers as a function of peak rarity: halos typically undergo two major mergers (\(>\)1:10) per decade of mass growth; (c) recover the typical spin brought by mergers: it is of the order of a few tens of per cent.
keywords: Cosmology: theory, large-scale structure of Universe
## 1 Introduction
On large scales, the galaxy distribution adopts a network-like structure, composed of walls, filaments and superclusters (Geller and Huchra, 1989; Sohn et al., 2023). This network is inherently tied to the cosmic microwave background, the relic of the density distribution in the primordial Universe. The non-uniformity of this initially quasi-Gaussian field evolved under the influence of gravity into the so-called cosmic web (Bond et al., 1996) we now observe. One can therefore hope to predict the evolution of the cosmic web by studying the topological properties of the initial density field. From its evolution, one should be able to predict the rate of mergers of dark halos and their geometry, and hence their contribution to halo spin.
The classical method to study mergers is to run cosmological simulations (e.g. Bertschinger, 1998; Vogelsberger et al., 2020), compute where haloes are located at each time increment and in that way construct their merger trees (e.g. Lacey and Cole, 1993; Moster et al., 2013).
The theory of merger trees for dark halos has a long-standing history starting from the original Press-Schechter theory (Press and Schechter, 1974), excursion set (Bardeen et al., 1986; Peacock and Heavens, 1990; Bond et al., 1991) and peak patch theory (Bond and Myers, 1996) or related formalisms (Manique and Salvador-Sole, 1995, 1996; Hanami, 2001; Monaco et al., 2002; Salvador-Sole et al., 2022). One of the prevailing theoretical ideas is to consider peaks of the initial density field in initial Lagrangian space, smoothed at scales related to halo masses, to be halo progenitors (Bardeen et al., 1986). One notable recent variation is the suggestion to use peaks of the 'energy' field as such progenitors (Musso and Sheth, 2021). The statistical properties of mergers can then be predicted analytically through extensions of the excursion set theories (Lacey and Cole, 1993; Neistein and Dekel, 2008), or, alternatively, they can be measured in peak-patch simulations (Stein et al., 2019). In the first approach, all the information is local, preventing us from computing the geometry of mergers. In the second approach, the geometry of mergers is accessible as a Monte Carlo average.
In this paper, we provide an alternative framework that specifically takes into account the geometry of mergers while remaining as analytical as possible. The goal is not accuracy, but rather to provide a simple framework giving access to information such as the statistics of the number of mergers or the spin brought by mergers.
The paper is organised as follows: in Section 2, we present our model together with analytical estimates of the number of direct mergers with a given halo. In Section 3, we extend our model to take into account the topology of the density field using persistence pairing via DisPerse. This allows us to draw mock merger trees and to predict the PDF of the spin parameter that each merger contributes as a function of rarity. Section 4 wraps up.
Appendix A introduces the relevant spectral parameters. Appendix B revisits event statistics while relying on the clustering properties of peaks and events computed from first principles. Appendix D recalls the critical event PDF and provides a fit to it. Appendix E discusses the evolution of peak rarity with Gaussian smoothing.
## 2 Analytical model of mergers per halo
### Model of halo mergers: cone of influence of a halo
The starting point of our approach is the peak picture (Bardeen et al., 1986). In this framework, peaks in the linear density field are the seeds for the formation of halos. We rely on the proposition that the overdensity \(\delta(R)\) of a given peak smoothed at scale \(R\) can be mapped to a formation redshift \(z\), using the spherical collapse model, \(\delta(R_{\rm TH})D(z)=\delta_{c}\), where \(\delta_{c}=1.686\) is the critical density, \(D(z)\) is the growing mode of perturbations and \(R_{\rm TH}\) is the top-hat scale. The mass of the halo formed at this time is inferred from the top-hat window scale, \(M=\bar{\rho}_{\rm n}\frac{4}{3}\pi R_{\rm TH}^{3}\).
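As a minimal illustration of this mapping (not part of the paper's machinery), the sketch below converts a peak's linear overdensity and top-hat radius into a formation redshift and a halo mass, assuming an Einstein-de Sitter growth factor \(D(z)=1/(1+z)\) and illustrative cosmological parameters.

```python
# Minimal sketch of the peak -> halo mapping of the spherical collapse model.
# Assumes an Einstein-de Sitter growth factor D(z) = 1/(1+z) and illustrative
# cosmological parameters; the text itself only requires delta(R_TH) D(z) = delta_c.
import numpy as np

delta_c = 1.686
rho_crit = 2.775e11            # critical density [h^2 Msun / Mpc^3]
Omega_m = 0.3                  # assumed matter density parameter
rho_m = Omega_m * rho_crit     # mean comoving matter density

def formation_redshift(delta_lin):
    # linear overdensity extrapolated to z = 0; collapse when delta_lin * D(z) = delta_c
    return delta_lin / delta_c - 1.0            # negative values mean future collapse

def halo_mass(R_TH):
    # mass enclosed by a top-hat filter of comoving radius R_TH [Mpc/h]
    return rho_m * 4.0 / 3.0 * np.pi * R_TH**3  # [Msun/h]

print(formation_redshift(3.0))   # z ~ 0.78
print(f"{halo_mass(1.0):.2e}")   # ~3.5e11 Msun/h for R_TH = 1 Mpc/h
```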
While the peak-patch theory provides useful insight into the origin of halos of a given mass at a given time, it becomes singular during mergers. Indeed, by construction, the merging halo disappears into the larger one together with its corresponding peak. Here, we instead rely on the analysis of the geometry of the initial linear density field in \(N\)+1D, where \(N\) is the dimension of the initial Lagrangian space and the extra dimension is the smoothing scale. At each smoothing scale, we can formally define critical points (peaks, saddles and minima) following Bardeen et al. (1986). As smoothing increases, some of these critical points eventually disappear when their corresponding halos (for peaks) merge together. This approach allows us to capture mergers in space-time from an analysis of the initial conditions in position-smoothing-scale space, employing the critical event theory (Hanami, 2001; Cadiou et al., 2020), where critical events of coalescence between peaks and saddle points serve as proxies for merger events. The process is sketched in Fig. 1 and illustrated in 1+1D in Fig. 2.
Let us track, with decreasing smoothing, the Lagrangian history of one peak first identified at a smoothing scale \(R_{0}\) and position \(x_{\rm pk}(R_{0})\). Critical event theory relies on the use of Gaussian filters, so that, to assign mass to halos, we need to match the Gaussian and Top-Hat smoothing scales, \(R\) and \(R_{\rm TH}\). The criterion of equal mass encompassed by the filter 1 gives \(R_{\rm TH}\approx 1.56R\). At \(R_{0}\) the peak describes a halo that has collected its mass from a spherical cross-section of \(N\)+1D space of volume \(\propto R_{0}^{3}\); we can call this sphere a Lagrangian patch of the halo. At smaller \(R\), the Lagrangian position of a peak changed to \(x_{\rm pk}(R)\) and the volume of its Lagrangian patch decreased to \(\propto R^{3}\). The history of the peak in the \(N\)+1D space, including mass accumulation, now consists of its trajectory \(x_{\rm pk}(R)\) and a cone of cross-section \(\sim R^{3}\) around it as shown in Fig. 3 for a 2+1D example.
Footnote 1: Different criteria modify the relation between Gaussian and Top-Hat filters, for instance matching the variance of the perturbations leads to \(R_{\rm TH}\approx 2.1R\). Here, we instead rely on matching masses.
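The \(\approx 1.56\) factor follows from equating the volume integral of the Gaussian filter, \((2\pi)^{3/2}R^{3}\), with the top-hat volume \((4\pi/3)R_{\rm TH}^{3}\). The short check below verifies this and also evaluates the variance-matching alternative mentioned in the footnote for an \(n_{\rm s}=-2\) power law (the \(\approx 2.1\) value depends in general on the spectral index).

```python
# Numerical check of the Gaussian <-> top-hat radius matching quoted in the text.
import numpy as np
from scipy.integrate import quad

# (i) equal enclosed mass: (4 pi / 3) R_TH^3 = (2 pi)^{3/2} R^3
print((3.0 / (4.0 * np.pi) * (2.0 * np.pi)**1.5)**(1.0 / 3.0))   # ~1.56

# (ii) equal variance of the smoothed field for a P(k) ~ k^-2 power law:
# sigma^2_Gauss(R) = sqrt(pi)/(2R) and sigma^2_TH(R_TH) = C/R_TH with C below,
# so matching them gives R_TH / R = 2 C / sqrt(pi).
W_th = lambda x: 3.0 * (np.sin(x) - x * np.cos(x)) / x**3        # top-hat window
C = quad(lambda x: W_th(x)**2, 1e-6, 200.0, limit=300)[0]        # = 3 pi / 5
print(2.0 * C / np.sqrt(np.pi))                                  # ~2.1
```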
A critical event marks the end of a trajectory of a peak that disappeared when its scale reached \(R_{\rm ce}\) and is absorbed into a surviving peak. Counting all critical events within a \(N+1\)D straight cylinder with spatial cross-section of radius \(R_{0}\) around the final surviving peak position \(x_{\rm pk}(R_{0})\) will give the number of all mergers that ever happened within this peak Lagrangian patch. For instance, if two small halos have merged together before merging with a larger halo,
Figure 1: Typical density field around a critical event in 1D. From left to right: 1. We see two distinct objects (in red), separated by a minimum (in blue) in the density field. 2. The critical event occurs (in green), but the density profile still shows two different objects (highlighted with dashed lines). 3. The merger has completed and there remains only one object (in red).
Figure 3: Visualisation of the action of gaussian smoothing on the critical events of a 2D field. The vertical axis is the smoothing scale, increasing upwards. The horizontal cross-section is a 2D space. The various vertical lines represent the tracks of extrema positions (maxima in pink, minima in blue, saddles in green) as one changes the field smoothing. The red and blue squares represent the corresponding critical events (in red are points of peak-saddle and in blue that of minima-saddle coalescence). The gray cones show the volume within some fraction of the smoothing scale (here chosen arbitrarily to be 1.2 times the smoothing scale) around each maxima track, which contain all the past physical history of a given peak. This paper aims to characterise these cones and the properties of the critical events within them so as to compute major merger rates as a function of final halo mass.
Figure 2: Representation of the smoothing of a 1D Gaussian random field representing density fluctuations in the primordial universe. Red lines are maxima of the field, blue lines are the minima. Green points are critical events, where a minimum/maximum pair meets and annihilates itself. By counting how many critical events there are in the vicinity of a peak line, one can count the number of subhalos merging into the corresponding halo.
that would count as two events. However, we are interested in counting only the last _direct_ merger event that brought the combined mass of the two small halos into the main one. Physical intuition tells us that to count only those, we need to count critical events within the history cone of the peak mass accumulation. Indeed, the Lagrangian patch of the peak grows in size by absorbing the layers along its boundary, and if that layer contains a critical event, it is a direct merger.
In 1+1D, as illustrated in Fig. 4, direct mergers correspond to critical events (green points) that are not separated by any intervening peak (red) lines from the main peak. In \((N+1)\)D, this is generalized by only counting a critical event as a direct merger if, at fixed smoothing \(R=R_{\rm ce}\), it is connected to the main peak in \(N\)D space by a filamentary ridge with no other peak in-between.
Fig. 4 confirms that indeed, most critical events that are directly connected to the main peak lie within a "cone of influence" whose size is \(\alpha R\), where \(\alpha\) is between 2 and 4, and vice versa this cone contains almost exclusively only the direct mergers. Counting critical events within the "influence cone" of the peak that can be shown to contribute to the mass and spin growth of this surviving halo is the main analytical tool of this paper. 2
Footnote 2: One important caveat we will have to deal with is the fact that the green points are not distributed randomly; their position is impacted by the presence of the peaks. This can be seen in Fig. 4 where critical events lie predominantly at the boundary of the “influence cone”. This can be quantified through two-point statistics and persistence.
The evolution of a halo in smoothing direction tells us directly its history in terms of mass accumulation, as \(M\propto R^{3}\). That is, we can describe what happened with the halo as its mass increased, say, 10 fold. In the next section we apply this picture to count merger events.
### Number of major merger events within a mass range
Here and in the following, we are interested in counting the number of objects that directly merged into an object of final mass \(M_{0}\) as its mass grew from \(f^{3}M_{0}\) to \(M_{0}\), where \(f<1\). Since smoothing scale maps directly onto mass, this amounts to counting the number of critical events between two scales \(R_{0}\propto\sqrt[3]{M_{0}}\) and \(R_{1}\propto f\sqrt[3]{M_{0}}\).
Let us now count direct _major_ mergers that we define as mergers with satellites that bring at least \(f^{3}M_{0}\) mass in the merger event, with \(f<1\). First note that we define here the mass ratio with respect to the final mass of the peak at a fixed time, rather than at the time of the merger. The number of such mergers, as halo grew from \(f^{3}M_{0}\) to \(M_{0}\), is given by the number of critical event in the section of the cone of influence of the halo which is contained between \(fR_{0}\) and \(R_{0}\). It can be obtained with the following integral
\[N_{\rm merger}=\int_{fR_{0}}^{R_{0}}{\rm d}R\int_{0}^{\alpha R}{\rm d}^{3}r\, n_{\rm ce}(R,\mathbf{r}), \tag{1}\]
where \(n_{\rm ce}(R,\mathbf{r})\) is the number density of critical events at the point \((R,\mathbf{r})\), in the extended 4D space of positions-smoothing scale. Note that in the above formula we are able to avoid the dependence on the past trajectory \(x_{\rm pk}(R)\) of the halo, by treating each slab of constant \(R\) independently, and by evaluating the radial distance \(r\) to the critical event found of at \(R\) from the main peak position, \(x_{\rm pk}(R)\), defined at the same smoothing.
As a first estimate, let us approximate \(N_{\rm merger}\) by taking the density of the critical events inside the cone of influence to be equal to its mean value. Thus, we first neglect any spatial correlation between the existence of a peak (corresponding to the surviving object in a merger) and the critical event (corresponding to the absorbed object in a merger). In Cadiou et al. (2020), we determined that the average density of critical events that correspond to peak mergers is given by
\[\bar{n}_{\rm ce}=\frac{R}{R_{*}^{2}}\frac{1-\tilde{\gamma}^{2}}{\tilde{\gamma }^{2}}\frac{29\sqrt{15}-18\sqrt{10}}{600\pi^{2}}\,, \tag{2}\]
where the spectral scale \(R_{*}\) and parameter \(\tilde{\gamma}\) are given in Appendix A, together with other parameters that characterize the statistics of a density field. Note that \(R_{*}\propto R\), so the number density of critical events scales as \(n_{\rm ce}\propto R^{-4}\). The cubic part of this dependence, \(R^{-3}\), reflects the decrease of spatial density of critical points with increasing smoothing scale; the additional scaling, \(R^{-1}\), reflects that the frequency of critical events is uniform in \(\log R\).
Assuming a power-law density spectrum with index \(n_{\rm s}\) and Gaussian filter, equation (2) gives
\[\bar{n}_{\rm ce}=0.0094\left(\frac{5+n_{\rm s}}{2}\right)^{3/2}R^{-4}\,. \tag{3}\]
Using this mean value \(\bar{n}_{\rm ce}\) for critical density in equation (1) gives us
Figure 4: Our model associates peaks (in red) to halos, and critical events (in green) to mergers as a function of smoothing scale \(R\) which encodes mass evolution. Critical events correspond to the coalescence of a peak with a minimum (in blue). We count any critical event directly connected to the central peak (_i.e._ not separated by another minimum, shown here as gray dashed lines) as a merger. Here, this definition would give us 6 mergers into the central peak. Alternatively, we can estimate this number by counting the number of critical events within some fraction of the smoothing scale \(\alpha R\) (highlighted for one prominent peak in shaded gray). Here, \(\alpha\) is chosen to recover the same count, but note that which events are selected may vary (as highlighted with the two arrows).
a first rough, but telling estimate of the merger number that a typical halo experiences
\[N_{\rm merger}\approx 0.039(-\ln f)\left(\frac{5+n_{\rm s}}{2}\right)^{3/2} \alpha^{3}. \tag{4}\]
The final result depends on the specific values of \(f\) and \(\alpha\). The value of the fraction \(f\) controls down to which mass ratio mergers should be included and thus has a straightforward physical interpretation. The parameter \(\alpha\) controls the opening of the cone of influence around the peak from which critical events should be considered as direct mergers.
Using equation (4) with \(n_{\rm s}=-2\), which corresponds to the typical slope of a \(\Lambda\)CDM power-spectrum at scales ranging from Milky-Way-like systems to clusters, we can count the number of direct mergers with mass ratio larger than 1:10 (\(f=1/\sqrt[3]{10}\)). Let us consider two cases. First, if we count only critical events within the halo Top-Hat Lagrangian radius, \(\alpha=R_{\rm TH}/R\approx 1.56\), we find \(N_{\rm merger}\approx 0.2\). However, as will be borne out by the analysis presented in the next sections, a more sensible choice is to extend the opening of the cone to twice that ratio, \(\alpha=2R_{\rm TH}/R\approx 3.1\) (the two mass spheres of the surviving and merging peaks touch at the scale of the critical event). We then find \(N_{\rm merger}\approx 1.7\). Overall, we see that the number of direct major mergers that a halo experiences while increasing its mass tenfold is small, of the order of one or two. For a scale-invariant history, each decade of mass accumulation contributes a similar number of mergers, so a cluster that grew from a galactic-scale protocluster thousand-fold in mass (\(f=1/\sqrt[3]{1000}\)) did so having experienced \(N_{\rm merger}\approx 5\) major mergers. These conclusions follow directly from first principles, while studying the structure of the initial density field.
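These numbers follow directly from equation (4); the short evaluation below reproduces the three cases quoted above.

```python
# Evaluate equation (4): N_merger ~ 0.039 (-ln f) ((5 + ns)/2)^(3/2) alpha^3.
import numpy as np

def n_merger(f, alpha, ns=-2.0):
    return 0.039 * (-np.log(f)) * ((5.0 + ns) / 2.0)**1.5 * alpha**3

f_10 = 10.0**(-1.0 / 3.0)              # mass ratio 1:10
print(n_merger(f_10, 1.56))            # ~0.2     (alpha = R_TH / R)
print(n_merger(f_10, 3.1))             # ~1.6-1.7 (alpha = 2 R_TH / R)
print(n_merger(0.1, 3.1))              # ~5       (thousand-fold mass growth)
```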
### Accounting for rarity and clustering
Let us now refine our model so as to define \(\alpha\) more rigorously by i) requiring the merging object to have gravitationally collapsed before it merges and by ii) taking into account the correlations between the central halo peak and the merger critical events.
Let us therefore consider the number density of critical events of given height \(\nu\) as a function of their distance \(r\) to a central peak, both defined at the same smoothing scale \(R\)
\[n_{\rm ce|pk}(\nu,R,r)=\bar{n}_{\rm ce}(R)C(\nu,\gamma)\left(1+\xi_{\rm ce,pk} (\nu,R,r)\right)\,, \tag{5}\]
where \(C\) is the distribution function, \(\int_{-\infty}^{\infty}C(\nu,\gamma)\,{\rm d}\nu=1\), of critical events in overdensity and is given by the analytical formula in Appendix D along with a useful Gaussian approximation. The clustering of critical events in the peak neighbourhood is described by the peak-critical event correlation function on a slice of fixed \(R\), \(\xi_{\rm ce,pk}(\nu,R,r)\). The composite index \({\rm pk}\) refers to any peak parameters that may be specified as a condition. Our goal is to determine what range of \(\nu\) and what extent of \(r\) one needs to consider to count halos merging into a surviving halo with a particular \(\nu_{\rm pk}\).
#### 2.3.1 Heights of critical events
To establish the range of critical event heights that describe mergers of real halos, we rely on the spherical collapse approximation to map the peak overdensity to its gravitational collapse time. We consider a physical halo at redshift \(z\) to be described by a peak at the Top-Hat scale \(R_{\rm pk}\) such that \(\delta_{\rm pk}(R_{\rm pk})=\delta_{\rm c}/D(z)\), where \(\delta_{\rm c}\) is the critical overdensity of the spherical collapse model and \(D(z)\) is the linear growing mode value at redshift \(z\). Thus, a critical event found at smoothing \(R_{\rm ce}\) describes the merger of a satellite halo of scale \(R_{\rm ce}\) into the main peak at scale \(R_{\rm pk}\) that corresponds to the same redshift, i.e. such that \(\delta_{\rm ce}(R_{\rm ce})=\delta_{\rm pk}(R_{\rm pk})=\delta_{\rm c}/D(z)\). The situation is illustrated in Fig. 5. The ratio of scales and, correspondingly, masses of the two merging halos is therefore determined by the condition of equal overdensities at the time of the merger.
Let us now consider a peak that has reached a scale \(R_{0}\) and that experienced a merger in its past when its scale was \(R_{\rm pk}\), \(R_{\rm pk}\leq R_{0}\). Requiring that, at the time of the merger, the surviving peak's scale (mass) is larger than that of the satellite sets the relation \(R_{0}\geq R_{\rm pk}\geq R_{\rm ce}\) or, conversely, \(\sigma(R_{0})\leq\sigma(R_{\rm pk})\leq\sigma(R_{\rm ce})\). It follows that the rarity of the relevant critical events at scale \(R_{\rm ce}\),
\[\nu=\frac{\delta_{\rm ce}(R_{\rm ce})}{\sigma(R_{\rm ce})}=\frac{\delta_{\rm pk }(R_{\rm pk})}{\sigma(R_{\rm ce})}=\nu_{\rm pk}\frac{\sigma(R_{\rm pk})}{ \sigma(R_{\rm ce})} \tag{6}\]
is within the range
\[\nu_{\rm pk}\frac{\sigma(R_{0})}{\sigma(R_{\rm ce})}\leq\nu\leq\nu_{\rm pk}. \tag{7}\]
The lower bound corresponds to mergers that have completed at the very last moment, \(R_{\rm pk}=R_{0}\). The upper bound is achieved for mergers of two equal mass objects, \(R_{\rm pk}=R_{\rm ce}\).
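For a power-law spectrum, \(\sigma(R)\propto R^{-(n_{\rm s}+3)/2}\), so this window is straightforward to evaluate; for instance, for \(n_{\rm s}=-2\) and a tenfold mass growth the lower bound is \(\nu_{\rm pk}\,10^{-1/6}\approx 0.68\,\nu_{\rm pk}\), as the small check below illustrates.

```python
# Height window of equation (7) for a power-law spectrum:
# sigma(R_0)/sigma(R_ce) = (R_ce/R_0)^{(ns+3)/2} = f^{(ns+3)/2}.
def nu_window(nu_pk, f, ns=-2.0):
    return nu_pk * f**((ns + 3.0) / 2.0), nu_pk

print(nu_window(3.0, 10.0**(-1.0 / 3.0)))   # ~(2.0, 3.0) for a 1:10 merger
```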
To obtain the total number of merger events in a halo history we now integrate the conditional event density in equation (5) over the physically relevant range of heights from equation (7), _i.e._
\[N_{\rm merger}(\nu_{\rm pk})=\int_{fR_{0}}^{R_{0}}{\rm d}R\int_{0}^{\infty}4\pi r^{2}\,{\rm d}r\int_{\nu_{\rm pk}\frac{\sigma(R_{0})}{\sigma(R)}}^{\nu_{\rm pk}}{\rm d}\nu\,n_{\rm ce|pk}(\nu,R,r). \tag{8}\]
#### 2.3.2 Clustering of critical events around peaks
Ultimately, the conditional event density \(n_{\rm ce|pk}\) should include only critical events that will directly merge with the peak in the constraint; this would make the density of critical events go to zero \(n_{\rm ce|pk}(\nu,R,r)\to 0\) far from the peak, _i.e._ when \(r\to\infty\). While
Figure 5: Panel a) shows the tracks of two peaks (red) and a saddle point (blue) of the density field as a function of smoothing scale \(R\) in 1+1D space. Panel b) shows the field profile at three smoothing scales. At smoothing scale \(R_{0}\), the peaks and saddle points are distinct. At smoothing \(R_{\rm ce}\), one peak and a saddle point create a critical event C, after which only one peak survives to larger smoothing scales. The merging of peaks is completed at the scale \(R_{\rm pk}\) when the overdensity of the surviving peak, now at point A, is equal to the overdensity of the critical event \(\delta_{\rm c}\) and thus can be viewed as reaching the threshold \(\delta_{\rm c}\) for halo formation at the same time. We can then interpret the critical event as a merger with mass ratio \(M_{\rm A}/M_{\rm C}=(R_{\rm ce}/R_{\rm pk})^{3}\) at a redshift corresponding to \(D(z)=\delta_{c}/\delta_{\rm A}\).
we cannot implement this condition analytically (as it is non local), we can measure \(n_{\rm ce|pk}(\nu,R,r)\) numerically, as will be done in the upcoming Section 3. We can however approximate the conditional density ab initio by taking the conditions that the peak is linked to the critical event to obtain \(\widetilde{n}_{\rm ce|pk}\) given by
\[\widetilde{n}_{\rm ce|pk}(\nu,R,r)\equiv\frac{\langle{\rm Peak}(\mathbf{x})\,{\rm Event }(\mathbf{y})\rangle}{\langle{\rm Peak}(\mathbf{x})\rangle}\,, \tag{9}\]
using formally straightforward analytical calculations of critical event-peak correlations, as described in Appendix B. Here, \(\mathbf{x}\) and \(\mathbf{y}\) are the random vectors containing the density and its successive derivatives at the location of the peak and the critical event, respectively, and '\({\rm Peak}\)' and '\({\rm Event}\)' enforce the peak and critical event conditions respectively. We can expect \(\widetilde{n}_{\rm ce|pk}\) to track the exact \(n_{\rm ce|pk}\) up to a distance of several smoothing lengths from the peak, but further away from the peak it just describes the mean unconstrained density of critical events: \(\widetilde{n}_{\rm ce|pk}(\nu,R,r)\to\bar{n}_{\rm ce}(R)C(\nu)\) as \(r\to\infty\). Therefore, in this approximation the question of where to truncate the integration over the peak neighbourhood remains, and we have
\[N_{\rm merger}(\nu_{\rm pk})\approx\\ \int_{f\,R_{0}}^{R_{0}}{\rm d}R\int_{0}^{\alpha R}4\pi r^{2}\,{ \rm d}r\int_{\nu_{\rm pk}\frac{\sigma(R_{0})}{\sigma(R)}}^{\nu_{\rm pk}}{ \rm d}\nu\,\widetilde{n}_{\rm ce|pk}(\nu,R,r). \tag{10}\]
For scale-free spectra, introducing the dimensionless ratios \(u=r/R\) and \(w=R/R_{0}\) and changing the order of integration, equation (8) can be written as
\[N_{\rm merger}(\nu_{\rm pk})=\int_{f}^{1}\frac{{\rm d}w}{w}\int_{0}^{\infty}{\rm d}u\,\frac{{\rm d}^{2}N_{\rm ce|pk}}{{\rm d}u\,{\rm d}\ln w},\qquad\frac{{\rm d}^{2}N_{\rm ce|pk}}{{\rm d}u\,{\rm d}\ln w}\equiv 4\pi u^{2}\int_{\nu_{\rm pk}w^{(n_{\rm s}+3)/2}}^{\nu_{\rm pk}}{\rm d}\nu\,R^{4}\,n_{\rm ce|pk}(\nu,R,uR). \tag{11}\]
This range corresponds to the smallest-scale cross-section of the influence cone if we count major mergers of a peak that grows ten times in the process, _i.e._ \(R_{\rm ce}=1/\sqrt[3]{10}R_{0}\) in equation (8). Note that our quantity differs from two-point correlation functions in which one would compute the distance between any critical event and any peak, as was done in the previous section (Fig. 6). Here, each critical event only contributes to the density at a single distance from its associated peak, and we thus expect the signal to drop to zero at infinite separation, since the probability for a critical event to be associated with a peak far away vanishes.
Should the critical events be randomly distributed, their number per linear bin in \(r\) would grow as \(r^{2}\). Instead, in Fig. 7 we find an exclusion zone at small separations, an excess probability at \(r/R\sim 2-3\) and a cut-off at large separations as a consequence of our requirement for the critical event to be paired with a peak.
We integrate the curves in Fig. 7 and report the mean number of critical events per peak per logarithmic smoothing bin in Table 1. All the effects accounted for in our numerical analysis give just \(\approx 40\%\) lower values in Table 1 relative to equation (4) (per \(\ln f\)) with \(\alpha=3.1\). This shows that the two main corrections to the naive uniform density estimate -- the restriction to only collapsed satellite halos (Section 2.3.1) on the one hand, and an attraction of critical events of similar heights towards the peak influence zone (Section 2.3.2) on the other hand -- compensate each other to a large extent.
As a function of peak rarity \(\nu_{\rm pk}\), the mean number of critical events per peak in a constant \(R\) slab first rises up to \(\nu_{\rm pk}\approx 3\) before decreasing. It is fairly flat in the range \(\nu_{\rm pk}=2.5-3.5\), where most of the physical halos are, which argues for taking the \(\alpha\) parameter independent of \(\nu_{\rm pk}\) if one uses the rough estimate of the global mean density of events times an effective volume, as in equation (4) of Section 2.2.
So far, we have only studied properties of peaks and critical events at the same scale, using the defining equation (7) to _implicitly_ perform a multi-scale analysis. We can however track peaks from one slab of smoothing scale to another to build peak lines and associate them with critical events. This generalizes the procedure sketched in Fig. 4 to 3+1D. For each peak with mass \(M_{\rm pk}\), we follow its peak line to find all associated critical events with a mass \(M_{\rm pk}\geq M_{\rm ce}\geq M_{\rm pk}/10\) that have \(\delta_{\rm ce}(R_{\rm ce})\geq\delta_{\rm pk}(R_{\rm pk})\), where \(R_{\rm ce}\) and \(R_{\rm pk}\) are now different. We only retain peaks that exist at scales \(R_{\rm pk}\geq R_{\rm min}\sqrt[3]{10}\) to ensure that we do not miss critical events below our smallest scale.
Given our numerical sample, we are now in a position to study the distribution of the number of mergers per peak, which we show in Fig. 8. This measurement differs from the value one would obtain by taking Table 1 multiplied by the logarithm of the range of scales. Indeed, we account here for the decrease in the number of relevant satellites as their mass approaches the final peak mass. We also obtain our measurement by computing the number of mergers per surviving peak at the scale \(M_{\rm pk}\). Compared to that, in Table 1, we give the value per any peak at the \(M_{\rm pk}/10\) scale, irrespective of whether and how long it would survive at larger smoothing scales. We find that the dependence of the number of mergers on peak height is weak, with a mean number of mergers varying between 1.8 and 2. This shows that in the language of the effective influence-cone radius of equation (4), \(\alpha=2R_{\rm TH}/R=3.1\) is a very good choice.
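The bookkeeping described above can be summarized schematically as follows; this is an illustrative sketch of the selection criteria only, not the actual analysis pipeline (which relies on persistence pairing of the peak and critical-event tracks).

```python
# Schematic selection of critical events counted as major mergers of a peak line
# (illustrative bookkeeping only; the real analysis uses persistence pairing).
def major_mergers(peak, events, R_min):
    """peak: dict with final mass 'M_pk', overdensity 'delta_pk' and scale 'R_pk';
    events: critical events already associated with this peak line, each a dict
    with mass 'M_ce' and overdensity 'delta_ce'."""
    if peak["R_pk"] < R_min * 10.0**(1.0 / 3.0):      # completeness cut
        return None
    selected = []
    for ev in events:
        mass_ok = peak["M_pk"] / 10.0 <= ev["M_ce"] <= peak["M_pk"]
        time_ok = ev["delta_ce"] >= peak["delta_pk"]  # satellite collapsed before the merger completes
        if mass_ok and time_ok:
            selected.append(ev)
    return selected
```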
### Orbital spin parameter of mergers
Let us now define the spin parameter of an event at scale \(R_{\rm ce}\) relative to a peak of rarity \(\nu\) at scale \(R_{\rm pk}\) as
\[\lambda(\nu)=\frac{R_{\rm ce}^{3}}{R_{\rm pk}^{3}}\frac{\left\langle|\mathbf{ r}\times\nabla\psi|\mid\nu\right\rangle}{\sqrt{2}R_{\rm pk}\sigma_{-1}(R_{\rm pk})}. \tag{12}\]
Here, we recall that \(\sigma_{-1}\) is the variance of the gradient of the potential, whose definition is given in Appendix A. This definition reflects the spin definition of Bullock et al. (2001); it compares the orbital angular momentum brought by a merger to the angular momentum of a merger separated by a distance \(R_{\rm pk}\) with a relative velocity \(\sqrt{2}\sigma_{-1}\) with respect to the central peak, assuming that the velocities are well approximated by the Zel'dovich approximation. We show in Fig. 9 the distribution of the spin parameter \(\lambda\). The sample consists of mergers with mass ratios greater than or equal to 1:10 around rare peaks. In practice, we select
\begin{table}
\begin{tabular}{c c c c c c} \(\nu_{\rm pk}\) range & \([1.5,2.0]\) & \([2.0,2.5]\) & \([2.5,3.0]\) & \([3.0,3.5[\) & \([3.5,\infty[\) \\ \hline \(\frac{\mathrm{d}N_{\rm ce|pk}}{\mathrm{d}\ln R}\) & 0.90 & 1.20 & 1.40 & 1.42 & 1.27 \\ \end{tabular}
\end{table}
Table 1: Number of critical events per peak, both being measured at the same scale, per logarithmic bin of smoothing scale, as a function of the rarity of the peak.
Figure 8: Distribution of the number of major mergers for peaks in different rarity bins, as labelled. The mean number of major mergers increases from 1.8 for low-\(\nu\) peaks to 2 for high-\(\nu\) peaks and is graphically represented with vertical lines with tilted caps for readability.
Figure 7: The radial number density of topologically linked critical events per peak as a function of distance to the peak on a slice of constant \(R\) and per logarithmic bin in smoothing scale (see the text for details of how critical events are identified and paired with a peak). Critical events are preferentially clustered at \(r\sim 2.5-3.5R\), depending on rarity. When compared to Fig. 6, rarer peaks have relatively fewer connected events, suggesting that their satellites would have merged together before merging into the central object.
and \(\delta_{\rm ce}\gg\delta_{\rm pk}\). The distribution is roughly log-normal with a mean value of \(\mu_{\lambda}\approx 0.048\) and a standard deviation of \(\sigma_{\lambda}\approx 0.51\).
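Assuming \(\mu_{\lambda}\) is the location (median) of the log-normal and \(\sigma_{\lambda}\) its width in \(\ln\lambda\) (the usual spin-parameter convention; this reading of the fit is our assumption), the distribution can be reproduced for comparison as follows:

```python
# Draw samples from the fitted log-normal spin distribution (assuming mu_lambda is
# the median and sigma_lambda the width in ln(lambda), the usual spin convention).
import numpy as np

rng = np.random.default_rng(0)
lam = rng.lognormal(mean=np.log(0.048), sigma=0.51, size=100_000)
print(np.median(lam))            # ~0.048
print(np.std(np.log(lam)))       # ~0.51
```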
It is remarkable that our estimate for the orbital spin of mergers is found to be very close to the values measured in \(N\)-body simulations, which are on the order of 0.04 (see e.g. Bullock et al., 2001; Aubert et al., 2004; Bett et al., 2007; Danovich et al., 2015). Albeit simplistic, our model thus allows us to provide a natural explanation for the fact that mergers bring in a comparable amount of angular momentum to that of the full halo: gravitational tides alone (the only ingredient of our model) are able to funnel in a significant amount of angular momentum through mergers. This provides a theoretical motivation for the amplitude of the spin jumps during mergers employed in semi-analytical models (see e.g. Vitvitska et al., 2002; Benson et al., 2020).
### Mass distribution of mergers
Finally, we compute the mass distribution of mergers as well as their density. The goal here is to obtain the distribution of the time and mass ratios of the mergers. In order to build the distribution of the mass ratio, we track peaks over multiple decades of smoothing scales (_i.e._ mass). Since the number density of peaks evolves as \(R_{\rm pk}^{-3}\), the sample size quickly decreases with smoothing scale; we select here peaks that exist at a scale larger than \(R_{\rm min}\sqrt[3]{100}\approx 4.6R_{\rm min}\). We also only retain rare peaks that have \(\nu_{\rm pk}>2.5\). Let us then estimate the time of the merger as follows: we associate with the peak and the critical event a time \(t_{\rm pk}\) and \(t_{\rm ce}\), respectively, using \(\delta=\delta_{c}/D(z)\). Note that some of the selected peaks will have collapsed by \(z=0\) while others will only collapse in the distant future. In order to aggregate peaks collapsing at different times, we compute the lookback time of the merger relative to the collapse time of the peak, \(\Delta=(t_{\rm pk}-t_{\rm ce})/t_{\rm pk}\). We estimate the distribution of mergers in lookback time-mass ratio space using a 2D kernel density estimate, which we show in Fig. 10, top panel. As expected, the more massive the merger, the more recently it happened.
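As a concrete illustration of this time assignment (not the paper's exact computation, which uses the full \(\Lambda\)CDM growth factor), in an Einstein-de Sitter universe \(D\propto a\) and \(t\propto a^{3/2}\), so \(\Delta\) reduces to a simple function of the two linear overdensities:

```python
# Relative lookback time Delta = (t_pk - t_ce)/t_pk in an Einstein-de Sitter
# universe, where D(z) ~ a and t ~ a^{3/2}; collapse happens when delta * D = delta_c,
# so the collapse time scales as t ~ delta^{-3/2} (illustrative only).
def lookback_fraction(delta_pk, delta_ce):
    return 1.0 - (delta_pk / delta_ce)**1.5

print(lookback_fraction(2.0, 3.0))   # ~0.46: merger completed at ~54% of t_pk
```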
We then show on the bottom panel of Fig. 10 the cumulative distribution function of the merger time, for mass ratios larger than 1:10 as a function of peak height. We recover the trend found in \(N\)-body simulations that rarer halos have had more recent major mergers than lower mass ones. Cluster-like structures (\(\nu_{\rm pk}\gtrapprox 3.5\)) typically had \(80\,\%\) of their mergers in the second half of their life (past \(7\,{\rm Gyr}\) for a cluster at \(z=0\)), and had half their mergers in the last third of their life (last \(5\,{\rm Gyr}\) for a cluster at \(z=0\)).
Our model reproduces qualitative trends observed in \(N\)-body simulations, namely that rarer peaks typically had their last major merger more recently than less rare ones.
## 4 Conclusion and perspectives
We built a model of mergers from an analysis of the initial conditions of the Universe. Following the work of Hanami (2001); Cadiou et al. (2020), we relied on the clustering of critical events around peaks in the initial density field to study the properties of halo mergers. We started with a simple model that yields analytically tractable results, while further refinements presented in Section 3 allowed for more precise results at the cost of numerical integration.
We focussed here on the analysis of merger events that bring at least \(10\,\%\) of the _final_ mass of the main halo, which we refer to as 'major mergers'. We however note that our approach can be extended to the analysis of minor mergers, which we leave for future work.
We first obtained a zero\({}^{\rm th}\)-order analytical estimate of the mean number of mergers per decade in mass using the mean abundance of critical event per peak in Sections 2.1 and 2.2. Our results are consistent with halos having had one to two major mergers per relative decade of mass growth. We then refined our model in Section 2.3 by accounting for the timing of the collapse of the halos involved in a merger candidate; a critical event should only be counted as a merger if its two associated halos have collapsed before the merger happens. We showed this can be achieved semi-analytically by numerically
Figure 10: Top: distribution of the mergers as a function of mass ratio and lookback time. We show the median with the \(68\,\%\) interval in black. The hashed area corresponds to regions of the parameter space that may not be complete; see the text for details. Bottom: the corresponding fraction of major mergers as a function of lookback time relative to the peak collapse time, for different peak heights. Rarer objects have had more recent mergers.
Figure 9: Merging objects bring in orbital spin; we quantify here its distribution. The distribution is roughly log-normal with parameters \(\mu_{\lambda}\approx 0.048\) and \(\sigma_{\lambda}\approx 0.51\), and resembles its \(N\)-body counterpart.
computing the value of a cross correlation function, equation (10), that reveals that critical events cluster at 2-3 times the smoothing scale of the peak (Fig. 6).
Finally, in Section 3, we addressed the double-counting issue, whereby a given critical event may be associated with several peaks, by uniquely associating each event with the one peak it is topologically connected to. To that end, we relied on a multi-scale analysis of Gaussian random fields using computational topology to restrict ourselves to the study of peaks that, up to the critical event, form a persistent pair. In this model, we found again that halos of different rarities undergo about 2 major mergers. By tracking peaks in position-smoothing scale and by associating critical events with them, we were able to provide numerically cheap and easy-to-interpret data on the statistical properties of halo mergers. We found that mergers come from further away for rarer peaks (Fig. 7), but that the total number of major mergers only weakly depends on peak rarity (Fig. 8).
We then computed the gravitational fields at the location of the critical event to estimate the relative velocity of mergers and predict the orbital spin they bring in. We find that it has a log-normal distribution with a mean of \(\mu_{\lambda}=0.048\) and \(\sigma_{\lambda}=0.5\). These properties are remarkably close to the distribution of DM halo spins measured in hydrodynamical simulations (\(\mu_{\lambda}=0.038\), \(\sigma_{\lambda}=0.5\), Danovich et al., 2015). This suggests that our model captures the statistical properties of the orbital parameters of mergers, as is expected should they be driven by gravitational tides (Cadiou et al., 2021, 2022). We also computed the distribution of the mass brought by mergers and their timing, which we found to be in qualitative agreement with results obtained in \(N\)-body simulations (Fakhouri et al., 2010).
While the aim of this model was not to compete with numerical simulations, it provides theoretical grounds to explain the properties of mergers observed in \(N\)-body simulations and efficient tools to predict their statistics and geometry _ab initio_. Our model could be improved with precision in mind, for example by taking into account deviations from spherical collapse under the effect of shears to improve our time assignments. It however reveals that the statistical properties of the merger trees of dark matter halos can be explained through a multi-scale analysis of the initial conditions.
### Perspectives
Statistics involving successive mergers could potentially be built on top of our model by using critical events associated with the same peak line, for example to study the relative orientation of the orbital angular momentum of successive mergers. However, we found that such an analysis was complicated by the fact that peaks move with smoothing scale. Different definitions of the angular momentum (distance to the peak at the same scale, at the same density or for a fixed peak density) yielded qualitatively different results. This should be explored in future work.
The model built in this paper relied on a linear multi-scale analysis of the density field. This could be employed to provide control over the merger tree (timing of the merger, orientation) in numerical simulations through 'genetic modifications' of the initial field (Roth et al., 2016; Rey and Pontzen, 2018; Stopyra et al., 2021; Cadiou et al., 2021, 2021). We also note that, as we did for tidal torque theory in Cadiou et al. (2022), such an approach would allow direct testing of the range of validity of the model.
This paper focussed on mergers of peaks corresponding to the relative clustering of peak-saddle events. One could extend the analysis to the relative clustering of saddle-saddle events to provide a theoretical explanation for which filaments merge with which, thus impacting their connectivity or their length (Galarraga-Espinosa et al., 2023). Conversely, extending the model to the relative clustering of saddle-void events (which wall disappears when?) is also of interest, as the latter may impact spin flips, and is dual to void mergers, and as such could act as a cosmic probe for dark energy. One could compute the conditional merger rate subject to a larger-scale saddle-point as a proxy for the influence of the larger-scale cosmic web, following both Musso et al. (2018) and Cadiou et al. (2020) to shed light on how the cosmic web drives galaxy assembly (Kraljic et al., 2018; Laigle et al., 2018; Hasan et al., 2023). Eventually, such a theory could contribute to predicting the expected rate of starburst or AGN activity as a function of redshift and location in the cosmic web.
## Acknowledgements
We thank S. Codis and S. Prunet for early contributions to this work via the co-supervision of EPP's master, and J. Devriendt and M. Musso for insightful conversations. We also thank the KITP for hosting the workshop _"CosmicWe23: connecting Galaxies to Cosmology at High and Low Redshift"_ during which this project was advanced. This work is partially supported by the grant Segal ANR-19-CE31-0017 of the French Agence Nationale de la Recherche and by the National Science Foundation under Grant No. NSF PHY-1748958. CC acknowledges support from the Knut and Alice Wallenberg Foundation and the Swedish Research Council (grant 2019-04659). We thank Stephane Rouberol for running smoothly the infinity Cluster, where the simulations were performed.
## Author Contributions
The main roles of the authors were, using the CRediT (Contribution Roles Taxonomy) system ([https://authorservices.wiley.com/author-resources/Journal-Authors/open-access/credit.html](https://authorservices.wiley.com/author-resources/Journal-Authors/open-access/credit.html)):
**CC**: Conceptualization; formal analysis; investigation; methodology; software; writing - Original Draft Preparation; supervision. **EPP**: Investigation; software; writing - Review & Editing; Visualization. **CP**: Conceptualization; formal analysis; methodology; software; investigation; writing - Review & Editing; supervision. **DP**: Conceptualization; formal analysis; methodology; writing - Review & Editing; validation; supervision.
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2309.04633 | Measurement of the magnetic octupole susceptibility of PrV2Al20 | In the electromagnetic multipole expansion, magnetic octupoles are the
subsequent order of magnetic multipoles allowed in centrosymmetric systems,
following the more commonly observed magnetic dipoles. As order parameters in
condensed matter systems, magnetic octupoles have been experimentally elusive.
In particular, the lack of simple external fields that directly couple to them
makes their experimental detection challenging. Here, we demonstrate a
methodology for probing the magnetic octupole susceptibility using a product of
magnetic field $H_i$ and shear strain $\epsilon_{jk}$ to couple to the
octupolar fluctuations, while using an adiabatic elastocaloric effect to probe
the response to this composite effective field. We observe a Curie-Weiss
behavior in the obtained octupolar susceptibility of \ce{PrV2Al20} up to
temperatures approximately forty times the putative octupole ordering
temperature. Our results demonstrate the presence of magnetic octupole
fluctuations in the particular material system, and more broadly highlight how
anisotropic strain can be combined with magnetic fields to formulate a
versatile probe to observe otherwise elusive emergent `hidden' electronic
orders. | Linda Ye, Matthew E. Sorensen, Maja D. Bachmann, Ian R. Fisher | 2023-09-08T23:07:32Z | http://arxiv.org/abs/2309.04633v1 | # Measurement of the magnetic octupole susceptibility of PrV\({}_{2}\)Al\({}_{20}\)
###### Abstract
**In the electromagnetic multipole expansion, magnetic octupoles are the subsequent order of magnetic multipoles allowed in centrosymmetric systems, following the more commonly observed magnetic dipoles. As order parameters in condensed matter systems, magnetic octupoles have been experimentally elusive. In particular, the lack of simple external fields that directly couple to them makes their experimental detection challenging. Here, we demonstrate a methodology for probing the magnetic octupole susceptibility using a product of magnetic field \(H_{i}\) and shear strain \(\epsilon_{jk}\) to couple to the octupolar fluctuations, while using an adiabatic elastocaloric effect to probe the response to this composite effective field. We observe a Curie-Weiss behavior in the obtained octupolar susceptibility of PrV\({}_{2}\)Al\({}_{20}\) up to temperatures approximately forty times the putative octupole ordering temperature. Our results demonstrate the presence of magnetic octupole fluctuations in the particular material system, and more broadly highlight how anisotropic strain can be combined with magnetic fields to formulate a versatile probe to observe otherwise elusive emergent 'hidden' electronic orders.**
Within the Landau paradigm, generic electronic and magnetic ordered states are characterized by symmetry-breaking order parameters, and the electromagnetic multipole expansion provides a powerful framework to further characterize these. Ordered states based on electric and magnetic dipoles are common, and the corresponding order parameters can be readily coupled to, and manipulated by, uniform electric and magnetic fields respectively. In centrosymmetric systems, the magnetic octupole is the next-allowed magnetic multipolar degree of freedom, but is much less commonly encountered. In particular, interactions between higher rank multipoles fall off more rapidly than the simpler dipole case, making such states very rare. Initially discussed as a 'hidden' degree of freedom based on localized electron orbitals [1; 2; 3; 4], magnetic octupoles have more recently witnessed a growing application as a notion in describing topological antiferromagnets with non-collinear spin structures [5; 6; 7; 8], and spin polarization in the electronic structure of altermagnets [9; 10]. The difficulty of generating the most natural conjugate field for these octupolar moments (_i.e._ microscopic magnetic fields with octupolar angular distribution) makes the experimental detection and control of such orders and associated fluctuations generally challenging, motivating the development of novel experimental methods to couple to and probe these elusive degrees of freedom.
In this work we experimentally demonstrate a thermodynamic probe of magnetic octupolar fluctuations utilizing a combination of strain and magnetic field which (acting together as a composite effective conjugate field) couples bilinearly to magnetic octupoles. In particular, we focus on measuring the octupolar susceptibility, defined analogously to the magnetic susceptibility as the rate of change of the induced octupolar moment in response to the conjugate field. In contrast to measuring the spontaneous octupole moment below a phase transition, measuring the susceptibility for temperatures above a phase transition is especially useful since this is a quantity (a) that is finite for all temperatures, (b) the divergence of which directly attests to the presence of growing fluctuations, and (c) which can be directly compared to other competing symmetry channels (_i.e._ even in cases where a lower rank multipole 'wins', identifying the presence of strong fluctuations of the competing higher rank multipole state can help determine pathways to realizing such a state in other related materials). Here we experimentally identify strong octupolar fluctuations in the cubic compound PrV\({}_{2}\)Al\({}_{20}\), by measuring the temperature-dependence of the octupolar susceptibility.
PrV\({}_{2}\)Al\({}_{20}\) belongs to a family of praseodymium-based cubic cage compounds Pr\(T_{2}X_{20}\) (\(T\)= Ir, Rh, \(X\)= Zn; \(T\)= V, Ti, \(X\)= Al) [11; 12; 13; 14; 15; 16; 17]. In Pr\(T_{2}X_{20}\) the \(J=4\)\(4f^{2}\) state of Pr\({}^{3+}\) takes a doublet crystal field ground state in the local \(T_{d}\) environment with \(\Gamma_{3}\) symmetry [17]. The \(\Gamma_{3}\) non-Kramers doublet [18] is non-magnetic, such that dipolar moments are forbidden within the manifold ( \(\langle\Gamma_{3}^{a}|J_{i}|\Gamma_{3}^{b}\rangle=0\), \(i=x,y,z\), \(a,b=1,2\)); this makes the family of compounds an ideal platform to study higher-rank multipolar order and fluctuations, as well as their interaction with conduction electrons [11; 12; 13; 14; 15; 16; 17]. The allowed order parameters within the manifold are two distinct electric quadrupoles (\(3J_{z}^{2}-J^{2}\), \(J_{x}^{2}-J_{y}^{2}\)) and a magnetic octupole (\(\overline{J_{x}J_{y}J_{z}}\), equivalent to the notation \(\mathcal{T}_{xyz}\)[5]); the former (latter) belong to \(\Gamma_{3}\) (\(\Gamma_{2}\)) irreducible representations of \(T_{d}\). We have specifically chosen PrV\({}_{2}\)Al\({}_{20}\) to probe octupolar fluctuations because of its well-isolated crystal electric field (CEF)
\(\Gamma_{3}\) ground state (40 K between ground and first excited states [12]). The material undergoes a succession of two closely-spaced continuous phase transitions at low temperatures. While the exact character of these phases is unknown, the first of these (at 0.75 K) is thought to be to an antiferroquadrupolar state [12] while it has been suggested that the second phase transition at 0.65 K might have an octupolar character [16].
The spatial distribution of the magnetic charge density of the \(\overline{J_{x}J_{y}J_{z}}\) octupolar moments is depicted in Fig. 1(a1,a2) with (a1) and (a2) forming a time-reversal pair. Due to their complex angular distribution, uniform magnetic and strain fields (which linearly couple to lower-order magnetic dipoles and electrical quadrupoles, respectively) are not able to distinguish the positive and negative octupole moments. However, two higher-order quantities are allowed by symmetry to couple bilinearly to the octupole moments [21; 22]: a third-order magnetic field tensor \(H_{i}H_{j}H_{k}\), and alternatively, a combination of strain and magnetic field of the form \(H_{i}\epsilon_{jk}\) (where \(i,j,k\in x,y,z\) and \(i\neq j\neq k\)). The latter field, \(H_{i}\epsilon_{jk}\) requires a magnetic field along one of the principal cubic axes (\(H_{i}\)) and a shear strain in the plane perpendicular to that axis (\(\epsilon_{jk}\)); although neither \(H_{i}\) nor \(\epsilon_{jk}\) alone splits the \(\Gamma_{3}\) doublet to first order, two equivalent second order interactions of the octupolar moment with the field (strain) and then strain (field) provide the mechanism by which the combination acts as an effective field. Starting from the positive or negative octupolar moment (Fig. 1(a1,a2)), a vertical magnetic field \(H_{z}\) deforms the magnetic charge density of the octupolar moments via a Zeeman interaction and induces a component with opposite sign of the electric quadrupole \(J_{x}J_{y}+J_{y}J_{x}\) for the two octupoles (Fig. 1(b1,b2)) [2]; the induced quadrupole can then couple to a shear strain \(\epsilon_{xy}\) via magnetoelastic coupling [23], differentiating the two octupoles. The same steps can be applied in reverse: a shear strain \(\epsilon_{xy}\) induces a magnetic dipole moment of opposite sign depending on the octupole moment (Fig. 1(c1,c2)), which can then be split by \(H_{z}\). Since the magnetic octupole, magnetic field and shear strains belong to \(\Gamma_{2},\Gamma_{4}\) and \(\Gamma_{5}\) irreducible representations of \(T_{d}\) point group respectively, the two interaction paths illustrated above can be expressed as \(\Gamma_{4}\otimes\Gamma_{2}=\Gamma_{5}\) and \(\Gamma_{5}\otimes\Gamma_{2}=\Gamma_{4}\), respectively. There are in total three components \(H_{x}\epsilon_{yz},H_{y}\epsilon_{xx}\) and \(H_{z}\epsilon_{xy}\) illustrated in Fig. 1(d) that constitute the composite, effective 'octupolar' field.
At the microscopic level, the coupling of \(H_{i}\epsilon_{jk}\) with the octupolar degrees of freedom within the \(\Gamma_{3}\) doublet is based on a second-order virtual process involving excited crystal field states, while that of \(H_{i}H_{j}H_{k}\) invokes third order perturbations (see Supplementary Materials). In Fig. 1(e) we illustrate the anticipated splitting within the ground state manifold induced by the presence of strain and/or magnetic field, along paths in the field-strain phase space depicted in Fig. 1(f). In the presence of only strain (path A-C), the splitting is negligible (the only expected contribution arises from admixing of \(\Gamma_{5}\) quadrupole states from the excited CEF levels), while in the presence of only magnetic field (path A-B),
Figure 1: **Interplay between \(\overline{J_{x}J_{y}J_{z}}\) magnetic octupole with the composite strain/magnetic field** (a-c) Illustration of the magnetic charge density of the positive and negative \(\overline{J_{x}J_{y}J_{z}}\) octupolar moments in the CEF groundstate of PrV\({}_{2}\)Al\({}_{20}\), shown via contours of constant magnetic charge (blue and orange lobes indicate positive and negative sign of magnetic charge, respectively). Panel (a1,a2) shows the case for \(\epsilon=H=0\); Panel (b1,b2) illustrates the effect of a \(z\)-axis magnetic field \(H_{z}\); and panel (c1,c2) illustrates the effect of a \(\Gamma_{5}\) strain \(\epsilon_{xy}\). The magnetic charge density distribution shown in (a1) and (a2) are time-reversal pairs. (d) Schematic of the three components of \(H_{i}\epsilon_{jk}\) with cuboids illustrating the lattice deformations with respect to the undeformed lattice (dashed lines) and red arrows the perpendicular magnetic field. (e,f) Energetic splitting of the ground state doublet (e, see text and Supplementary Materials) for selected linear cuts in the 2D space of applied strain and magnetic field (f). (g) Illustration of the orientation of applied uniaxial stress and magnetic field (both along [111]) in our experiment with respect to the principal cubic axes. The strain controlled in the experiments \(\varepsilon_{111}\equiv\Delta L_{111}/L_{111}\) is also indicated.
a \(H^{3}\) splitting from third order processes is expected; the splitting is significantly enhanced in the presence of both strain and magnetic fields (paths C-D and B-D).
Experimentally, we opted for a configuration with both magnetic field and uniaxial stress oriented along [111] of the single crystal (Fig. 1(g); see Supplementary Materials for more details). This configuration maximizes \(H_{i}\epsilon_{jk}\) (invoking all three permutations) and avoids strain or high-order magnetic field components that couple bilinearly to the allowed quadrupoles in the \(\Gamma_{3}\) CEF manifold (see Supplementary Materials). In other words, this set-up is exclusively sensitive to the octupolar degrees of freedom in the system, at least to the extent that thermal population of the higher energy excited CEF states can be neglected.
The octupolar fluctuations are then probed by the AC elastocaloric effect [24; 25; 26]. We use a commercial Razor-bill CS100 strain cell to deform the sample, and monitor the displacement of the jaws of the strain cell using a capacitor sensor, from which changes in the longitudinal strain in the [111] direction, which we denote as \(\varepsilon_{111}\equiv\Delta L_{111}/L_{111}\), are extracted. The three shear strains, \(\epsilon_{xy}\), \(\epsilon_{yz}\) and \(\epsilon_{zx}\), are equal to each other and proportional to \(\varepsilon_{111}\) (see Supplementary Materials). In the measurement, a static (DC) offset strain is applied to the sample - this arises in part from differential thermal contraction of the material with respect to the materials of the strain cell (unknown in magnitude, meaning that the true 'zero strain point' is unknown) and in part from controlled variation of the strain cell (meaning that we precisely know variation of the strain with respect to this unknown starting point). A small-amplitude AC strain is superimposed (typically of order \(10^{-5}-10^{-4}\)) at a sufficiently high frequency that heat flow in/out of the sample is suppressed and hence these AC sample deformations occur under a quasi-adiabatic condition. Changes in the strain-induced splitting of the CEF doublet then necessarily induce temperature oscillations at the same frequency in order to keep the total entropy constant. The adiabatic elastocaloric temperature oscillation is thus given by [24]
\[\left(\frac{\partial T}{\partial\epsilon}\right)_{S}=-\frac{T}{C_{\epsilon}} \left(\frac{\partial S}{\partial\epsilon}\right)_{T} \tag{1}\]
which, as we will show, offers a window into the octupole susceptibility.
In order to illustrate the expected elastocaloric response we consider a simplified Landau model, in which the free energy is given by
\[F=\frac{a}{2}O^{2}-\lambda_{1}H\epsilon O-\lambda_{2}H^{3}O \tag{2}\]
where \(O\) represents the total octupolar moment of the system, and \(a=a_{0}(T-T^{*})\) the \(T\)-dependent Landau free energy parameter with \(T^{*}\) being the octupolar ordering temperature. \(\lambda_{1}H\epsilon O\) (\(\lambda_{2}H^{3}O\)) describes the allowed bilinear coupling between the octupolar moments and \(H_{i}\epsilon_{jk}\) (\(H_{i}H_{j}H_{k}\)). \(H\) indicates the strength of the [111]-oriented magnetic field and \(\epsilon\) the strength of shear strain forming \(H_{i}\epsilon_{jk}\) with \(H\) (\(\epsilon\) is directly proportional to the measured/controlled \(\varepsilon_{111}\), see Supplemental Material). This model is relevant for sufficiently high temperatures above any octupolar (or competing) phase transition, and for sufficiently small values of \(H\) and \(\epsilon\) (for instance such that the effect of coupling to potential quadrupolar fluctuations can be integrated out to yield such an 'octupolar-only' effective theory).
\[F=\frac{a}{2}O^{2}-\lambda H(\epsilon-H^{2})O \tag{3}\]
Inspection of Eqn. 3 reveals that for non-zero \(H\) the octupolar moment will be zero not at zero applied strain, but at some finite strain wherein the two allowed couplings cancel. Minimizing Eq. 3 with respect to \(O\) yields \(O=\lambda H(\epsilon-H^{2})/a\) and hence \(F=-\lambda^{2}H^{2}(\epsilon-H^{2})^{2}/2a\). From this, an octupolar susceptibility with respect to the composite strain-magnetic field can be defined:
\[\chi_{O}\equiv\left.\frac{\partial O}{\partial(H\epsilon)}\right|_{O=0}= \frac{\lambda}{a} \tag{4}\]
Although susceptibilities are usually defined in the limit of both vanishing conjugate field and vanishing order parameter, given the inequivalence of those two points (\(O\to 0\) and \(H\epsilon\to 0\)) in the phase space in the present case, and in particular given the challenge of precisely identifying the true 'zero strain' point, we suggest the limit of vanishing order parameter (\(O\to 0\)) to be more physically relevant and more experimentally feasible, as we return to below.
Figure 2: **Temperature- and field-dependence of the elastocaloric coefficient in PrV\({}_{2}\)Al\({}_{20}\)** The elastocaloric coefficient \(dT/d\epsilon_{111}\) as a function of the DC longitudinal strain in the [111] direction, \(\varepsilon_{111}\) (a) for a constant temperature \(T=2.5\) K and several representative magnetic fields; and (b) at a constant magnetic field \(\mu_{0}H=10\) T and various representative temperatures. The color scale indicating the respective magnetic field and temperature for each curve is inset.
\(S\) can then be solved for, followed by \(dS/d\epsilon\)
\[S=-\frac{\partial F}{\partial T}=\frac{d\chi_{O}}{dT}\frac{\lambda H^{2}(\epsilon- H^{2})^{2}}{2} \tag{5}\]
\[\frac{\partial S}{\partial\epsilon}=\lambda\frac{d\chi_{O}}{dT}H^{2}(\epsilon- H^{2}) \tag{6}\]
Thus, for small perturbations from the zero-octupole point, and sufficiently far away from any phase transition, the elastocaloric coefficient \(dT/d\epsilon\) is anticipated to be linearly dependent on \(\epsilon\), with a slope that is quadratic in \(H\). Determination of the coefficient of the strain- and field-dependence of the elastocaloric effect as a function of temperature then allows the extraction of the temperature-dependence of \(d\chi_{O}/dT\) and thus of \(\chi_{O}\). The mean-field expectation for \(\chi_{O}\) based on Eqn. 2 is a Curie-Weiss functional form, and thus \(d\chi_{O}/dT\) is anticipated to follow \((T-T^{*})^{-2}\), as has been previously observed for nematic and quadrupolar-strain susceptibilities measured by a similar strategy [25; 27; 28].
As a starting point for examining the octupolar response in PrV\({}_{2}\)Al\({}_{20}\), the elastocaloric coefficient \(dT/d\varepsilon_{111}\) is examined at various fields/temperatures (Fig. 2). In Fig. 2(a) we show \(dT/d\varepsilon_{111}\) as a function of \(\varepsilon_{111}\), measured with and without a magnetic field at \(T=2.5\) K. In the absence of a magnetic field, the signal is small and approximately linear w.r.t. strain, likely due to the small \(\Gamma_{5}\) quadrupolar susceptibility originating from excited crystal field states mentioned above. With increasing \(H\), the slope of \(dT/d\varepsilon_{111}\) as a function of \(\varepsilon_{111}\) is found to rapidly increase, as expected from the Landau model (Eqs. 2-6). Thus, the assumption of CEF-forbidden contributions being much weaker than the octupolar contribution is made for the remainder of the analysis. We also note that the zero intercept of \(dT/d\varepsilon_{111}\) curves in Fig. 2(a) indeed vary with \(H\), representing a shift in the strain required for cancellation of \(H\epsilon\) and \(H^{3}\) couplings with each other as a function of field discussed above.
The bias-strain-dependence of \(dT/d\varepsilon_{111}\) at fixed field (\(\mu_{0}H=10\) T) and for selected representative temperatures is shown in Fig. 2(b), revealing a slope that rapidly increases upon cooling. It should be noted that the traces in both Fig. 2(a) and (b) become significantly more curved as their average slope increases at lower \(T\) and higher \(H\). This can be attributed to \(\epsilon\)-non-linearity when \(O\) and the corresponding field-induced shear quadrupoles (see Fig. 1(b), \(\propto OH\)) are far from zero. The simple Landau picture described above should nevertheless hold in the region where \(O\to 0\), which is reasonably approximated (but not perfectly matched, due to potential effects from non-symmetry-breaking strain) by the zero of \(dT/d\varepsilon_{111}\). This motivates taking the strain-derivative of \(dT/d\varepsilon_{111}\) in the limit \(dT/d\varepsilon_{111}\to 0\). To do this, \(dT/d\varepsilon_{111}\) as a function of \(\varepsilon_{111}\) was fit for each temperature with a 2\({}^{\text{nd}}\)-order polynomial and the derivative evaluated at the polynomial's zero intercept to extract \(d^{2}T/d\varepsilon_{111}^{2}\) in the limit of vanishing order parameter as well as vanishing higher-order contributions. At higher \(T\) (above 10K) where \(dT/d\varepsilon_{111}\) is linear with \(\varepsilon_{111}\) (see Supplementary Materials) we simply use the slope of the linear fits to \(dT/d\varepsilon_{111}\) to determine \(d^{2}T/d\varepsilon_{111}^{2}\).
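As a concrete illustration of this extraction step, a minimal numerical sketch is given below; it is not the analysis code used for this work, and the rule for selecting the relevant root of the fitted polynomial is an illustrative assumption.

```
import numpy as np

def d2T_deps2_at_zero(eps, dT_deps):
    # Fit dT/d(eps_111) vs. DC strain with a 2nd-order polynomial and return its
    # derivative evaluated at the polynomial's zero intercept (proxy for O -> 0).
    c2, c1, c0 = np.polyfit(eps, dT_deps, 2)
    roots = np.roots([c2, c1, c0])
    roots = roots[np.isreal(roots)].real
    x0 = roots[np.argmin(np.abs(roots - np.mean(eps)))]  # root closest to the measured strain window
    return 2.0 * c2 * x0 + c1
```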
The extracted \(d^{2}T/d\varepsilon_{111}^{2}\) with \(H\) is summarized in Fig. 3(a) and its evolution over the entire \(H-T\) plane in Fig. 3(b); it is apparent that \(d^{2}T/d\varepsilon_{111}^{2}\) increases monotonically with \(H\) and the most prominent response can be found at low \(T\) and high \(H\). The data at each temperature were fit to \(\alpha+\beta H^{2}\) where \(\alpha\) and \(\beta\) are constants (Fig. 3(a)), with the small \(\alpha\) coefficient deriving from the \(\Gamma_{5}\) quadrupole moments described earlier (see Supplementary Materials for a discussion on the range of fitting used to extract \(\alpha\) and \(\beta\)). The quadratic coefficient \(\beta\) obtained by this procedure for each \(T\) is summarized in Fig. 3(c): an onset of measurable signal is observed from below approximately 15 K, and exhibits a strong increase with decreasing \(T\), before rolling over below 2.5 K. The characteristic 'fishtail' shape has been observed for similar low-temperature elastocaloric measurements of the quadrupolar material TmVO\({}_{4}\)[29] and is anticipated for an entropy landscape in which contributions to the total entropy arise from the splitting of the CEF groundstate (here induced by the \(H_{i}\epsilon_{jk}\) effective field) as well as the largely strain-independent phonon contribution.
According to the thermodynamic argument presented earlier, in the weak-field limit the \(H^{2}\) coefficient reflects the combined effects of the octupolar susceptibility and the heat capacity via
\[\frac{d^{2}T}{d\epsilon^{2}}=-\frac{\partial^{2}S}{\partial\epsilon^{2}} \frac{T}{C_{\epsilon}}=-\frac{T}{C_{\epsilon}}\frac{d\chi_{O}}{dT}H^{2} \tag{7}\]
Hence, we can extract \(d\chi_{O}/dT\) from the \(\beta\) coefficients by multiplying by \(C_{\epsilon}/T\).
We use the zero field \(C_{p}/T\) (Fig. 3(c) inset) as a close approximation of \(C_{\epsilon}\), since the \(\Gamma_{3}\) doublet in principle remains un-split in this case, which should also hold true near the zero octupole point. In practice, subtle strain effects associated with the heat capacity measurements (_i.e._ strain-induced splitting of the CEF multiplet that arise when the sample is held on a sample platform for heat capacity measurements) may potentially affect the lowest temperature data. The resulting estimate of \(d\chi_{O}/dT\) as a function of \(T\) is shown in the main panel of Fig. 3(d).
Fitting the data away from the low-\(T\) saturation (\(T>3\) K) to the \((T-T^{*})^{-2}\) form expected for \(d\chi_{O}/dT\) (black solid line) yields a remarkably good fit to this Curie-Weiss behavior, with \(T^{*}=(0.5\pm 0.15)\) K and a Curie constant \(C=(6.5\pm 0.7)\) J/mol/K/T\({}^{2}\) (for the dependence of the fitting results on the temperature range, see Supplementary Materials). Similar \((T-T^{*})^{-2}\) behavior has been previously observed for other entropy-based detections of the susceptibility responses of lower rank multipoles using magnetocaloric [30] and elastocaloric effects [25; 29; 31].
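For concreteness, the Curie-Weiss fit described above can be sketched as follows; this is an illustration on synthetic input (the overall sign convention and the starting values are assumptions), not the analysis code used for this work.

```
import numpy as np
from scipy.optimize import curve_fit

def dchi_dT_model(T, C, T_star):
    # Mean-field form chi_O = C / (T - T*), hence d(chi_O)/dT = -C / (T - T*)**2
    return -C / (T - T_star) ** 2

# Synthetic stand-in for the extracted d(chi_O)/dT above 3 K (the real values come from the
# beta coefficients multiplied by C_eps/T, as in Eq. 7).
T = np.linspace(3.0, 15.0, 30)
dchi_dT = dchi_dT_model(T, 6.5, 0.5) * (1 + 0.05 * np.random.default_rng(0).normal(size=T.size))
popt, pcov = curve_fit(dchi_dT_model, T, dchi_dT, p0=(1.0, 0.0))
C_fit, T_star_fit = popt
print(C_fit, T_star_fit)
```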
Deviations from the para-octupolar Curie behavior below approximately 3 K might reflect a variety of possible scenarios. Subtle strain-dependence of the heat capacity mentioned above could affect the deduced temperature dependence of \(\chi_{O}\) at low temperatures [29]. The effect might also reflect more complicated interactions with the competing order in the quadrupolar channel known to be present in PrV\({}_{2}\)Al\({}_{20}\)[14], or might even be related to the onset of a low-temperature multipolar Kondo effect, partially screening the growing octupole susceptibility [12; 32; 1]. These possibilities, which, importantly, do not affect the key observation of a Curie-Weiss octupolar susceptibility at higher temperatures, can potentially be resolved by high-precision low-temperature heat capacity measurements performed as a function of strain.
The robust observation of a diverging octupole susceptibility up to temperatures approximately a factor of 40 greater than the critical temperature clearly reveals the presence of significant octupolar fluctuations in PrV\({}_{2}\)Al\({}_{20}\). The uncertainty in the Weiss temperature \(T^{*}\) relative to its absolute value is quite large, but a window of two standard deviations includes both observed critical temperatures, implying that octupolar order is potentially a viable competing state in PrV\({}_{2}\)Al\({}_{20}\), if not perhaps even realized at the lower temperature phase transition [14].
The composite nature of the effective field \(H_{i}\epsilon_{jk}\) used here allows a simple protocol to isolate magnetic octupolar contributions to the total entropy, providing insight into the fluctuations of an otherwise elusive 'hidden' order. More broadly, we anticipate the tools and methodology demonstrated in this study can be used to explore fluctuations (and be naturally extended to also probe and manipulate the corresponding ordered states) in the wider class of noncollinear antiferromagnets [5] and altermagnets [9; 10], along with other novel types of hidden order following similar symmetry-based arguments [33].
We acknowledge fruitful discussions with M. Ikeda, Y. B. Kim and A. Paramekanti. This work was supported by the Gordon and Betty Moore Foundation Emergent Phenomena in Quantum Systems Initiative through Grant GBMF9068. L.Y. acknowledges partial support from the National Science Foundation (DMR2232515) and the Marvin Chodorow Postdoctoral Fellowship at the Department of Applied Physics, Stanford University.
|
2309.13429 | Modeling Student Performance in Game-Based Learning Environments | This study investigates game-based learning in the context of the educational
game "Jo Wilder and the Capitol Case," focusing on predicting student
performance using various machine learning models, including K-Nearest
Neighbors (KNN), Multi-Layer Perceptron (MLP), and Random Forest. The research
aims to identify the features most predictive of student performance and
correct question answering. By leveraging gameplay data, we establish complete
benchmarks for these models and explore the importance of applying proper data
aggregation methods. By compressing all numeric data to min/max/mean/sum and
categorical data to first, last, count, and nunique, we reduced the size of the
original training data from 4.6 GB to 48 MB of preprocessed training data,
maintaining high F1 scores and accuracy.
Our findings suggest that proper preprocessing techniques can be vital in
enhancing the performance of non-deep-learning-based models. The MLP model
outperformed the current state-of-the-art French Touch model, achieving an F-1
score of 0.83 and an accuracy of 0.74, suggesting its suitability for this
dataset. Future research should explore using larger datasets, other
preprocessing techniques, more advanced deep learning techniques, and
real-world applications to provide personalized learning recommendations to
students based on their predicted performance. This paper contributes to the
understanding of game-based learning and provides insights into optimizing
educational game experiences for improved student outcomes and skill
development. | Hyunbae Jeon, Harry He, Anthony Wang, Susanna Spooner | 2023-09-23T16:53:07Z | http://arxiv.org/abs/2309.13429v1 | # Modeling Student Performance in Game-Based Learning Environments
###### Abstract
This study investigates game-based learning in the context of the educational game "Jo Wilder and the Capitol Case," focusing on predicting student performance using various machine learning models, including K-Nearest Neighbors (KNN), Multi-Layer Perceptron (MLP), and Random Forest. The research aims to identify the features most predictive of student performance and correct question answering. By leveraging gameplay data, we establish complete benchmarks for these models and explore the importance of applying proper data aggregation methods. By compressing all numeric data to min/max/mean/sum and categorical data to first, last, count, and nunique, we reduced the size of the original training data from 4.6 GB to 48 MB of preprocessed training data, maintaining high F1 scores and accuracy.
Our findings suggest that proper preprocessing techniques can be vital in enhancing the performance of non-deep-learning-based models. The MLP model outperformed the current state-of-the-art French Touch model, achieving an F-1 score of 0.83 and an accuracy of 0.74, suggesting its suitability for this dataset. Future research should explore using larger datasets, other preprocessing techniques, more advanced deep learning techniques, and real-world applications to provide personalized learning recommendations to students based on their predicted performance. This paper contributes to the understanding of game-based learning and provides insights into optimizing educational game experiences for improved student outcomes and skill development.
## 1 Introduction
A radical shift that is currently transforming the field of education is the adoption of technology to facilitate student learning and growth in school; widespread usage of tablets, laptops, and mobile devices in primary and secondary education enables a universe of possibilities in terms of leveraging digital technology to enhance student learning (Schindler et al., 2017).
In particular, game-based learning is an educational intervention that has received extensive interest in the field of pedagogy, for three key reasons. First, it has been proposed to increase student engagement, given that the elements of discovery, immersion, and feedback inherent to game-based environments can pique students' curiosity and motivate them to master the accompanying educational material. Existing evidence demonstrates that students in primary and secondary school exhibit lower learning anxiety when playing educational games instead of listening to traditional lectures, in addition to an increase in both motivation and academic performance within the context of several STEM disciplines (Hung et al., 2014; Cliglu and Ustun, 2023). Second, a key advantage of game-based learning and digital approaches in general is the increased accessibility and scale; akin to online platforms like Coursera and Udemy, these interventions could address educational inequalities and widen access to millions of students globally, particularly those in low-income homes or developing countries (Haleem et al., 2022). Lastly, game-based learning offers the possibility of personalized approaches, such that the games are tailored and adapted to the skill level of each student. Thus, in spite of the fundamentally challenging and time-consuming task of developing educational games that are in alignment with current pedagogical guidelines and that cover different academic disciplines, work in the domain of game-based learning is crucial given its ability to promote student outcomes and skill development.
However, for game-based learning approaches to be effective, the underlying engine must accurately predict a student's performance throughout the game; otherwise, feedback (e.g., hints, suggestions) and in-game checkpoints will not be implemented appropriately, such that students remain engaged with the material and are continually improving as they learn from the game. Despite significant advances in the development of several game-based frameworks (Min et al., 2017) and theoretical student learning models (Mayer, 2019), research on knowledge tracing has lagged behind, primarily due to the lack of large-scale, open-source datasets from game-based learning platforms that have amassed a significant population of student users. Furthermore, though knowledge tracing and characterization of a student's mastery level have been investigated in intelligent tutoring systems (Haridas et al., 2019), there has been little work focused on evaluating what specific features of a game-based learning environment support effective learning experiences for student; this is crucial given its implications in question design and pedagogical guidelines, in addition to the idea that we can only iterate and improve on these game-based approaches if we know what parts of the gameplay experience are driving the most and least learning.
In order to address the paucity of available evidence in these areas, we leverage student gameplay data from _Jo Wilder and the Capitol Caese_, a popular educational game designed for students in elementary school and focused on the skills of historical analysis and reading comprehension; the dataset was released to encourage efforts to improve predictive student modeling and knowledge tracing (discussed above), which will ultimately help improve the gameplay design and enhance learning outcomes (Wardrip et al., 2022). We hypothesize that by leveraging one of the only existing compiled databases of student gameplay interactions in a historical inquiry game, we can better determine the features that are most predictive of student performance and correct question answering. In addition, our empirical findings establish complete benchmarks in terms of several machine learning models, which will be discussed in Section 3. In summary, this paper makes three main contributions as follows:
1. We leverage an open-access dataset from the _Jo Wilder_ online educational game, consisting of 23.6 million user interactions segmented across multiple game sessions, questions, and levels designed for 3rd to 5th grade history class students; we release the code and data for reproducibility at [https://github.com/HarryJeon24/Student_Performance](https://github.com/HarryJeon24/Student_Performance).
2. We determine off-the-shelf, benchmark performance for Random Forest, K-Nearest Neighbors (KNN), and Multilayer Perceptron (MLP) models with standard preprocessing criteria to discern baseline scores in terms of accuracy and F1 scores; we achieve the state-of-the-art performance on the _Jo Wilder_ dataset with a F1 score of 0.83 using the MLP model, which is higher than the current best contender with a F1 score of 0.72.
3. We establish benchmark performance for the multilayer perceptron, which is the first neural network model to be run on the dataset; details on parameters and performance will be discussed in Section 4.
This work will facilitate the further development of game-based learning approaches, especially in the context of question and gameplay design; to the best of our knowledge, this is the first published work on the _Jo Wilder_ educational game dataset using state-of-the-art machine learning models for student knowledge tracing.
## 2 Background and Related Work
Recent work on evaluating game-based learning approaches has spanned a variety of domains in STEM and the humanities, including but not limited to biology, mathematics, and reading comprehension (Hung et al., 2014; Min et al., 2017; Cliglu and Ustun, 2023). However, most of the existing research has taken the perspective of investigational studies, focused on interviewing students and understanding factors such as their motivation, engagement, and performance while playing different educational games; this is in contrast to directly modeling student knowledge (i.e., knowledge tracing) and leveraging computational approaches to determine how much a student has mastered the material.
Hui-Chun et al. (2021) improved on these qualitative approaches by introducing quantitative metrics to represent a student's comprehension of different mathematical concepts in a primary school educational game, including competencies such as the multiplication of integers and fractions. By scoring each question in the game with a relevancy index (i.e., to explicitly label what learning objective each question was designed to address), the in-game checkpoint assessment scores were used to decide which of three alternative learning paths was presented to the students; thus, this allowed the underlying game engine to adopt a diagnostic approach, such that students who have not yet attained the expected learning outcomes are instructed with further supplementary content. The post-gameplay scores demonstrated that students on average scored approximately 13% higher than students who listened to a traditional lecture (Hui-Chun et al., 2021). Despite these advances, however, it is evident that multiple layers of information were left out from consideration, including but not limited to how much time was spent in each game session, what text the students saw, which questions they spent the most time on, where they placed their cursor, and what coordinates in the game they clicked on.
In particular, Geden et al. (2020) utilized similar time-stamped gameplay features (in-game assessment scores, level of the game that a student is on, what the students are clicking and which gameplay actions they engage in) to predict student microbiology content knowledge, achieving the optimal F1-score of 0.65 with a linear support vector machine (SVM) and 0.63 with k-nearest neighbors (KNN). These features are similar to those that are available to us with the _Jo Wilder_ dataset, which will be further explored in Section 3; however, since this dataset is novel, we are unable to disclose any comparative metrics. To the best of our knowledge, machine learning approaches to model student mastery have not been applied to a historical inquiry game of this scale.
Furthermore, it is important to note that several pioneering studies have begun to incorporate multimodal data to augment model performance in terms of knowledge tracing; Emerson et al. (2020) took advantage of not only student gameplay interactions, but also data on facial expression and eye gaze captured by camera sensors on the computer workstations students were using. Applying a logistic regression model, a F1-score of 0.607 was achieved, demonstrating that complex multimodal, data inputs may not necessarily translate to better knowledge tracing as compared to previous work by Geden et al. (2020). As such, in the context of this study, we decide to make use of only gameplay interactions, and address gaps in the literature by examining whether certain features from student in-game interaction data from a historical inquiry game disproportionately account for accurate predictions on whether a student correctly answers an in-game assessment question or not (i.e., as a proxy for student mastery of the material). Our work is distinguished because we propose to evaluate the existing dataset with a neural network model (i.e., multilayer perceptron), which has not been experimented before to the best of our knowledge.
## 3 Approach/Method
### Machine Learning Techniques
We trained K-Nearest Neighbors (KNN), Multi-layered Perceptron, and Random Forest models on our data.
KNN is a commonly used machine learning classifier that is useful because it does not make assumptions about the underlying distribution of the data. In KNN, the training data is saved, and, when presented with a new observation to classify, the points that are most similar to the new observation are used to classify it by selecting the most common label of the points.(Pouriyeh, 2023c) The similarity of two points is calculated using a distance function such as Euclidean distance, Manhattan distance, or cosine similarity. This function is chosen at the time of training the KNN model. Euclidean Distance is defined as:
\[D(x,y)=\sqrt{\sum_{i=1}^{m}(x_{i}-y_{i})^{2}}\]
Manhattan Distance is defined as:
\[D(x,y)=\sum_{i=1}^{m}|x_{i}-y_{i}|\]
Cosine similarity is defined as:
\[\begin{split} similarity&=cos(\theta)\\ &=\frac{A\cdot B}{\|A\|\|B\|}\\ &=\frac{\sum_{i=1}^{n}A_{i}B_{i}}{\sqrt{\sum_{i=1}^{n}A_{i}^{2}} \sqrt{\sum_{i=1}^{n}B_{i}^{2}}}\end{split} \tag{1}\]
The number of most similar points to use is represented by the hyperparameter k. One of KNN's main drawbacks is that it suffers from the curse of dimensionality: as the number of features increases, the number of observations needed to obtain good accuracy increases exponentially.[14] In this case, this is not an issue, as our dataset is large and we are not using that many features. Another drawback of KNN is that it is computationally expensive, as the model needs to store all of the training data and calculate each distance. However, the simplicity of this machine learning algorithm and the fact that it does not make assumptions about the data distributions are incredibly useful and make it a good fit for our data.
```
Input: Training data (X, Y), test data X_test, number of neighbors K
Output: Predicted class labels for test data Y_test
for i = 1 to N_test do
    Calculate the distance d(X_test^(i), X^(j)) between X_test^(i) and all training data points X^(j)
    Sort the distances in ascending order and select the K nearest neighbors
    Assign the majority class label of the K neighbors as the predicted class label for X_test^(i)
end for
```
**Algorithm 1** K-Nearest Neighbors Algorithm
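As a complement to the pseudocode, a minimal scikit-learn sketch of the same classifier is shown below; the synthetic data merely stands in for the aggregated gameplay features, and the parameter choices here are illustrative.

```
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the aggregated per-level features and correctness labels.
X, y = make_classification(n_samples=1000, n_features=11, random_state=42)
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
knn.fit(X[:800], y[:800])
print(knn.score(X[800:], y[800:]))  # mean accuracy on the held-out rows
```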
Multilayered Perceptron is a machine learning technique that uses multiple layers of perceptrons that are fully connected in an artificial neural network. In a single perceptron, inputs (features) are assigned weights. The weighted sum of the features will allow the classifier to predict if an observation is positive or negative based on its comparison to a threshold. The weights and biases are updated iteratively by the model's performance on each data point to minimize 0-1 loss. The equation is shown below:
\[a=\phi(\sum_{j}w_{j}x_{j}+b)\]
where a is the perceptron's activation, \(x_{j}\) are the perceptron's inputs or the features of an observation, \(w_{j}\) are the weights, and b is the bias. We are summing over all of the inputs (features), so \(w_{j}\) represents the weight for the jth feature.[14] \(\phi\) is the activation function, which can be set in the training phase. It can be set to a linear classifier or a logistic classifier, for example.
With a single perceptron, the data must be linearly separable, but with multiple layers, the data does not need to be linearly separable. [14] Multilayered Perceptron is a feed-forward neural network which means that the perceptron nodes are arranged in such a way that there are no cycles and a later layer will not feed into a previous layer. [13] The inputs of the next layer are the outputs of the previous layer. Thus if we have layer 1:
\[a_{1}=\phi(\sum_{j}w_{j}x_{j}+b)\]
(where \(a_{1}\), \(w_{j}\), and \(b\) are vectors whose length is the number of units, or width, of the layer). The output, \(a_{1}\), feeds into layer 2:
\[a_{2}=\phi(\sum_{j}w_{j}a_{1,j}+b)\]
and so on.
One of the drawbacks of neural networks is that the results are not interpretable and the model does not produce information about the correlations of the features to the label or the feature importance. [14] This algorithm's strengths lie in the fact that it is able to work with complex, non-linear problems and it benefits from large inputs of data. As we have a lot of data and a rather complicated problem, this model is well suited to our data. [15]
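To make the layer-to-layer flow concrete, the following minimal NumPy sketch performs a two-layer forward pass; the layer widths, random weights, and ReLU activation are illustrative assumptions rather than a specific trained model.

```
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
x = rng.normal(size=11)                       # one observation with 11 features
W1, b1 = rng.normal(size=(128, 11)), np.zeros(128)
W2, b2 = rng.normal(size=(2, 128)), np.zeros(2)
a1 = relu(W1 @ x + b1)                        # first layer: a1 = phi(W1 x + b1)
a2 = W2 @ a1 + b2                             # second layer takes a1, the output of layer 1, as its input
print(a2)
```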
The third machine learning technique that we used is Random Forest. In Random Forest, multiple decision trees are modeled on the data in order to produce multiple predictions which are combined together in a majority vote to produce one prediction.
In a decision tree, the data is partitioned using hyperrectangles. Decision trees take the dataset, select the "best" feature, and split the data into subsets (based on a threshold value or categories). There are two commonly used ways of finding the "best" feature to split the data on: information gain using entropy and Gini impurity. [25] Entropy is calculated as:
\[\text{Entropy}=-\sum_{i=1}^{n}p_{i}log_{2}p_{i}\]
Where \(p_{i}\) is the probability of randomly selecting an example in class \(i\). Gain is calculated as:
\[\text{Gain}(S,A)=\text{Entropy}(S)-\sum_{v\in Values(A)}\frac{|S_{v}|}{|S|}\text{Entropy}(S_{v})\]
Where Values(A) is the set of all possible values for attribute A, and \(S_{v}\) is the subset of S for which attribute A has value \(v\). Gini Impurity is calculated as:
\[\text{Gini Impurity}=1-\sum_{i=1}^{n}p_{i}^{2}\]
[25]
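To ground these definitions, the following small sketch computes entropy, information gain, and Gini impurity with pandas; the toy columns are illustrative and do not correspond to the actual training table.

```
import numpy as np
import pandas as pd

def entropy(labels: pd.Series) -> float:
    p = labels.value_counts(normalize=True)
    return float(-(p * np.log2(p)).sum())

def information_gain(df: pd.DataFrame, feature: str, target: str) -> float:
    # Entropy(S) minus the population-weighted entropy of the subsets induced by `feature`.
    weighted = sum(len(sub) / len(df) * entropy(sub[target]) for _, sub in df.groupby(feature))
    return entropy(df[target]) - weighted

def gini_impurity(labels: pd.Series) -> float:
    p = labels.value_counts(normalize=True)
    return float(1.0 - (p ** 2).sum())

toy = pd.DataFrame({"music": [0, 0, 1, 1, 1, 0], "correct": [0, 1, 1, 1, 0, 0]})
print(information_gain(toy, "music", "correct"), gini_impurity(toy["correct"]))
```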
## 4 Experiments
### Data Exploration
As stated in the introduction, our data comes from students interacting with the _Jo Wilder and the Capitol Case_ game. The data consists of 23.6 million user interactions, which include clicking on multiple types of objects, hovering over maps or objects, and reaching checkpoints. Each observation comes from one of 23.5 thousand user sessions. Each session is split into 23 levels (0 to 22). At the end of each level, there is a question. Data from the first 18 questions were provided in a separate data set. We have 424 thousand observations. Almost 300 thousand of the questions were answered correctly, and the other 124 thousand were answered incorrectly.
Features:
* _session_id_ - the ID of the session in which the event takes place
* _index_ - the index of the event
* _elapsed_time_ - the amount of time (in milliseconds) between the start of the session and the event
* _event_name_ - the name of the event type
* _name_ - the name of the event
* _level_ - the level that the event took place in (0 to 22)
* _page_ - the page number of the event in notebook events
* _room_coor_x_ - the x-coordinate of the click in the in-game room
* _room_coor_y_ - the y-coordinate of the click in the in-game room
* _screen_coor_x_ - the x-coordinate of the click on the player's screen
* _screen_coor_y_ - the y-coordinate of the click on the player's screen
* _hover_duration_ - the duration of hover events (in milliseconds)
* _text_ - the text the player sees during this event
* _fqid_ - the fully qualified ID of the event
* _room_fqid_ - the fully qualified ID of the room the event took place in
* _text_fqid_ - the fully qualified ID of the text that the player sees
* _fullscreen_ - 1 if the user played in fullscreen mode, 0 otherwise
* _hq_ - 1 if the user played the game in high quality, 0 otherwise
* _music_ - 1 if the user played the game with music, 0 otherwise
* _level_group_ - combines levels and questions into groups via bins (0-4, 5-12, 13-22)
### Pre-Processing
#### 4.2.1 Feature Extraction
As we had many more observations than we did labels, we needed to aggregate the data for each level. We tried many methods in order to find the best method for each feature. For numerical features, we tried mean, sum, min, and max for aggregation. For categorical features, we tried first, last, count, and nunique.
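A minimal pandas sketch of this aggregation step is shown below; the grouping keys and the exact column lists are illustrative assumptions, and only the aggregation functions follow the description above.

```
import pandas as pd

NUMERIC = ["elapsed_time", "level", "room_coor_x", "room_coor_y",
           "screen_coor_x", "screen_coor_y", "music"]
CATEGORICAL = ["event_name", "name", "fqid", "room_fqid"]

def aggregate(events: pd.DataFrame) -> pd.DataFrame:
    aggs = {c: ["min", "max", "mean", "sum"] for c in NUMERIC}
    aggs.update({c: ["first", "last", "count", "nunique"] for c in CATEGORICAL})
    out = events.groupby(["session_id", "level_group"]).agg(aggs)
    out.columns = ["_".join(col) for col in out.columns]  # e.g. "elapsed_time_sum", "fqid_count"
    return out.reset_index()
```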
#### 4.2.2 Feature Selection
We decided to drop columns that were missing large amounts of data. Thus, we dropped _page_, _hover_duration_ and _text_fqid_.
Additionally, we decided to drop the _text_ column that contains natural language as its values are unstructured data and not particularly useful in this case since they consist of character dialogue that the game generates, and do not add additional information about the player's interactions with the game.
We use Pearson correlation coefficients in order to choose which features to include in our data (see figure 1).
Pearson correlation for the population is calculated as:
\[\rho_{X,Y}=\frac{cov(X,Y)}{\sigma_{X}\sigma_{Y}}\]
and Pearson correlation for samples is calculated as:
\[r_{xy}=\frac{\sum_{i=1}^{n}(x_{i}-\bar{x})(y_{i}-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_{i}-\bar{x})^{2}}\sqrt{\sum_{i=1}^{n}(y_{i}-\bar{y})^{2}}}\]
(Pouriyeh, 2023b)
The result is a number from [-1, 1] where 1 represents a perfect positive correlation, -1 represents a perfect negative correlation, and 0 represents completely uncorrelated variables. Thus, larger absolute correlation values between the label feature and another feature indicates that the feature is more important. (Pouriyeh, 2023b) In figure 1, the rightmost column and bottom-most row represent the label and its correlation to the other features. None of the correlations are particularly large aside from the correct label being correlated to itself (which makes sense). However, there are a few features such as fqid_count that have slightly larger (absolute) values compared to the rest.
We selected the aggregation method that correlated the most with our label (whether the student answered the question correctly or not) and made sure that none of the features that we kept were correlated with each other.
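As a toy illustration of this screening step (with synthetic columns standing in for the aggregated features), one can rank candidate aggregations by their absolute Pearson correlation with the label:

```
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"fqid_count": rng.poisson(30, size=500),
                   "elapsed_time_sum": rng.gamma(2.0, 1e5, size=500)})
df["correct"] = (df["fqid_count"] + rng.normal(0, 10, size=500) > 30).astype(int)

# Rank candidate features by |Pearson r| with the label.
ranking = df.corr()["correct"].drop("correct").abs().sort_values(ascending=False)
print(ranking)
```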
Additionally, we used information gain as another method of checking which of our variables are most important. We calculated the mutual information criterion (information gain) between all of the features and the label. Mutual information measures the dependency between two variables, or how closely related two variables are, so the features that have higher mutual information scores are more important to our model compared to features that have lower scores that are close to 0. A feature score of 0 indicates that the features are independent of each other. (Pedregosa et al., 2022)
To calculate information gain, the following equation is used:
\[I(X;Y)=\sum_{x,y}p(x,y)log\frac{p(x,y)}{p(x)p(y)}\]
(Pouriyeh, 2023b)
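In practice, such scores can be estimated with scikit-learn's mutual information estimator, as in the minimal sketch below, where synthetic data stands in for the aggregated feature table:

```
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=2000, n_features=11, n_informative=5, random_state=0)
scores = mutual_info_classif(X, y, random_state=0)
for i, score in enumerate(scores):
    print(f"feature_{i}: {score:.4f}")  # higher score = larger dependency with the label
```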
The results of the information gain are shown below:
After examining the Pearson correlation coefficients and information gain (i.e., importance) values, we decided to use the following features based on the highest possible reductions in entropy; the corresponding information gain values are included for reference:
* room_coor_x_mean: the mean coordinate value of the x clicks in the in-game room; information gain value \(\approx 0.0567\)
* room_coor_y_mean: the mean coordinate value of the y clicks in the in-game room; information gain value \(\approx 0.0573\)
* screen_coor_x_mean: the mean coordinate value of the x clicks on the computer screen; information gain value \(\approx 0.0559\)
* screen_coor_y_mean: the mean coordinate value of the y clicks on the computer screen; information gain value \(\approx 0.0553\)
* elapsed_time_sum: the sum of the amount of time between the session start and the events; information gain value \(\approx 0.0558\)
* level_mean: the average of the levels; information gain value \(\approx 0.0431\)
* music_max: 1 if the music was on at all in the level, 0 otherwise; information gain value \(\approx 0.0394\)
* name_nunique: the number of unique event names; information gain value \(\approx 0.0269\)
* room_fqid_nunique: the number of unique room IDs; information gain value \(\approx 0.0268\)
* event_name_nunique: the number of unique event types; information gain value \(\approx 0.0249\)
* fqid_count: the number of IDs; information gain value \(\approx 0.0175\)
#### 4.2.3 Modeling Choices
In this study, we selected three distinct machine learning models to predict whether a student correctly solved a problem based on their performance in gameplay. These models were chosen due to their ability to capture non-linear patterns in the dataset, as identified by the correlation matrix and information gain graph analyses. The selected models are:
1. **Random Forest**: An ensemble learning technique that constructs multiple decision trees and aggregates their predictions, offering improved accuracy and reduced overfitting. This method is well-suited for handling linear and non-linear relationships between features and the target variable.
2. **k-Nearest Neighbors (KNN)**: A straightforward, non-parametric algorithm applicable for both classification and regression tasks. KNN computes the distance between a test point and its k-nearest neighbors in the training set, assigning the most frequent class among the neighbors.
3. **Multi-Layer Perceptron (MLP)**: A type of artificial neural network comprising one or more hidden layers, which can effectively capture complex, non-linear relationships between input features and the target variable.
To ensure a reliable evaluation of these models, we employed k-fold cross-validation with 5 folds for Random Forest and MLP, and 10 folds for KNN. This process involved dividing the dataset into k equal parts, training the model on k-1 parts, and testing it on the remaining part, iterating k times. We calculated the accuracy and F1-score for each fold, averaging these scores to obtain a single performance metric for each model.
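A minimal sketch of this evaluation loop, shown here for a Random Forest on synthetic stand-in data, is:

```
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=1000, n_features=11, random_state=42)
cv = cross_validate(RandomForestClassifier(n_estimators=100, random_state=42),
                    X, y, cv=5, scoring=("accuracy", "f1"))
print(cv["test_accuracy"].mean(), cv["test_f1"].mean())  # averaged over the 5 folds
```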
Parameters for each model were determined as follows:
* **Random Forest**: We employed 100 estimators (decision trees) and set the random state to 42.
* **KNN**: We opted for 5 neighbors (k) and pre-processed the data using StandardScaler for numerical features and OneHotEncoder for categorical features before fitting the KNN model.
* **MLP**: The input dimension was set equal to the number of features in the dataset, the hidden layer size to 128, and the output dimension to 2 (assuming binary classification). The neural network was trained for 100 epochs using the Adam optimizer with a learning rate of 0.001 and CrossEntropyLoss as the loss function.
Figure 1: Pearson Correlation Matrix Heatmap.
Figure 2: Bar plot of importance.
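The following minimal PyTorch sketch is consistent with the MLP configuration listed above; the random tensors stand in for the aggregated features, the ReLU activation is an assumption, and this is an illustration rather than the exact training script.

```
import torch
import torch.nn as nn

n_features = 11                                   # assumed number of aggregated input features
model = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(), nn.Linear(128, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(1024, n_features)                 # stand-in for the preprocessed training rows
y = torch.randint(0, 2, (1024,))                  # stand-in for the correct/incorrect labels
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```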
In our analysis, we addressed the potential class imbalance issue by aggregating the data, which resulted in a more balanced dataset. Furthermore, we handled missing values using the mean imputation strategy and normalized the data to improve its quality. These preprocessing steps contributed to a more balanced class distribution, eliminating the need for specific class balancing techniques such as subsampling, oversampling, or alternative methods.
Although we did not incorporate subsampling, oversampling, or other class-balancing techniques in our analysis, it is important to note that these methods can be considered in cases where class imbalance persists despite aggregation and preprocessing. In such scenarios, implementing strategies like random undersampling, random oversampling, or the Synthetic Minority Oversampling Technique (SMOTE) can help to balance the dataset and potentially enhance the model's performance. However, for our specific dataset and problem, the aggregation and preprocessing steps effectively addressed the class imbalance issue, making the use of additional balancing techniques unnecessary.
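To make the setup concrete, the sketch below wires up the three classifiers and the cross-validation loop described above using scikit-learn. It is only an illustrative reconstruction: the synthetic data frame, the column lists, and the use of scikit-learn's MLPClassifier in place of the study's PyTorch network are our own simplifications, not the original pipeline.

```python
# Hedged sketch of the modelling setup: Random Forest, a KNN pipeline with
# StandardScaler/OneHotEncoder, and an MLP stand-in, each scored with k-fold
# cross-validation. The tiny synthetic table below is placeholder data.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(42)
X = pd.DataFrame({
    "elapsed_time_sum": rng.gamma(2.0, 1000.0, 400),
    "level_mean": rng.uniform(0.0, 22.0, 400),
    "name_nunique": rng.integers(1, 12, 400),
    "music_max": rng.integers(0, 2, 400),   # treated as categorical below
})
y = rng.integers(0, 2, 400)
numeric_cols = ["elapsed_time_sum", "level_mean", "name_nunique"]
categorical_cols = ["music_max"]

rf = RandomForestClassifier(n_estimators=100, random_state=42)

preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric_cols),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
])
knn = Pipeline([("prep", preprocess),
                ("knn", KNeighborsClassifier(n_neighbors=5))])

# Stand-in for the PyTorch MLP (hidden size 128, Adam, lr 1e-3, ~100 epochs).
mlp = Pipeline([("prep", preprocess),
                ("mlp", MLPClassifier(hidden_layer_sizes=(128,), solver="adam",
                                      learning_rate_init=1e-3, max_iter=100,
                                      random_state=42))])

for name, model, folds in [("Random Forest", rf, 5), ("KNN", knn, 10), ("MLP", mlp, 5)]:
    f1 = cross_val_score(model, X, y, cv=folds, scoring="f1")
    acc = cross_val_score(model, X, y, cv=folds, scoring="accuracy")
    print(f"{name}: F1={f1.mean():.2f}, accuracy={acc.mean():.2f}")
```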
#### 4.2.4 Empirical Results and Comparisons
Table 1 shows the F-1 score and accuracy for four different models.
As presented in Table 1, the French Touch model, which is considered the current state-of-the-art in predicting student problem-solving performance in gameplay, achieved an F-1 score of 0.72. While the accuracy metric was not available for the French Touch model, its F-1 score outperformed both the Random Forest and KNN models. This demonstrates the efficacy of the French Touch model in handling complex relationships and capturing nuances in the data.
However, the MLP model surpassed the French Touch model in terms of both F-1 score (0.83) and accuracy (0.74), suggesting that the MLP model may be more suitable for this particular dataset. The impressive performance of the MLP model can be attributed to its ability to capture complex, non-linear relationships between input features and the target variable.
## 5 Discussion
### Pre-Processing
Our experimental methods and results have highlighted the importance of applying proper data aggregation methods for training on a large dataset. Specifically, by compressing all numeric data to min/max/mean/sum and all categorical data to first/last/count/unique, we were able to reduce the size of the original training data from 4.6 GB to 48 MB of preprocessed training data. While this process resulted in a loss of information, we obtained trainable data that maps a one-dimensional feature vector to a single label, which our models could read in while maintaining a very high F1 score and accuracy. Additionally, we dropped feature columns that had little relation to other features or provided little information gain to resolve some of the overfitting issues and further improve our model's performance.
Our findings suggest that proper preprocessing techniques can be vital in enhancing the performance of non-deep-learning-based models. In particular, our Random Forest model achieved an accuracy of 0.73 after preprocessing, which was only 0.01 below the 0.74 accuracy achieved by our MLP model. Given the complexity of the dataset, this performance gain is remarkable.
### Model Performances
Our primary finding is the effectiveness of using deep learning models in predicting student game performance. Although our MLP model had similar accuracy to the Random Forest and KNN models, the F1 score for the MLP was significantly higher. This may be due to the non-regressional nature of the features in the dataset. For example, the preprocessed count of unique event names has no natural threshold for the decision trees in the Random Forest model to split on. Similarly, the KNN model is unable to grasp the nuances between the features, such as differences in the type and quality of clicks. Our MLP model addressed these issues by fitting a multi-layer model that captured the complexity of the relationships between features and student performance.

| **Model** | **F-1 Score** | **Accuracy** |
| --- | --- | --- |
| French Touch | 0.72 | N/A |
| Random Forest | 0.65 | 0.73 |
| KNN | 0.56 | 0.70 |
| **MLP** | **0.83** | **0.74** |

Table 1: F-1 score and accuracy for 4 models
### Future Directions
Our study has identified several promising avenues for future research. First, in order to incorporate our research models into real-world applications, we suggest using a larger dataset to provide more generalizable results and a better understanding of the relationships between features and student performance. Additionally, we recommend exploring other preprocessing techniques, such as dimensionality reduction or feature engineering, to further improve model performance.
Another area for future research is to investigate the use of more advanced deep learning techniques, such as Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs), which have been successful in other machine learning domains. These techniques may be particularly useful for capturing temporal relationships between student actions and performance in the game.
Finally, we propose applying our model to real-world scenarios and evaluating its feasibility in providing personalized learning recommendations to students based on their predicted performance. This could involve collaborating with educators to implement our model in an educational setting and evaluating its effectiveness in improving student outcomes. Overall, our study provides a promising starting point for further investigation into using machine learning to improve student performance in educational games.
## Contributions
All members contributed to the pre-processing, feature extraction, feature selection, and model selection. Harry Jeon ran the KNN and multi-layer perceptron models and helped to write experiments/results sections. Harry He trained the Random Forest models and wrote the discussion section. Anthony Wang wrote the background and introduction sections. Susanna Spooner wrote the experiments/results and methods sections.
|
2310.00250 | The oracle property of the generalized outcome adaptive lasso | The generalized outcome-adaptive lasso (GOAL) is a variable selection for
high-dimensional causal inference proposed by Bald\'e et al. [2023, {\em
Biometrics} {\bfseries 79(1)}, 514--520]. When the dimension is high, it is now
well established that an ideal variable selection method should have the oracle
property to ensure the optimal large sample performance. However, the oracle
property of GOAL has not been proven. In this paper, we show that the GOAL
estimator enjoys the oracle property. Our simulation shows that the GOAL method
deals with the collinearity problem better than the oracle-like method, the
outcome-adaptive lasso (OAL). | Ismaila Baldé | 2023-09-30T04:45:20Z | http://arxiv.org/abs/2310.00250v2 | # The oracle property of the generalized outcome adaptive lasso
# The oracle property of the generalized outcome adaptive lasso
Ismaila Balde
Department of mathematics and statistics, Universite de Moncton, Canada
September 2023
**Abstract**
In this paper we show that with a proper choice of tuning parameters (\(\lambda_{1}\), \(\lambda_{2}\)), the generalized outcome adaptive lasso (GOAL) proposed by Balde, Yang and Lefebvre [2023, _Biometrics_**79(1)**, 514-520] enjoys the oracle property.
**Keywords:** Causal inference; GOAL; Oracle property; Propensity score; Variable selection.
## 1 Introduction
Let \(Y\) be a continuous outcome variable, \(\mathbf{X}=(X_{1},\ldots,X_{p})\) a potential confounders matrix and \(A\) a binary treatment. Assume that all \(p\) covariates \(X_{j}\), \(j=1,\ldots,p\) are measured prior to the treatment \(A\) which in turn is measured prior to the outcome \(Y\). We assume the propensity score (PS) model is defined as:
\[\text{logit}\{P(A=1|\mathbf{X})\}=\sum_{j=1}^{p}\alpha_{j}X_{j}.\]
Let \(\mathcal{C}\) and \(\mathcal{P}\) denote indices of confounders (covariates related to both outcome and treatment) and pure predictors of outcome, respectively. Our objective is to estimate the following PS model:
\[\text{logit}\{\pi(X,\hat{\alpha})\}=\sum_{j\in\mathcal{C}}\hat{\alpha}_{j}X_{ j}+\sum_{j\in\mathcal{P}}\hat{\alpha}_{j}X_{j}.\]
The negative log-likelihood function of \(\alpha\) is given by
\[\ell_{n}(\alpha;A,\mathbf{X})=\sum_{i=1}^{n}\left\{-a_{i}(x_{i}^{T}\alpha)+ \log\left(1+e^{x_{i}^{T}\alpha}\right)\right\}. \tag{1}\]
Balde et al. (2023) proposed the generalized outcome adaptive lasso (GOAL):
\[\hat{\alpha}(GOAL)=\arg\min_{\alpha}\left[\ell_{n}(\alpha;A,\mathbf{X})+ \lambda_{1}\sum_{j=1}^{p}\hat{w}_{j}|\alpha_{j}|+\lambda_{2}\sum_{j=1}^{p} \alpha_{j}^{2}\right], \tag{2}\]
where \(\hat{w}_{j}=\left|\hat{\beta}_{j}^{ols}\right|^{-\gamma}\) such that \(\gamma>1\) and \((\hat{\beta}_{A}^{ols},\hat{\beta}^{ols})=\arg\min_{(\beta_{A},\beta)}\left\| Y-\beta_{A}A-\mathbf{X}\beta\right\|_{2}^{2}\).
GOAL is designed to improve OAL (Shortreed and Ertefaie, 2017) for high-dimensional data analysis. Balde (2022) conjectured that GOAL must satisfy the oracle property. The main result of this paper is to prove that, with an appropriately chosen \(\lambda_{1}\), \(\lambda_{2}\) and \(\gamma\), the GOAL estimator performs as well as the oracle.
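For concreteness, the GOAL objective in Equation 2 can be minimized with a simple proximal-gradient loop: the logistic log-likelihood plus the ridge term is smooth, while the weighted \(\ell_{1}\) penalty admits a closed-form soft-thresholding proximal step. The sketch below is our own illustrative implementation; the step size, iteration count, and synthetic data are assumptions, not taken from Baldé et al.

```python
# Hedged proximal-gradient sketch of the GOAL estimator in Equation 2.
import numpy as np

def goal_fit(X, A, Y, lam1, lam2, gamma=2.0, n_iter=5000):
    n, p = X.shape
    # Outcome-adaptive weights from the OLS fit of Y on (A, X); drop the
    # treatment coefficient and keep the covariate coefficients.
    design = np.column_stack([A, X])
    beta = np.linalg.lstsq(design, Y, rcond=None)[0][1:]
    w = np.abs(beta) ** (-gamma)

    # Step size 1/L, where L bounds the Lipschitz constant of the smooth part
    # (logistic likelihood plus the ridge term).
    step = 1.0 / (0.25 * np.linalg.norm(X, 2) ** 2 + 2.0 * lam2)
    alpha = np.zeros(p)
    for _ in range(n_iter):
        prob = 1.0 / (1.0 + np.exp(-X @ alpha))       # logistic mean
        grad = X.T @ (prob - A) + 2.0 * lam2 * alpha  # gradient of smooth part
        z = alpha - step * grad
        # Soft-thresholding: proximal operator of the weighted L1 penalty.
        alpha = np.sign(z) * np.maximum(np.abs(z) - step * lam1 * w, 0.0)
    return alpha

# Tiny synthetic demo: the first covariate is a confounder, the second a pure
# predictor of the outcome; both should be retained, the rest shrunk to zero.
rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.standard_normal((n, p))
A = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))
Y = 2.0 * A + X[:, 0] + X[:, 1] + rng.standard_normal(n)
print(np.round(goal_fit(X, A, Y, lam1=5.0, lam2=1.0), 2))
```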
## 2 Results and proofs
Let \(\mathcal{A}=\mathcal{C}\cup\mathcal{P}=\{1,2,\ldots,p_{0}\}\) be the indices of desirable covariates to include in the estimated PS. Let \(\mathcal{A}^{c}=\mathcal{I}\cup\mathcal{S}=\{p_{0}+1,p_{0}+2,\ldots,p_{0}+(p-p_{0})\}\) be the indices of covariates to exclude, where \(\mathcal{I}\) and \(\mathcal{S}\) are the indices of pure predictors of treatment and spurious covariates (covariates that are unrelated to both outcome and treatment), respectively. We write the Fisher information matrix as
\[\mathbf{F}(\alpha^{*})=E\left(\phi^{{}^{\prime\prime}}(x^{T}\alpha^{*})xx^{T} \right)=\begin{pmatrix}\mathbf{F}_{11}&\mathbf{F}_{12}\\ \mathbf{F}_{21}&\mathbf{F}_{22}\end{pmatrix}, \tag{3}\]
where \(\mathbf{F}_{11}\) is the Fisher information matrix (of size \(p_{0}\times p_{0}\)) for the parsimonious PS.
**Theorem 1**.: _Assume the following regularity conditions:_
1. _The Fisher information matrix_ \(\mathbf{F}(\alpha^{*})\) _defined in Equation_ 3 _is finite and positive definite._
2. _For each_ \(\alpha^{*}\in\mathbf{\Omega}\)_, there exist a function_ \(M_{1}(x)\) _and_ \(M_{2}(x)\) _such that for_ \(\alpha\) _in the neighborhood of_ \(\alpha^{*}\)_, we have:_ \[\left|\phi^{{}^{\prime\prime}}(x,\alpha)\right|\leq M_{1}(x)\quad\text{and} \quad\left|\phi^{{}^{\prime\prime\prime}}(x,\alpha)\right|\leq M_{2}(x)\] _such that_ \[\int M_{1}(x)dx<\infty\quad\text{and}\quad E(M_{2}(x)|x_{j},x_{k},x_{l})< \infty,\quad\forall\quad 1\leq j,k,l\leq p_{0},\] _where_ \(\mathbf{\Omega}\) _is an open parameter space for_ \(\alpha\)_._
3. \(\lambda_{1}/\sqrt{n}\to 0\) _and_ \(\lambda_{1}n^{\gamma/2-1}\rightarrow\infty\)_, for_ \(\gamma>1\)_._
4. \(\lambda_{2}/\sqrt{n}\to 0\)_._
_Then under conditions (C.1) - (C.4) the generalized outcome-adaptive lasso estimator \(\hat{\alpha}(GOAL)\) satisfies the following:_
1. _Consistency in variable selection:_ \(\lim_{n}P\{\hat{\alpha}_{j}(GOAL)=0|j\in\mathcal{I}\cup\mathcal{S}\}=1\)_;_
2. _Asymptotic normality:_ \(\sqrt{n}\{\hat{\alpha}(GOAL)-\alpha_{\mathcal{A}}^{*}\}\rightarrow_{d}N(0, \mathbf{F}_{11}^{-1})\)_._
**Proof of Theorem 1.**
The ideas of the proof are taken from Zou (2006), Khalili and Chen (2007), Slawski et al. (2010) and Shortreed and Ertefaie (2017).
First, we prove the asymptotic normality. Let \(\alpha=\alpha^{*}+\frac{b}{\sqrt{n}}\).
Define \(\mathcal{G}_{n}(b)\) by
\[\mathcal{G}_{n}(b) =\ell_{n}(\alpha^{*}+\frac{b}{\sqrt{n}};A,\mathbf{X})+\lambda_{1} \sum_{j=1}^{p_{0}}\hat{w}_{j}\left|\alpha_{j}^{*}+\frac{b}{\sqrt{n}}\right|+ \lambda_{2}\sum_{j=1}^{p_{0}}(\alpha_{j}^{*}+\frac{b}{\sqrt{n}})^{2}\] \[=\sum_{i=1}^{n}-a_{i}x_{i}^{T}\left(\alpha^{*}+\frac{b}{\sqrt{n}} \right)+\phi\left(x_{i}^{T}\left(\alpha^{*}+\frac{b}{\sqrt{n}}\right)\right)+ \lambda_{1}\sum_{j=1}^{p_{0}}\hat{w}_{j}\left|\alpha_{j}^{*}+\frac{b_{j}}{ \sqrt{n}}\right|+\lambda_{2}\sum_{j=1}^{p_{0}}\left(\alpha_{j}^{*}+\frac{b_{j }}{\sqrt{n}}\right)^{2}.\]
For \(b=0\), we have
\[\mathcal{G}_{n}(0)=\sum_{i=1}^{n}-a_{i}(x_{i}^{T}\alpha^{*})+\phi\left(x_{i}^{T }\alpha^{*}\right)+\lambda_{1}\sum_{j=1}^{p_{0}}\hat{w}_{j}\left|\alpha_{j}^{* }\right|+\lambda_{2}\sum_{j=1}^{p_{0}}{\alpha_{j}^{*}}^{2}.\]
Define \(\widehat{\mathcal{W}}(b)=\mathcal{G}_{n}(b)-\mathcal{G}_{n}(0)\). Thus
\[\widehat{\mathcal{W}}(b) =\sum_{i=1}^{n}\left\{-a_{i}x_{i}^{T}\left(\alpha^{*}+\frac{b}{ \sqrt{n}}\right)+\phi\left(x_{i}^{T}\left(\alpha^{*}+\frac{b}{\sqrt{n}}\right) \right)-\left(-a_{i}(x_{i}^{T}\alpha^{*})+\phi\left(x_{i}^{T}\alpha^{*}\right) \right)\right\}\] \[+\lambda_{1}\sum_{j=1}^{p_{0}}\hat{w}_{j}\left(\left|\alpha_{j}^{ *}+\frac{b_{j}}{\sqrt{n}}\right|-\left|\alpha_{j}^{*}\right|\right)+\lambda_{2 }\sum_{j=1}^{p_{0}}\left(\left(\alpha_{j}^{*}+\frac{b_{j}}{\sqrt{n}}\right)^{2 }-{\alpha_{j}^{*}}^{2}\right)\] \[=\sum_{i=1}^{n}\left\{-a_{i}x_{i}^{T}\frac{b}{\sqrt{n}}+\phi\left( x_{i}^{T}\left(\alpha^{*}+\frac{b}{\sqrt{n}}\right)\right)-\phi\left(x_{i}^{T} \alpha^{*}\right)\right\}\] \[+\lambda_{1}\sum_{j=1}^{p_{0}}\hat{w}_{j}\left(\left|\alpha_{j}^ {*}+\frac{b_{j}}{\sqrt{n}}\right|-\left|\alpha_{j}^{*}\right|\right)+\lambda_ {2}\sum_{j=1}^{p_{0}}\left(\left(\alpha_{j}^{*}+\frac{b_{j}}{\sqrt{n}}\right)^ {2}-{\alpha_{j}^{*}}^{2}\right)\] \[=\mathcal{L}_{n}(b)+\lambda_{1}\sum_{j=1}^{p_{0}}\hat{w}_{j}\left( \left|\alpha_{j}^{*}+\frac{b_{j}}{\sqrt{n}}\right|-\left|\alpha_{j}^{*}\right| \right)+\lambda_{2}\sum_{j=1}^{p_{0}}\left(\alpha_{j}^{*}\left(\frac{2b_{j}}{ \sqrt{n}}\right)+\frac{b_{j}^{2}}{n}\right),\]
where \(\mathcal{L}_{n}(b)=\sum_{i=1}^{n}\left\{-a_{i}x_{i}^{T}\frac{b}{\sqrt{n}}+ \phi\left(x_{i}^{T}\left(\alpha^{*}+\frac{b}{\sqrt{n}}\right)\right)-\phi\left( x_{i}^{T}\alpha^{*}\right)\right\}\).
By using the second-order Taylor expansion of \(\mathcal{L}_{n}(b)\) around \(b=0\), we have:
\[\mathcal{L}_{n}(b) =\sum_{i=1}^{n}\left(-a_{i}x_{i}^{T}\frac{b}{\sqrt{n}}\right)+ \sum_{i=1}^{n}\left(\phi^{{}^{\prime}}\left(x_{i}^{T}\alpha^{*}\right)\left( \frac{x_{i}^{T}b}{\sqrt{n}}\right)\right)+\frac{1}{2}\sum_{i=1}^{n}\phi^{{}^{ \prime\prime}}\left(x_{i}^{T}\alpha^{*}\right)b^{T}\left(\frac{x_{i}x_{i}^{T} }{n}\right)b\] \[+n^{-3/2}\sum_{i=1}^{n}\frac{1}{6}\phi^{{}^{\prime\prime\prime}} \left(x_{i}^{T}\alpha^{*}\right)\left(x_{i}^{T}b\right)^{3}\] \[=-\sum_{i=1}^{n}\left(a_{i}-\phi^{{}^{\prime}}\left(x_{i}^{T} \alpha^{*}\right)\right)\frac{x_{i}^{T}b}{\sqrt{n}}+\frac{1}{2}\sum_{i=1}^{n} \phi^{{}^{\prime\prime}}\left(x_{i}^{T}\alpha^{*}\right)b^{T}\left(\frac{x_{i} x_{i}^{T}}{n}\right)b+n^{-3/2}\sum_{i=1}^{n}\frac{1}{6}\phi^{{}^{\prime\prime\prime}} \left(x_{i}^{T}\alpha^{*}\right)\left(x_{i}^{T}b\right)^{3}.\]
Thus, we can rewrite \(\widehat{\mathcal{W}}(b)\) as
\[\widehat{\mathcal{W}}(b)=R_{1}^{(n)}+R_{2}^{(n)}+R_{3}^{(n)}+R_{4}^{(n)}+R_{5}^ {(n)},\]
with
\[R_{1}^{(n)}=-\sum_{i=1}^{n}\left(a_{i}-\phi^{{}^{\prime}}\left(x_{i}^{T}\alpha^ {*}\right)\right)\frac{x_{i}^{T}b}{\sqrt{n}},\]
\[R_{2}^{(n)}=\frac{1}{2}\sum_{i=1}^{n}\phi^{{}^{\prime\prime}}\left(x_{i}^{T} \alpha^{*}\right)b^{T}\left(\frac{x_{i}x_{i}^{T}}{n}\right)b,\]
\[R_{3}^{(n)}=n^{-3/2}\sum_{i=1}^{n}\frac{1}{6}\phi^{{}^{\prime\prime\prime}} \left(x_{i}^{T}\alpha^{*}\right)\left(x_{i}^{T}b\right)^{3},\]
\[R_{4}^{(n)}=\lambda_{1}\sum_{j=1}^{p_{0}}\hat{w}_{j}\left(\left|\alpha_{j}^{*}+ \frac{b_{j}}{\sqrt{n}}\right|-\left|\alpha_{j}^{*}\right|\right),\]
\[R_{5}^{(n)}=\lambda_{2}\sum_{j=1}^{p_{0}}\left(\alpha_{j}^{*}\left(\frac{2b_{j}} {\sqrt{n}}\right)+\frac{b_{j}^{2}}{n}\right).\]
By applying the central limit theorem and laws of large numbers, we have:
\[R_{1}^{(n)}=-\sum_{i=1}^{n}\left(a_{i}-\phi^{{}^{\prime}}\left(x_{i}^{T}\alpha^ {*}\right)\right)\frac{x_{i}^{T}b}{\sqrt{n}}\to_{d}b^{T}Z,\quad Z\sim N(0,\mathbf{ F}(\alpha^{*})).\]
For the term \(R_{2}^{(n)}\), we observe that
\[\sum_{i=1}^{n}\phi^{{}^{\prime\prime}}\left(x_{i}^{T}\alpha^{*}\right)\left( \frac{x_{i}x_{i}^{T}}{n}\right)\rightarrow_{p}\mathbf{F}(\alpha^{*}),\]
thus
\[R_{2}^{(n)}=\frac{1}{2}\sum_{i=1}^{n}\phi^{{}^{\prime\prime}}\left(x_{i}^{T} \alpha^{*}\right)b^{T}\left(\frac{x_{i}x_{i}^{T}}{n}\right)b\rightarrow_{p} \frac{1}{2}b^{T}\{\mathbf{F}(\alpha^{*})\}b.\]
By the condition (C.2), we observe that
\[6\sqrt{n}\left\{R_{3}^{(n)}\right\}=\frac{1}{n}\sum_{i=1}^{n}\phi^{{}^{\prime \prime\prime}}\left(x_{i}^{T}\tilde{\alpha}_{*}\right)\left(x_{i}^{T}b\right) ^{3}\leq\sum_{i=1}^{n}\frac{1}{n}M_{2}\left(x_{i}\right)|x_{i}^{T}b|^{3} \rightarrow_{p}E\left(M_{2}(x)\left|x^{T}b\right|^{3}\right)<\infty, \tag{4}\]
where \(\tilde{\alpha}_{*}\) is between \(\alpha^{*}\) and \(\alpha^{*}+\frac{b}{\sqrt{n}}\). Equation 4 shows that \(R_{3}^{(n)}\) is bounded. The behavior of \(R_{4}^{(n)}\) and \(R_{5}^{(n)}\) depends on the covariate type. If a covariate \(X_{j}\) is a confounder (\(j\in\mathcal{C}\)) or a pure predictor of the outcome (\(j\in\mathcal{P}\)), that is, \(j\in\mathcal{A}=\mathcal{C}\cup\mathcal{P}\), then \(\alpha_{j}^{*}\neq 0\). If \(\alpha_{j}^{*}\neq 0\), then we have:
\[\frac{\lambda_{1}}{\sqrt{n}}\hat{w}_{j}\sqrt{n}\left(\left|\alpha_{j}^{*}+ \frac{b_{j}}{\sqrt{n}}\right|-\left|\alpha_{j}^{*}\right|\right)=\left(\frac{ \lambda_{1}}{\sqrt{n}}\right)(\hat{w}_{j})\left(\sqrt{n}\left(\left|\alpha_{j }^{*}+\frac{b_{j}}{\sqrt{n}}\right|-\left|\alpha_{j}^{*}\right|\right)\right),\]
with \(\frac{\lambda_{1}}{\sqrt{n}}\rightarrow_{p}0\), \(\hat{w}_{j}\rightarrow_{p}\left|\beta_{j}^{*}\right|^{-\gamma}\) and \(\sqrt{n}\left(\left|\alpha_{j}^{*}+\frac{b_{j}}{\sqrt{n}}\right|-\left|\alpha_ {j}^{*}\right|\right)\rightarrow_{p}b_{j}\text{sign}(\alpha_{j}^{*})\).
By Slutsky's theorem, we have:
\[\frac{\lambda_{1}}{\sqrt{n}}\hat{w}_{j}\sqrt{n}\left(\left|\alpha_{j}^{*}+ \frac{b_{j}}{\sqrt{n}}\right|-\left|\alpha_{j}^{*}\right|\right)\rightarrow_{p }0.\]
For the behavior of \(R_{5}^{(n)}\), we have
\[\left(\frac{\lambda_{2}}{\sqrt{n}}\right)\left(\sqrt{n}\left(\alpha_{j}^{*} \left(\frac{2b_{j}}{\sqrt{n}}\right)+\frac{b_{j}^{2}}{n}\right)\right) \rightarrow_{p}0,\]
since \(\frac{\lambda_{2}}{\sqrt{n}}\rightarrow_{p}0\) by assumption and \(\sqrt{n}\left(\alpha_{j}^{*}\left(\frac{2b_{j}}{\sqrt{n}}\right)+\frac{b_{j}^{2}}{n}\right)\rightarrow_{p}2\alpha_{j}^{*}b_{j}\), and then applying Slutsky's theorem.
Using the convexity of \(\widehat{\mathcal{W}}(b)\) and following the epi-convergence results of Geyer (1994), we have:
\[\arg\min\widehat{\mathcal{W}}(b)\rightarrow_{d}\arg\min\mathcal{W}(b),\]
where \(\arg\min\widehat{\mathcal{W}}(b)=\sqrt{n}(\hat{\alpha}-\alpha^{*})\). Thus, again by applying Slutsky's theorem, we have
\[\arg\min\mathcal{W}(b_{\mathcal{A}})=\mathbf{F}_{11}^{-1}Z_{\mathcal{A}},\quad Z _{\mathcal{A}}\sim N(0,\mathbf{F}_{11}).\]
This completes the proof of the asymptotic normality part.
Now we show the consistency in variable selection part, i.e. \(\lim_{n}P\{\hat{\alpha}_{\mathcal{A}^{c}}=0\}=1\). Let \(\alpha=(\alpha_{\mathcal{A}},\alpha_{\mathcal{A}^{c}})\) and define a penalized negative log-likelihood function as
\[\tilde{\ell}_{n}(\alpha_{\mathcal{A}},\alpha_{\mathcal{A}^{c}}) =\ell_{n}(\alpha_{\mathcal{A}},\alpha_{\mathcal{A}^{c}})+\lambda _{1}\sum_{j=1}^{p}\hat{w}_{j}|\alpha_{j}|+\lambda_{2}\sum_{j=1}^{p}\alpha_{j}^{ 2},\] \[=\ell_{n}(\alpha_{\mathcal{A}},\alpha_{\mathcal{A}^{c}})+\lambda _{1}\sum_{j\in\mathcal{A}\cup\mathcal{A}^{c}}\hat{w}_{j}|\alpha_{j}|+\lambda_ {2}\sum_{j\in\mathcal{A}\cup\mathcal{A}^{c}}\alpha_{j}^{2},\]
where \(\ell_{n}\) is the negative log-likelihood function defined in Equation 1. Thus, to prove sparsity it suffices to show that \(\tilde{\ell}_{n}(\alpha_{\mathcal{A}},\alpha_{\mathcal{A}^{c}})-\tilde{\ell}_{n} (\alpha_{\mathcal{A}},0)>0\) with probability tending to \(1\) as \(n\to\infty\). We observe that
\[\tilde{\ell}_{n}(\alpha_{\mathcal{A}},\alpha_{\mathcal{A}^{c}})- \tilde{\ell}_{n}(\alpha_{\mathcal{A}},0) =[\ell_{n}(\alpha_{\mathcal{A}},\alpha_{\mathcal{A}^{c}})-\ell_{ n}(\alpha_{\mathcal{A}},0)]\] \[+\lambda_{1}\left[\sum_{j\in\mathcal{A}\cup\mathcal{A}^{c}} \hat{w}_{j}|\alpha_{j}|-\sum_{j\in\mathcal{A}}\hat{w}_{j}|\alpha_{j}|\right]+ \lambda_{2}\left[\sum_{j\in\mathcal{A}\cup\mathcal{A}^{c}}\alpha_{j}^{2}-\sum_ {j\in\mathcal{A}}\alpha_{j}^{2}\right]\] \[=[\ell_{n}(\alpha_{\mathcal{A}},\alpha_{\mathcal{A}^{c}})-\ell_ {n}(\alpha_{\mathcal{A}},0)]+\lambda_{1}\sum_{j\in\mathcal{A}^{c}}\hat{w}_{j} |\alpha_{j}|+\lambda_{2}\sum_{j\in\mathcal{A}^{c}}\alpha_{j}^{2}.\]
By the mean value theorem, we have:
\[\ell_{n}(\alpha_{\mathcal{A}},\alpha_{\mathcal{A}^{c}})-\ell_{n}(\alpha_{ \mathcal{A}},0)=\left[\frac{\partial\ell_{n}(\alpha_{\mathcal{A}},\xi)}{ \partial\alpha_{\mathcal{A}^{c}}}\right]^{T}\alpha_{\mathcal{A}^{c}},\]
for some \(\|\xi\|\leq\|\alpha_{\mathcal{A}^{c}}\|\). By the mean value theorem again, we have:
\[\left\|\frac{\partial\ell_{n}(\alpha_{\mathcal{A}},\xi)}{ \partial\alpha_{\mathcal{A}^{c}}}-\frac{\partial\ell_{n}(\alpha_{\mathcal{A}} ^{*},0)}{\partial\alpha_{\mathcal{A}^{c}}}\right\| =\left\|\left[\frac{\partial\ell_{n}(\alpha_{\mathcal{A}},\xi)}{ \partial\alpha_{\mathcal{A}^{c}}}-\frac{\partial\ell_{n}(\alpha_{\mathcal{A}},0)}{\partial\alpha_{\mathcal{A}^{c}}}\right]+\left[\frac{\partial\ell_{n}( \alpha_{\mathcal{A}},0)}{\partial\alpha_{\mathcal{A}^{c}}}-\frac{\partial\ell _{n}(\alpha_{\mathcal{A}}^{*},0)}{\partial\alpha_{\mathcal{A}^{c}}}\right]\right\|\] \[\leq\left\|\frac{\partial\ell_{n}(\alpha_{\mathcal{A}},\xi)}{ \partial\alpha_{\mathcal{A}^{c}}}-\frac{\partial\ell_{n}(\alpha_{\mathcal{A}},0)}{\partial\alpha_{\mathcal{A}^{c}}}\right\|+\left\|\frac{\partial\ell_{n}( \alpha_{\mathcal{A}},0)}{\partial\alpha_{\mathcal{A}^{c}}}-\frac{\partial\ell _{n}(\alpha_{\mathcal{A}}^{*},0)}{\partial\alpha_{\mathcal{A}^{c}}}\right\|\] \[\leq\left(\sum_{i=1}^{n}M_{1}(x_{i})\right)\|\xi\|+\left(\sum_{i =1}^{n}M_{1}(x_{i})\right)\|\alpha_{\mathcal{A}}-\alpha_{\mathcal{A}}^{*}\| \,,\quad\text{by condition (C.2)}\] \[=(\|\xi\|+\|\alpha_{\mathcal{A}}-\alpha_{\mathcal{A}}^{*}\|)\left( \sum_{i=1}^{n}M_{1}(x_{i})\right)=(\|\xi\|+\|\alpha_{\mathcal{A}}-\alpha_{ \mathcal{A}}^{*}\|)\,O_{p}(n).\]
Thus, the limiting behavior of \(\left(\|\xi\|+\|\alpha_{\mathcal{A}}-\alpha_{\mathcal{A}}^{*}\|\right)O_{p}(n)\) depends on whether \(j\in\mathcal{S}\) or \(j\in\mathcal{I}\).
If \(j\in\mathcal{S}\), we have \(\|\xi\|\leq\|\alpha_{\mathcal{S}}\|=O_{p}(n^{-1/2})\). Thus,
\[\left\|\frac{\partial\ell_{n}(\alpha_{\mathcal{A}},\xi)}{\partial\alpha_{ \mathcal{A}^{c}}}-\frac{\partial\ell_{n}(\alpha_{\mathcal{A}}^{*},0)}{ \partial\alpha_{\mathcal{A}^{c}}}\right\|\leq(\|\xi\|+\|\alpha_{\mathcal{A}}- \alpha_{\mathcal{A}}^{*}\|)\,O_{p}(n)=O_{p}(n^{1/2}).\]
If \(j\in\mathcal{I}\), we have \(\|\xi\|\leq\|\alpha_{\mathcal{I}}\|=O_{p}(1)\). Thus,
\[\left\|\frac{\partial\ell_{n}(\alpha_{\mathcal{A}},\xi)}{\partial\alpha_{ \mathcal{A}^{c}}}-\frac{\partial\ell_{n}(\alpha_{\mathcal{A}}^{*},0)}{ \partial\alpha_{\mathcal{A}^{c}}}\right\|\leq(\|\xi\|+\|\alpha_{\mathcal{A}}- \alpha_{\mathcal{A}}^{*}\|)\,O_{p}(n)=O_{p}(n).\]
Then, for \(j\in\mathcal{A}^{c}=\mathcal{I}\cup\mathcal{S}\), we have
\[\left\|\frac{\partial\ell_{n}(\alpha_{\mathcal{A}},\xi)}{\partial\alpha_{ \mathcal{A}^{c}}}-\frac{\partial\ell_{n}(\alpha_{\mathcal{A}}^{*},0)}{\partial \alpha_{\mathcal{A}^{c}}}\right\|\leq O_{p}(n).\]
Hence, we have
\[\ell_{n}(\alpha_{\mathcal{A}},\alpha_{\mathcal{A}^{c}})-\ell_{n}(\alpha_{ \mathcal{A}},0)=O_{p}(n)\sum_{j\in\mathcal{A}^{c}}-|\alpha_{j}|.\]
Thus,
\[\tilde{\ell}_{n}(\alpha_{\mathcal{A}},\alpha_{\mathcal{A}^{c}})- \tilde{\ell}_{n}(\alpha_{\mathcal{A}},0) =\sum_{j\in\mathcal{A}^{c}}\left[-|\alpha_{j}|O_{p}(n)+\lambda_{1} \hat{w}_{j}|\alpha_{j}|+\lambda_{2}\alpha_{j}^{2}\right] \tag{5}\] \[=\sum_{j\in\mathcal{A}^{c}}\left[-|\alpha_{j}|O_{p}(n)+\lambda_{1} \left(O_{p}(1)n^{\gamma/2}\right)|\alpha_{j}|+\lambda_{2}\alpha_{j}^{2}\right]. \tag{6}\]
In a shrinking neighborhood of \(0\), \(|\alpha_{j}|O_{p}(n)\leq\lambda_{1}\left(O_{p}(1)n^{\gamma/2}\right)|\alpha_{j}|+\lambda_{2}\alpha_{j}^{2}\) holds in probability. This shows that \(\tilde{\ell}_{n}(\alpha_{\mathcal{A}},\alpha_{\mathcal{A}^{c}})-\tilde{\ell}_{n}(\alpha_{\mathcal{A}},0)>0\) with probability tending to \(1\) as \(n\to\infty\). Let \((\widehat{\alpha}_{\mathcal{A}},0)\) be the minimizer of the penalized negative log-likelihood function \(\tilde{\ell}_{n}(\alpha_{\mathcal{A}},0)\), where \(\tilde{\ell}_{n}(\alpha_{\mathcal{A}},0)\) is a function of \(\alpha_{\mathcal{A}}\). Now it suffices to prove \(\tilde{\ell}_{n}(\alpha_{\mathcal{A}},\alpha_{\mathcal{A}^{c}})-\tilde{\ell}_{n}(\widehat{\alpha}_{\mathcal{A}},0)>0\). By adding and subtracting \(\tilde{\ell}_{n}(\alpha_{\mathcal{A}},0)\), we have:
\[\tilde{\ell}_{n}(\alpha_{\mathcal{A}},\alpha_{\mathcal{A}^{c}})- \tilde{\ell}_{n}(\widehat{\alpha}_{\mathcal{A}},0) =\left[\tilde{\ell}_{n}(\alpha_{\mathcal{A}},\alpha_{\mathcal{A} ^{c}})-\tilde{\ell}_{n}(\alpha_{\mathcal{A}},0)\right]+\left[\tilde{\ell}_{n} (\alpha_{\mathcal{A}},0)-\tilde{\ell}_{n}(\widehat{\alpha}_{\mathcal{A}},0)\right] \tag{7}\] \[\geq\tilde{\ell}_{n}(\alpha_{\mathcal{A}},\alpha_{\mathcal{A}^{c} })-\tilde{\ell}_{n}(\alpha_{\mathcal{A}},0). \tag{8}\]
By the results in Equation 6, the right-hand side of Equation 8 is positive with probability tending to \(1\) as \(n\to\infty\). This completes the proof of the sparsity part.
## Data availability
No data was used for the research described in the article.
## Acknowledgements
This work was funded by grants from New Brunswick Innovation Foundation (NBIF)[FINB TRF2023-007].
|
2309.15058 | Monoidal Structures in Orthogonal Calculus | Orthogonal Calculus, first developed by Weiss in 1991, provides a calculus of
functors for functors from real inner product spaces to spaces. Many of the
functors to which Orthogonal Calculus has been applied since carry an
additional lax symmetric monoidal structure which has so far been ignored. For
instance, the functor $V \mapsto \text{BO}(V)$ admits maps $$\text{BO}(V)
\times \text{BO}(W) \to \text{BO}(V \oplus W)$$ which determine a lax symmetric
monoidal structure.
Our first main result, Corollary 4.2.0.2, states that the Taylor
approximations of a lax symmetric monoidal functor are themselves lax symmetric
monoidal. We also study the derivative spectra of lax symmetric monoidal
functors, and prove in Corollary 5.4.0.1 that they admit
$O(n)$-equivariant structure maps of the form $$\Theta^nF \otimes \Theta^nF \to
D_{O(n)} \otimes \Theta^nF$$ where $D_{O(n)} \simeq S^{\text{Ad}_n}$ is the
Klein-Spivak dualising spectrum of the topological group $O(n)$.
As our proof methods are largely abstract and $\infty$-categorical, we also
formulate Orthogonal Calculus in that language before proving our results. | Leon Hendrian | 2023-09-26T16:39:02Z | http://arxiv.org/abs/2309.15058v2 | # Monoidal Structures in Orthogonal Calculus
###### Abstract
Orthogonal Calculus, first developed by Weiss in 1991 [39], provides a "calculus of functors" for functors from real inner product spaces to spaces. Many of the functors to which Orthogonal Calculus has been applied since carry an additional lax symmetric monoidal structure which has so far been ignored. For instance, the functor \(V\mapsto\mathrm{BO}(V)\) admits maps
\[\mathrm{BO}(V)\times\mathrm{BO}(W)\to\mathrm{BO}(V\oplus W)\]
which determine a lax symmetric monoidal structure.
Our first main result, Corollary 4.2.0.2, states that the Taylor approximations of a lax symmetric monoidal functor are themselves lax symmetric monoidal. We also study the derivative spectra of lax symmetric monoidal functors, and prove in Corollary 5.4.0.1 that they admit \(O(n)\)-equivariant structure maps of the form
\[\Theta^{n}F\otimes\Theta^{n}F\to D_{O(n)}\otimes\Theta^{n}F\]
where \(D_{O(n)}\simeq S^{\mathrm{Ad}_{n}}\) is the Klein-Spivak dualising spectrum of the topological group \(O(n)\).
As our proof methods are largely abstract and \(\infty\)-categorical, we also formulate Orthogonal Calculus in that language before proving our results.
This article is largely identical to the author's PhD thesis [20].
2310.20180 | Coherent population transfer with polariton states in circuit QED | This article proposes a new method to increase the efficiency of stimulated
Raman adiabatic passage (STIRAP) in superconducting circuits using a shortcut
to the adiabaticity (STA) method. The STA speeds up the adiabatic process
before decoherence has a significant effect, thus leading to increased
efficiency. This method achieves fast, high-fidelity coherent population
transfer, known as super-adiabatic STIRAP (saSTIRAP), in a dressed
state-engineered $\Lambda$ system with polariton states in circuit QED. | Madan Mohan Mahana, Sankar Davuluri, Tarak Nath Dey | 2023-10-31T05:06:05Z | http://arxiv.org/abs/2310.20180v1 | # Coherent population transfer with polariton states in circuit QED
###### Abstract
This article proposes a new method to increase the efficiency of stimulated Raman adiabatic passage (STIRAP) in superconducting circuits using a shortcut to the adiabaticity (STA) method. The STA speeds up the adiabatic process before decoherence has a significant effect, thus leading to increased efficiency. This method achieves fast, high-fidelity coherent population transfer, known as super-adiabatic STIRAP (saSTIRAP), in a dressed state-engineered \(\Lambda\) system with polariton states in circuit QED.
## I Introduction
Superconducting quantum circuits (SQCs) are considered the leading solid-state quantum platforms owing to their extensive applications in quantum information processing and microwave quantum optics [1; 2]. SQCs operated in state-of-the-art dilution refrigerators have minimal decoherence rates at millikelvin temperatures, which is key to the successful realization of quantum experiments. The qubits of Josephson-junction-based SQCs at cryogenic temperatures are the heart of quantum processors. Circuit quantum electrodynamics (circuit QED), with its striking achievements in low losses, radiation confinement, scalability, and robustness, has emerged as a new field of research that studies the interaction of superconducting artificial atoms (SAAs) with microwave photons in SQCs [3]. The high coherence time and tunability of qubits/atoms are paramount for quantum experiments. Natural atoms have very high coherence times, but precisely controlling their quantum parameters is extremely difficult. Artificially engineered analogues of atoms in solid-state quantum platforms enable us to control the coherence time and tunability to our liking [4]. SAAs like the transmon [5] and fluxonium [6] are the best among the currently available artificial atoms in terms of coherence time and tunability.
In quantum optics, counter-intuitive phenomena such as electromagnetically induced transparency (EIT) [7], Autler-Townes splitting (ATS) [8], and coherent population trapping (CPT) [9] play a significant role in the precise control of the optical properties of a medium. These atomic coherence-based experiments demand an atomic configuration with a long atomic coherence lifetime. A three-level \(\Lambda\) system containing two long-lived lower-level states fulfills this criterion. Hence, three-level \(\Lambda\) systems are more suitable for realizing EIT, ATS, and CPT than ladder (\(\Xi\)) and V-type configurations. EIT is an essential mechanism for suppressing a weak probe field's absorption in the presence of a strong control field. The strong control field opens up an extra excitation pathway, which destructively interferes with the probe-assisted pathway. As a result, a narrow transparency window appears in the probe field's absorption spectrum [10]. ATS is similarly associated with a dip in the spectral line of the probe field, resulting from the splitting of the spectral line by a resonant strong control field [11]. Three-level quantum systems in SQCs have been used to demonstrate EIT [12; 13], ATS [14; 15; 16], and CPT [17].
STIRAP is another example of a counter-intuitive phenomenon in which robust population transfer between two nondegenerate metastable levels is possible [18]. In STIRAP, a suitable choice of two time-dependent coherent pulses coupled to the two arms of a three-level \(\Lambda\) system allows complete population transfer from the ground state to the target metastable state without populating the intermediate excited state. STIRAP has been experimentally realized in many quantum optical systems, including SQCs [19; 20]. Many studies have been devoted to STIRAP in SQCs [21; 22; 23; 24; 25]. STIRAP-based population transfer has numerous applications in quantum optics and quantum information processing [26].
Adiabatic processes are associated with a slow change of controls, which leaves some dynamical properties invariant. In the quantum regime, slow processes with long operational times are affected by decoherence, which produces losses and errors. The STA methods are well-established techniques to speed up adiabatic protocols while achieving the same final results [27]. Counterdiabatic driving (CD) [28], the Lewis-Riesenfeld invariant (LRI) method [29; 30], and the dressed-state approach [31] are useful STA techniques [32] for speeding up adiabatic quantum protocols [33]. Remarkably, STIRAP can be sped up by applying STA methods. The successful application of the CD protocol to STIRAP has been implemented in a three-level ladder-type superconducting qutrit [34]. There, the super-adiabatic population transfer (saSTIRAP) from the ground state to the second excited state is accomplished via a two-photon process. However, the two-photon detuning produces small ac-Stark shifts of all the energy levels, which is a drawback of that scheme. This issue can be resolved by dynamically modifying the phases of all applied drives. The application of a CD control field in a \(\Lambda\) system, driving the transition from the initial ground state to the target metastable state, forms a closed-loop \(\Delta\) system. To the best of our
knowledge, there has not been a theoretical investigation of saSTIRAP with a closed loop \(\Lambda\) system in SQCs yet.
The experimental realization of \(\Lambda\) systems with metastable states in SQCs has been elusive. The implementation of a dressed-state engineered, impedance-matched \(\Lambda\) system in the polariton basis has been investigated [35]. It opened up the avenue for theoretical and experimental demonstrations of several quantum optical applications with \(\Lambda\) systems in SQCs [36; 37; 38]. The implementation of EIT with an identical system has been theoretically proposed in [39] and has been experimentally realized in polariton states generated with an rf-biased two-level system coupled to a resonator [40]. Since all transitions are dipole-allowed, a closed-loop \(\Delta\) configuration is possible, which is essentially impossible in natural atoms. We exploit this advantage to theoretically study the possibility of the experimental realization of STIRAP and saSTIRAP protocols with a driven circuit QED system. We use the doubly-dressed polariton states instead of the qutrit states [34] used recently to study the coherent transfer of population in SQCs. SAAs like flux qubits [41] are operated away from the sweet spot to break the parity-selection rule and form \(\Delta\)-type configurations. However, the driven circuit QED system suggested in this paper remedies this issue while still maintaining the coherence properties of the sweet spot.
The paper is organized as follows. In section II, we describe the theoretical model of the Hamiltonian and the tunable transition rates of a \(\Lambda\) system in circuit QED. We discuss the theoretical proposal for implementing CD protocol in section III. Section IV thoroughly discusses the significant results. Finally, we conclude in section V.
## II Theoretical model
This section describes the theoretical model for realizing a \(\Lambda\) system in circuit QED. First, we present the Hamiltonian of the model and then derive the expressions for the transition rates of the \(\Lambda\) system.
### Model Hamiltonian
The ability to precisely control a quantum system's population at various levels is challenging due to decay-induced population loss. The STIRAP is an indispensable tool for transferring the population to the desired levels and overcoming the decay-induced population loss. A counter-intuitive pair of laser pulses is used in the STIRAP. A pump pulse couples between an initial populated ground state with an excited state, whereas an advanced Stokes pulse couples an unoccupied metastable state and an excited state. The two lower-level metastable states coupling with the intermediate excited state by the coherent pulses form \(\Lambda\) type configuration. In this level geometry, the efficiency and robustness of the population transfer are sensitive to the overlaps between the Stoke and the pump fields and the individual pulse area. The slow rate of population transfer in STIRAP is the obstacle to efficient population transfer and the reason behind the population loss. Hence, faster population transfer processes such as saSTIRAP can avoid inherent decay and decoherence limitations. This work explores speeding up STIRAP passage in circuit quantum electrodynamics by considering a closed loop \(\Lambda\) system in which the ground and meta-stable states can be directly coupled. The scheme for a coupled transmon-cavity system is shown schematically in Fig. 1. The cavity and the transmon are connected to a semi-infinite waveguide 1 and 2, respectively. From waveguide 2, a microwave field with coupling strength \(\Omega_{d}\) drives a two-level transmon with frequency \(\omega_{d}\), further capacitively coupled to a single mode cavity with coupling strength \(g\). The total Hamiltonian of the system can be cast in the following form
\[H_{0} =\frac{\hbar}{2}\omega_{q}\sigma_{z}+\hbar\omega_{r}\left(a^{ \dagger}a+\frac{1}{2}\right)+\hbar g(a^{\dagger}\sigma_{-}+a\sigma_{+})\] \[+\hbar\Omega_{d}\left(\sigma_{-}e^{i\omega_{d}t}+\sigma_{+}e^{-i \omega_{d}t}\right), \tag{1}\]
where \(\omega_{q}\) and \(\omega_{r}\) stand for the transmon and the cavity frequencies, respectively. The annihilation and creation operator of the cavity are denoted by \(a\) and \(a^{\dagger}\), whereas the atomic lowering and raising operators for transmon are \(\sigma_{-}\) and \(\sigma_{+}\). The interaction strength and frequency of the classical microwave field are expressed by the parameters \(\Omega_{d}\) and \(\omega_{d}\), respectively. We eliminate the explicit time-dependent factors of the Hamiltonian by transforming the Hamiltonian into a rotating frame using a unitary operator \(U=exp(-i\omega_{d}(\sigma_{z}/2+a^{\dagger}a)t)\) and obtain an effective Hamiltonian
\[H_{RWA}= \frac{\hbar}{2}\tilde{\omega}_{q}\sigma_{z}+\hbar\tilde{\omega}_{ r}\left(a^{\dagger}a+\frac{1}{2}\right)+\hbar g(a^{\dagger}\sigma_{-}+a \sigma_{+})\] \[+\hbar\Omega_{d}[\sigma_{-}+\sigma_{+}], \tag{2}\]
Figure 1: A schematic lumped-element circuit diagram of a driven circuit QED system. Here, a classical microwave field with coupling strength \(\Omega_{d}\) drives a transmon with frequency \(\omega_{d}\), further capacitively coupled to a single mode cavity with coupling strength \(g\). Both the transmon and cavity are capacitively coupled to two independent semi-infinite waveguides.
under the rotating wave approximation. Here, \(\tilde{\omega}_{q}=\omega_{q}-\omega_{d}\), \(\tilde{\omega}_{r}=\omega_{r}-\omega_{d}\), and \(\tilde{\Delta}=\tilde{\omega}_{r}-\tilde{\omega}_{q}\) is the cavity-transmon detuning. The first three terms in the Hamiltonian can be identified as the celebrated Jaynes-Cummings model. The last term represents the interaction between the external classical microwave drive field and the two-level transmon. The eigenstates of the Jaynes-Cummings Hamiltonian are known as the dressed states and can be written as
\[|+,n\rangle =\cos\frac{\theta_{n}}{2}|e,n\rangle+\sin\frac{\theta_{n}}{2}|g,n+1\rangle, \tag{3}\] \[|-,n\rangle =-\sin\frac{\theta_{n}}{2}|e,n\rangle+\cos\frac{\theta_{n}}{2}|g, n+1\rangle, \tag{4}\]
where \(\tan\theta_{n}=-2g\sqrt{n+1}/\tilde{\Delta}\). Here, \(|e,n\rangle\) and \(|g,n\rangle\) denotes that the qubit is in the excited state \(|e\rangle\) and ground state \(|g\rangle\), respectively, whereas the single-mode cavity is in the state \(|n\rangle\). The corresponding eigenvalues of the dressed states are
\[E_{\pm,n}=\hbar\tilde{\omega}_{r}(n+1)\pm\frac{\hbar}{2}\sqrt{\tilde{\Delta}^{ 2}+4g^{2}(n+1)}. \tag{5}\]
Further mixing of these dressed states in the dispersive regime (\(g\ll\tilde{\Delta}\)) by the external microwave field applied to drive the transmon gives doubly dressed polariton states. Polaritons are referred to as quasi-particles carrying elementary excitations of the light-matter interaction. These polariton states can be denoted by \(|i\rangle,\ |j\rangle\) with the corresponding eigenenergies \(\omega_{i}\), \(\omega_{j}\) (\(i,j=1,2,3,4,..\)). The polariton states can be engineered to obtain a nested four-level system consisting of the lowest four eigenstates of (2) by restricting the driving field to satisfy the condition \(\omega_{q}-3\chi<\omega_{d}<\omega_{q}-\chi\), where \(\chi=g^{2}/\tilde{\Delta}\) denotes the dispersive frequency shift [39; 35]. Under the so-called nesting regime, the levels \(|1\rangle,|3\rangle\) (or \(|4\rangle\)), and \(|2\rangle\) form a \(\Lambda\) system configuration as shown below in Fig. (2).
### Transition rates
Waveguides 1 and 2 are coupled to the cavity and transmon to apply drive and readout signals. We consider two waveguides as external environments, and the total Hamiltonian of the open quantum system can be written as
\[H_{T}=H_{S}+H_{E}+H_{I}, \tag{6}\]
where \(H_{S}\), \(H_{E}\), and \(H_{I}\) denote Hamiltonians of the system, the environment, and the system-environment interaction, respectively. We consider Eq. (1) as the Hamiltonian of the system \(H_{S}\). The Hamiltonian of the environment \(H_{E}\) is expressed as
\[H_{E}=\hbar\int\omega b^{\dagger}(\omega)b(\omega)d\omega+\hbar\int\omega^{ \prime}c^{\dagger}(\omega^{\prime})c(\omega^{\prime})d\omega^{\prime}, \tag{7}\]
where \(b\) and \(c\) denote the annihilation operator in waveguide 1 and waveguide 2, respectively, and \(b^{\dagger}\) and \(c^{\dagger}\) represent the corresponding creation operator. Finally, the system-environment interaction Hamiltonian is denoted by \(H_{I}\) and can be defined as
\[H_{I}= \hbar\left[\int d\omega K(\omega)b^{\dagger}(\omega)a+H.C.\right] \tag{8}\] \[+ \hbar\left[\int d\omega^{\prime}\eta(\omega^{\prime})c^{\dagger} (\omega^{\prime})\sigma_{-}+H.C.\right].\]
The Hamiltonians \(H_{E}\) and \(H_{I}\) remain unchanged in the rotating frame. For simplicity, we consider flat spectra in the environments so that both \(K(\omega)\) and \(\eta(\omega^{\prime})\) can be taken as constants. Introducing the first Markov approximation, we can write
\[K(\omega)= \sqrt{\frac{\kappa}{2\pi}}, \tag{9}\] \[\eta(\omega^{\prime})= \sqrt{\frac{\gamma^{q}}{2\pi}}, \tag{10}\]
where \(\kappa\) and \(\gamma^{q}\) denote the decay rate of the cavity and transmon, respectively, into waveguides 1 and 2. Let us write the operators \(a\) and \(\sigma_{-}\) in the polariton basis
\[a= \sum_{ij}\langle i|a|j\rangle\sigma_{ij}, \tag{11}\] \[\sigma_{-}= \sum_{ij}\langle i|\sigma_{-}|j\rangle\sigma_{ij}, \tag{12}\]
where \(|i\rangle\), \(|j\rangle\) denote the polariton states and \(\sigma_{ij}=|i\rangle\langle j|\). In the polariton basis, the Hamiltonian \(H_{I}\) can be recast into the following form
\[H_{I}= \hbar\int d\omega\sum_{ij}\left[\sqrt{\frac{\kappa_{ij}}{2\pi}}b^ {\dagger}(\omega)\sigma_{ij}+H.C.\right] \tag{13}\] \[+\hbar\int d\omega^{\prime}\sum_{ij}\left[\sqrt{\frac{\gamma^{q}_{ ij}}{2\pi}}c^{\dagger}(\omega^{\prime})\sigma_{ij}+H.C.\right],\]
where \(\kappa_{ij}\) and \(\gamma^{q}_{ij}\) are the radiative decay rates into waveguide 1 and waveguide 2, respectively, for the transition from polariton state \(|i\rangle\) to \(|j\rangle\). The transition rates \(\kappa_{ij}\), \(\gamma^{q}_{ij}\) are defined as
\[\kappa_{ij}= \kappa|\langle i|a^{\dagger}|j\rangle|^{2}, \tag{14}\] \[\gamma^{q}_{ij}= \gamma^{q}|\langle i|\sigma_{+}|j\rangle|^{2}. \tag{15}\]
Figure 2: Energy-level diagram of the doubly-dressed polariton states in the driven circuit QED system.
Hence, we can determine the total radiative decay rate \(\gamma_{ij}\) for transition between polariton states \(|i\rangle\) and \(|j\rangle\) as follows
\[\gamma_{ij}=\kappa_{ij}+\gamma_{ij}^{q}=\kappa C_{ij}^{2}+\gamma^{q}Q_{ij}^{2}, \tag{16}\]
where the parameters \(C_{ij}=|\langle i|a^{\dagger}|j\rangle|\) and \(Q_{ij}=|\langle i|\sigma_{+}|j\rangle|\) represent the transition matrix elements corresponding to external drives applied to the cavity and the qubit respectively.
The energies of the polariton states can be tuned by the frequency \(\omega_{d}\) and the Rabi frequency \(\Omega_{d}\) of the classical microwave drive field applied to the transmon through waveguide 2. Thus, the decay rates \(\gamma_{ij}\) can also be tuned by varying these parameters. By assigning constant values to these parameters, one can design a \(\Lambda\) system with fixed energy levels and transition rates. The numerically computed values of the relevant parameters for our \(\Lambda\) system are tabulated in Table 1.
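As a numerical cross-check of the quantities listed in Table 1, the Hamiltonian (2) can be diagonalized and the matrix elements \(C_{ij}\), \(Q_{ij}\) and rates \(\gamma_{ij}\) of Eq. (16) evaluated directly. The sketch below uses QuTiP and NumPy with the parameter values quoted in the caption of Table 1; the Fock-space cutoff and the identification of the four lowest-energy eigenstates with the polariton levels are our own numerical choices rather than details taken from the paper.

```python
# Hedged QuTiP/NumPy sketch: diagonalize the RWA Hamiltonian (2), take the
# lowest four doubly-dressed (polariton) states, and evaluate the matrix
# elements C_ij, Q_ij and the radiative rates of Eq. (16).
import numpy as np
from qutip import destroy, qeye, sigmam, sigmaz, tensor

GHz, MHz = 2 * np.pi, 2 * np.pi * 1e-3          # angular frequencies, time in ns
wq, wr, wd = 5.0 * GHz, 10.0 * GHz, 4.9 * GHz
g, Om_d = 0.5 * GHz, 30.0 * MHz
kappa, gamma_q = 3.0 * MHz, 0.2 * MHz
N = 10                                           # cavity Fock-space cutoff (assumed)

a = tensor(destroy(N), qeye(2))
sm = tensor(qeye(N), sigmam())
sz = tensor(qeye(N), sigmaz())

H = (0.5 * (wq - wd) * sz + (wr - wd) * (a.dag() * a + 0.5)
     + g * (a.dag() * sm + a * sm.dag()) + Om_d * (sm + sm.dag()))

evals, evecs = H.eigenstates()
vecs = np.column_stack([v.full().ravel() for v in evecs[:4]])   # lowest 4 states

C = np.abs(vecs.conj().T @ a.dag().full() @ vecs)   # C_ij = |<i|a^dagger|j>|
Q = np.abs(vecs.conj().T @ sm.dag().full() @ vecs)  # Q_ij = |<i|sigma_+|j>|
gamma_rates = kappa * C**2 + gamma_q * Q**2          # Eq. (16)
print(np.round(C, 2))
print(np.round(gamma_rates / MHz, 2))                # rates in units of 2*pi MHz
```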
## III Counteradiabatic driving
The STIRAP process can be implemented with the three-level \(\Lambda\) system described in section II. The matrix representation of the STIRAP Hamiltonian under rotating wave approximation is
\[H(t)=\frac{\hbar}{2}\begin{pmatrix}0&\Omega_{p}(t)&0\\ \Omega_{p}(t)&2\Delta&\Omega_{s}(t)\\ 0&\Omega_{s}(t)&2\delta\end{pmatrix}, \tag{17}\]
where \(\Omega_{p}(t)\) and \(\Omega_{s}(t)\) denote the coupling strengths of the time-dependent pump and Stokes fields driving the \(|3\rangle\leftrightarrow|1\rangle\) and \(|3\rangle\leftrightarrow|2\rangle\) transitions with frequencies \(\omega_{p}\) and \(\omega_{s}\), respectively. The parameters \(\Delta\) and \(\delta\) denote the one-photon detuning \(\Delta=(\omega_{31}-\omega_{p})\) and the two-photon detuning \(\delta=(\omega_{31}-\omega_{p})-(\omega_{32}-\omega_{s})\), respectively. Here, the energy levels of the \(\Lambda\) system satisfy \(E_{1}<E_{2}<E_{3}\). The one-photon detuning \(\Delta\) differs from the cavity-transmon detuning \(\tilde{\Delta}\) discussed in the last section. For a perfectly resonant STIRAP process, _i.e._, \(\Delta=\delta=0\), the instantaneous eigenvalues of the above Hamiltonian are \(E_{0}=0\) and \(E_{\pm}=\pm\hbar\Omega_{0}(t)/2\) with \(\Omega_{0}(t)=\sqrt{\Omega_{p}^{2}(t)+\Omega_{\rm s}^{2}(t)}\). The corresponding instantaneous eigenstates are written as
\[|n_{0}(t)\rangle=\begin{pmatrix}\cos\theta(t)\\ 0\\ -\sin\theta(t)\end{pmatrix},|n_{\pm}(t)\rangle=\frac{1}{\sqrt{2}}\begin{pmatrix} \sin\theta(t)\\ \pm 1\\ \cos\theta(t)\end{pmatrix}, \tag{18}\]
where \(\tan\theta(t)=\Omega_{p}(t)/\Omega_{s}(t)\). Perfect adiabatic population transfer from state \(|1\rangle\) to \(|2\rangle\) can be achieved by following the dark state \(|n_{0}(t)\rangle\), provided the local adiabatic condition \(|\dot{\theta}|\ll|\Omega_{0}|\) is fulfilled [42].
To speed up the STIRAP protocol, one can apply an additional coupling field driving the \(|2\rangle\leftrightarrow|1\rangle\) transition [43; 28; 44]. The additional drive is termed a counter-diabatic drive (CD) or transition-less quantum drive (TQD), and it can be expressed as
\[H^{CD}(t)=i\hbar\sum_{n}[|\partial_{t}n(t)\rangle\langle n(t)|-\langle n(t)| \partial_{t}n(t)\rangle|n(t)\rangle\langle n(t)|]. \tag{19}\]
We derive the Hamiltonian \(H^{CD}(t)\) using the adiabatic basis states \(|n(t)\rangle=(|n_{0}(t)\rangle,|n_{\pm}(t)\rangle)\), which reads
\[H^{CD}(t)=\frac{\hbar}{2}\begin{pmatrix}0&0&i\Omega_{a}(t)\\ 0&0&0\\ -i\Omega_{a}(t)&0&0\end{pmatrix}, \tag{20}\]
where \(\Omega_{a}(t)=2\dot{\theta}(t)\) and the overdot denotes the first derivative with respect to time. We assume that the external drives are applied to the dressed state-engineered \(\Lambda\) system with polariton states by driving the cavity and the transmon qubit. The Hamiltonian representing the interaction between the system and the external drive fields is thus given by
\[H_{d}=\frac{\hbar}{2}(A_{p}a^{\dagger}e^{-i\omega_{p}t}+A_{s}a^{\dagger}e^{- i\omega_{s}t}+A_{a}\sigma_{+}e^{-i\omega_{a}t}+H.C.), \tag{21}\]
where the pump and Stokes fields with frequencies \(\omega_{p}\) and \(\omega_{s}\) are coupled to the cavity with coupling strengths \(A_{p}\) and \(A_{s}\), respectively. The additional drive field, _i.e.,_ the counterdiabatic drive with frequency \(\omega_{a}\), is coupled to the transmon with coupling strength \(A_{a}\). Considering that the pump field, the Stokes field, and the counterdiabatic drive field drive the \(|3\rangle\leftrightarrow|1\rangle\), \(|3\rangle\leftrightarrow|2\rangle\), and \(|2\rangle\leftrightarrow|1\rangle\) transitions in the polariton basis, respectively, we define the amplitudes of the respective Rabi frequencies of the external drive fields in the polariton basis as
\[\Omega_{p}\approx A_{p}C_{31},\ \Omega_{s}\approx A_{s}C_{32},\ \Omega_{a} \approx A_{a}Q_{21}, \tag{22}\]
where the parameters \(C_{31}\), \(C_{32}\) and \(Q_{21}\) are the transition matrix elements already defined in Sec. II. Here, \(\Omega_{p},\Omega_{s}\), and \(\Omega_{a}\) denote the Rabi frequencies of the external drives coupled to the \(\Lambda\) system in the polariton basis, as shown in Fig. 3.
| Parameter | Value | Parameter | Value |
| --- | --- | --- | --- |
| \(C_{31}\) | 0.77 | \(\omega_{31}\) | 5101 |
| \(C_{32}\) | 0.64 | \(\omega_{32}\) | 5023 |
| \(C_{21}\) | 0.08 | \(\omega_{21}\) | 78 |
| \(Q_{31}\) | 0.00 | \(\gamma_{31}\) | 7.47 |
| \(Q_{32}\) | 0.10 | \(\gamma_{32}\) | 5.18 |
| \(Q_{21}\) | 0.82 | \(\gamma_{21}\) | 0.96 |

Table 1: Numerically calculated values for the transition matrix elements (\(C_{ij},Q_{ij}\)), radiative transition rates (\(\gamma_{ij}\)), and the transition frequencies (\(\omega_{ij}=\omega_{i}-\omega_{j}\)) in the polariton basis. The quantities \(\gamma_{21}\), \(\gamma_{32}\), \(\gamma_{31}\), \(\omega_{21}\), \(\omega_{32}\), and \(\omega_{31}\) are given in units of \(2\pi\) MHz. The parameters \(\omega_{q}/2\pi=5\) GHz, \(\omega_{r}/2\pi=10\) GHz, \(\omega_{d}/2\pi=4.9\) GHz, \(g/2\pi=0.5\) GHz, \(\Omega_{d}/2\pi=30\) MHz, \(\kappa/2\pi=3\) MHz, and \(\gamma^{q}/2\pi=0.2\) MHz, together with the exact eigenstates of Hamiltonian (2), are used for the numerical calculation of the above parameters.
Here, the Gaussian envelopes of the pump and Stokes fields are taken as
\[\Omega_{p}(t)= \Omega_{p}e^{-\frac{t^{2}}{2\sigma^{2}}}, \tag{23}\] \[\Omega_{s}(t)= \Omega_{s}e^{-\frac{(t-t_{s})^{2}}{2\sigma^{2}}}. \tag{24}\]
Using Eqs. (17)-(20), one can obtain
\[\Omega_{a}(t)=-\frac{t_{s}}{\sigma^{2}}\operatorname{sech}\bigg{[}-\frac{t_{s} }{\sigma^{2}}\bigg{(}t-\frac{t_{s}}{2}\bigg{)}\bigg{]}, \tag{25}\]
with the assumption \(\Omega_{p}=\Omega_{s}\) for brevity. Eq. (25) shows that, for the above pump and Stokes fields, the counterdiabatic drive should have a hyperbolic-secant shape with peak Rabi frequency \(\Omega_{a}=-t_{s}/\sigma^{2}\). Thus, we can write the total Hamiltonian of the system under the rotating wave approximation as
\[\tilde{H}(t)=\frac{\hbar}{2}[\Omega_{p}(t)|3\rangle\langle 1|+\Omega_{s}(t)|3 \rangle\langle 2|-i\Omega_{a}(t)|2\rangle\langle 1|+H.C.], \tag{26}\]
where we consider the resonant drive conditions (\(\delta=\Delta=0\)), \(\omega_{p}=\omega_{31}\), \(\omega_{s}=\omega_{32}\) and \(\omega_{a}=\omega_{21}\). In order to solve the time evolution of the system we adopt the Lindblad master equation [45; 46]
\[\dot{\rho}=\frac{1}{i\hbar}[\tilde{H},\rho]+\sum_{j=1}^{3}\mathcal{L}( \mathcal{O}_{j})\rho, \tag{27}\]
where \(\mathcal{L}(\mathcal{O}_{j})\rho=(2\mathcal{O}_{j}\rho\mathcal{O}_{j}^{ \dagger}-\rho\mathcal{O}_{j}^{\dagger}\mathcal{O}_{j}-\mathcal{O}_{j}^{ \dagger}\mathcal{O}_{j}\rho)/2\). Here, the operators \(\mathcal{O}_{j}\) denote the jump operators given by \(\mathcal{O}_{1}=\sqrt{\gamma_{31}}|1\rangle\langle 3|\), \(\mathcal{O}_{2}=\sqrt{\gamma_{32}}|2\rangle\langle 3|\) and \(\mathcal{O}_{3}=\sqrt{\gamma_{21}}|1\rangle\langle 2|\). We substitute the drive fields given in Eqs. (23-25) in Eq. (26) and solve the time evolution of the system using the Lindblad master equation given in Eq. (27). The well-established mesolve routine in QuTiP [47; 48] is used for solving the time-dependent Lindblad master equations. The numerical results are discussed in the following section.
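As an illustration, the master-equation simulation can be set up directly with QuTiP's mesolve. The sketch below uses the pulse parameters quoted in the text (\(\sigma=20\) ns, \(t_{s}=-30\) ns, \(\Omega_{p}/2\pi=\Omega_{s}/2\pi=25.5\) MHz) and the rates of Table 1; the time grid, units, and integration window are our own choices, so the printed numbers are indicative rather than a claimed reproduction of Figs. 4-6.

```python
# Hedged QuTiP sketch of the three-level Lindblad simulation, Eqs. (23)-(27).
# Time is in ns and angular frequencies in rad/ns.
import numpy as np
from qutip import basis, mesolve

MHz = 2 * np.pi * 1e-3
Om_p = Om_s = 25.5 * MHz                    # peak pump/Stokes Rabi frequencies
sigma, ts = 20.0, -30.0                     # pulse width and separation (ns)
g31, g32, g21 = 7.47 * MHz, 5.18 * MHz, 0.96 * MHz

k1, k2, k3 = (basis(3, i) for i in range(3))     # polariton states |1>, |2>, |3>

def pump(t, args=None):                      # Eq. (23)
    return Om_p * np.exp(-t**2 / (2 * sigma**2))

def stokes(t, args=None):                    # Eq. (24)
    return Om_s * np.exp(-(t - ts)**2 / (2 * sigma**2))

def cd(t, args=None):                        # Eq. (25)
    return (-ts / sigma**2) / np.cosh((-ts / sigma**2) * (t - ts / 2))

H_stirap = [[0.5 * (k3 * k1.dag() + k1 * k3.dag()), pump],
            [0.5 * (k3 * k2.dag() + k2 * k3.dag()), stokes]]
H_sa = H_stirap + [[0.5 * (-1j * k2 * k1.dag() + 1j * k1 * k2.dag()), cd]]

c_ops = [np.sqrt(g31) * k1 * k3.dag(),       # |3> -> |1>
         np.sqrt(g32) * k2 * k3.dag(),       # |3> -> |2>
         np.sqrt(g21) * k1 * k2.dag()]       # |2> -> |1>

tlist = np.linspace(-100.0, 60.0, 801)
for H, label in [(H_stirap, "STIRAP"), (H_sa, "saSTIRAP")]:
    result = mesolve(H, k1, tlist, c_ops, e_ops=[k2 * k2.dag()])
    print(label, "final population in |2>:", round(float(result.expect[0][-1]), 3))
```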
## IV Results and discussion
We split this section into three parts describing the significant findings of this work. Sec. IV.1 highlights the dynamics of coherent population transfer in our system. The sensitivity of the coherent transfer protocols to variations in the parameters is discussed in Sec. IV.2. We quantitatively compare the efficiencies of the coherent population transfer protocols by numerically computing the fidelity in Sec. IV.3.
### Coherent population transfer
We investigate the population dynamics in each energy level of the \(\Lambda\) system described in Section III. Fig. 4(a) shows the pulse sequence of the three external drive fields applied to the polariton-state \(\Lambda\) system. The Lindblad master equations for the STIRAP and saSTIRAP protocols for the \(\Lambda\) system are numerically solved to study the population dynamics. A counterdiabatic drive coupling the \(|2\rangle\leftrightarrow|1\rangle\) transition is applied to the \(\Lambda\) system to realize saSTIRAP. Fig. 4(b) substantiates that one can achieve faster coherent population transfer from level \(|1\rangle\) to \(|2\rangle\) by applying the CD protocol as compared to STIRAP in the \(\Lambda\) system in the polariton basis. The populations \(P_{1},P_{2},\) and \(P_{3}\) denote the populations in the polariton states \(|1\rangle,|2\rangle\), and \(|3\rangle\), which are simply the density matrix elements \(\rho_{11},\rho_{22}\), and \(\rho_{33}\), respectively, in the polariton basis. The numerical results show that up to \(78.81\%\) of the population can be transferred from the ground state to the metastable state by the STIRAP protocol with the used parameters. Furthermore, one can achieve a much higher efficiency of up to \(96.90\%\) population transfer with the saSTIRAP protocol. Moreover, the efficiency of these protocols also depends on other important parameters, such as the pulse amplitudes and widths of the external drive fields and the normalized pulse separation, which we elaborate on in the next section.

Figure 3: The schematic diagram of a three-level \(\Delta\) system driven by three external classical fields \(\Omega_{p}\), \(\Omega_{s}\) and \(\Omega_{a}\), respectively.
### Sensitivity to parameters
This section benchmarks the efficiency of the STIRAP and saSTIRAP protocols against variations in the parameters used for the numerical analysis. At the end of these protocols, we quantify the population transfer efficiency by the final population in the state \(|2\rangle\). Figs. 5(a) and 5(c) show how the efficiency varies in the parameter space of the pulse width \(\sigma\) and the normalized pulse separation \(|t_{s}|/\sigma\) for the STIRAP and saSTIRAP protocols, respectively. The parameters \(\sigma=20\) ns and \(t_{s}=-30\) ns for \(\Omega_{p}/2\pi=\Omega_{s}/2\pi=25.5\) MHz lie in the highly efficient bright yellow regions in both figures. The sensitivity of the STIRAP and saSTIRAP protocols to the one- and two-photon detunings is presented in Figs. 5(b) and 5(d), respectively. Fig. 5(b) shows that the resonant driving condition (\(\Delta=\delta=0\)) is ideal for highly efficient coherent population transfer in the STIRAP protocol. Fig. 5(d) shows that the saSTIRAP protocol is more robust against the one-photon detuning \(\Delta\) than the two-photon detuning \(\delta\). These figures indicate that the saSTIRAP protocol is more efficient and robust than the STIRAP protocol for a \(\Lambda\) system in circuit QED.
### Fidelity
In quantum information theory and quantum mechanics, the quantitative measure of the closeness of a quantum state at a final time '\(t_{f}\)' to the ideal target state is given by fidelity (\(\mathcal{F}\)) [49], and is defined as
\[\mathcal{F}(\rho_{f},\rho_{t})=\left(Tr\sqrt{\sqrt{\rho_{t}}\rho_{f}\sqrt{ \rho_{t}}}\right)^{2}, \tag{28}\]
where the density matrix operators \(\rho_{f}\) and \(\rho_{t}\) describe the quantum state of the system at time '\(t_{f}\)' and the ideal target state (here, \(\rho_{22}\)).
In Fig. 6(a), we have shown the fidelity between the final states of the time-evolution and the target state \(|2\rangle\). One can observe that the fidelity of the saSTIRAP protocol is significantly higher than that of STIRAP over the operation time of the protocols.
Our numerical calculation suggests that the maximum fidelity for the saSTIRAP protocol is \(98.44\%\), much
Figure 5: The efficiency of STIRAP (saSTIRAP) protocol in the parameter space of (a) ((c)) the pulse width (\(\sigma\)) and the normalized pulse separation (\(|t_{s}|/\sigma\)), (b) ((d)) the one-photon detuning (\(\Delta\)) and the two-photon detuning (\(\delta\)). The parameters used for the numerical simulation are \(t_{s}=-30\) ns, \(\sigma=20\) ns, and the parameters used in Fig.4.
Figure 6: (a) Fidelity of the STIRAP (saSTIRAP) protocol with radiative transitions is shown with the solid-yellow (dashed-green) line and the dotted-red (dash-dotted-blue) line shows the fidelity with the radiative transition rates (\(\gamma_{21},\gamma_{32}\), and \(\gamma_{31}\)) set to zero. (b) The time evolution of the populations in STIRAP and saSTIRAP protocols with the radiative transition rates (\(\gamma_{21},\gamma_{32}\), and \(\gamma_{31}\)) set to zero. All the parameters used in Fig.5 are used for the numerical simulation.
higher than the maximum fidelity of \(88.77\%\) for STIRAP, as shown by the corresponding solid and dashed curves in Fig. 6(a). The dotted and dash-dotted curves in Fig. 6(a) show the fidelity of both protocols in the absence of the radiative transitions. We observe a significant increase in the fidelity of each protocol without the radiative decay of the polariton states. Fig. 6(b) shows the time evolution of the populations for each protocol without the radiative transitions, indicating a higher population transfer efficiency. The numerical simulations suggest that a maximum of \(84.88\%\) of the population can be transferred with \(92.13\%\) fidelity with the STIRAP protocol without the radiative decay of the polariton states. The saSTIRAP protocol can transfer a maximum of \(99.99\%\) of the population with \(99.99\%\) fidelity with zero radiative transitions. Thus, highly efficient, robust, and high-fidelity coherent population transfer can be realized by increasing the coherence times of the transmon qubit and the superconducting microwave resonator, which reduces the radiative transitions in the polariton basis.
Other STA techniques could also be implemented to speed up the transfer protocol in such circuit-QED systems, and their efficiency and fidelity compared.
## V Conclusion
In conclusion, we studied the application of STA to STIRAP using a dressed-state-engineered system in circuit QED to achieve fast, high-fidelity coherent population transfer, known as saSTIRAP. An experimental realization of our theoretical proposal may be achieved with currently available SQC technologies. We further showed that the saSTIRAP technique leads to quantum state transfer with better fidelity than STIRAP. The experimental realization of STIRAP and saSTIRAP in SQCs can find valuable applications in designing fast, high-fidelity quantum gates for efficient quantum computing and quantum information processing [50, 51].
|
2302.00041 | Band structures and $\mathbb{Z}_2$ invariants of two-dimensional
transition metal dichalcogenide monolayers from fully-relativistic
Dirac-Kohn-Sham theory using Gaussian-type orbitals | Two-dimensional (2D) materials exhibit a wide range of remarkable phenomena,
many of which owe their existence to the relativistic spin-orbit coupling (SOC)
effects. To understand and predict properties of materials containing heavy
elements, such as the transition-metal dichalcogenides (TMDs), relativistic
effects must be taken into account in first-principles calculations. We present
an all-electron method based on the four-component Dirac Hamiltonian and
Gaussian-type orbitals (GTOs) that overcomes complications associated with
linear dependencies and ill-conditioned matrices that arise when diffuse
functions are included in the basis. Until now, there has been no systematic
study of the convergence of GTO basis sets for periodic solids either at the
nonrelativistic or the relativistic level. Here we provide such a study of
relativistic band structures of the 2D TMDs in the hexagonal (2H), tetragonal
(1T), and distorted tetragonal (1T') structures, along with a discussion of
their SOC-driven properties (Rashba splitting and $\mathbb{Z}_2$ topological
invariants). We demonstrate the viability of our approach even when large basis
sets with multiple basis functions involving various valence orbitals (denoted
triple- and quadruple-$\zeta$) are used in the relativistic regime. Our method
does not require the use of pseudopotentials and provides access to all
electronic states within the same framework. Our study paves the way for direct
studies of material properties, such as the parameters in spin Hamiltonians,
that depend heavily on the electron density near atomic nuclei where
relativistic and SOC effects are the strongest. | Marius Kadek, Baokai Wang, Marc Joosten, Wei-Chi Chiu, Francois Mairesse, Michal Repisky, Kenneth Ruud, Arun Bansil | 2023-01-31T19:19:48Z | http://arxiv.org/abs/2302.00041v3 | Band structures and \(\mathbb{Z}_{2}\) invariants of 2D transition metal dichalcogenide monolayers from fully-relativistic Dirac-Kohn-Sham theory using Gaussian-type orbitals
###### Abstract
Two-dimensional (2D) materials exhibit a wide range of remarkable phenomena, many of which owe their existence to the relativistic spin-orbit coupling (SOC) effects. To understand and predict properties of materials containing heavy elements, such as the transition-metal dichalcogenides (TMDs), relativistic effects must be taken into account in first-principles calculations. We present an all-electron method based on the four-component Dirac Hamiltonian and Gaussian-type orbitals (GTOs) that overcomes complications associated with linear dependencies and ill-conditioned matrices that arise when diffuse functions are included in the basis. Until now, there has been no systematic study of the convergence of GTO basis sets for periodic solids either at the nonrelativistic or the relativistic level. Here we provide such a study of relativistic band structures of the 2D TMDs in the hexagonal (2H), tetragonal (1T), and distorted tetragonal (1T') structures, along with a discussion of their SOC-driven properties (Rashba splitting and \(\mathbb{Z}_{2}\) topological invariants). We demonstrate the viability of our approach even when large basis sets with multiple basis functions involving various valence orbitals (denoted triple- and quadruple-\(\zeta\)) are used in the relativistic regime. Our method does not require the use of pseudopotentials and provides access to all electronic states within the same framework. Our study paves the way for direct studies of material properties, such as the parameters in spin Hamiltonians, that depend heavily on the electron density near atomic nuclei where relativistic and SOC effects are the strongest.
## I Introduction
Two-dimensional (2D) materials [1; 2] are solids in which atoms or compounds are bound together by strong bonds (_e.g._ covalent or ionic) along two spatial dimensions, confining electron transport to a plane. In the out-of-plane dimension, weaker (van der Waals) forces enable the synthesis of materials with thicknesses of only a few atomic layers [3; 4; 5; 6]. 2D materials have recently become very attractive as they exhibit a number of remarkable transport [7; 8], topological [9; 10; 11], thermoelectric [12], and optoelectronic [13] properties that can be exploited to develop novel devices for quantum computing [14; 15], field-effect transistors [16; 17], low-power logic devices [18], and strain-controllable light-emitting devices [19]. Advantages of 2D materials over conventional solids stem from their thin surfaces which enable considerable manipulation of their properties that can be controlled using defects, adatoms, and electric-field gating [20; 21; 22]. In addition, atomically thin materials can be manually assembled to form multilayered heterostructures [11; 23; 24; 17] to combine functionalities of individual layers [25].
Monolayers of the transition-metal dichalcogenides (TMDs) [26] are 2D materials of type MX\({}_{2}\), where M is a transition-metal atom (Mo, W,...), and X is a chalcogen atom (S, Se, Te), and constitute basic building blocks for many heterostructures. TMDs have been found to host a number of exotic physical phenomena, such as the quantum spin Hall (QSH) [11] and nonlinear anomalous Hall [27] effects, higher-order topology [28], giant Rashba spin-splittings of valence bands [29], as well as various correlated phases, _e.g._ charge density waves [30; 31], superconductivity and ferromagnetism [32]. TMDs typically contain elements from the lower part of the periodic table, where the relativistic theory of electrons is unavoidable. Many interesting properties of the TMDs owe their existence to the relativistic link between the spin and orbital degrees of freedom, _i.e._ the spin-orbit coupling (SOC) effects. The ability to control and tune SOC in TMDs opens new possibilities for spintronics and valleytronics devices based on non-magnetic materials, where spin is manipulated by electric means only [33; 34; 35; 36]. Furthermore, SOC generates opposite effective Zeeman fields at the K and K' valleys of the TMDs, which enables formation of Cooper pairs and opens the possibility of observing topological superconductivity and Majorana fermions [37; 38]. Recently, antisite defects in TMDs were proposed to be suitable for hosting solid-state spin qubits, where SOC enables transitions between different spin configurations required for qubit operations [39].
First-principles computational approaches enable systematic parameter-free material-specific predictions in novel materials. However, due to the presence of heavy elements in the TMDs, it is important that relativistic and SOC effects are accounted for in the theoretical framework. In this connection, the four-component (4c) Dirac Hamiltonian [40; 41; 42]
\[\hat{H}=\begin{pmatrix}V(\mathbf{r})&c\mathbf{\sigma}\cdot\mathbf{\pi}\\ c\mathbf{\sigma}\cdot\mathbf{\pi}&V(\mathbf{r})-2c^{2}\end{pmatrix}, \tag{1}\]
is the commonly accepted gold standard, as it is exact in the limit of noninteracting relativistic electrons. Here, \(\mathbf{\sigma}\) are the Pauli matrices, \(\mathbf{\pi}\equiv-i\mathbf{\nabla}+\mathbf{A}\) is the canonical momentum operator of the electron, \(c\) is the speed of light, and atomic units (\(e=\hbar=m_{e}=1\)) are used. The scalar and vector potentials \(V\) and \(\mathbf{A}\), respectively, describe the interaction of the electron with external electromagnetic fields; \(V\) typically contains interactions with nuclear charges. The lower components of the Dirac bispinor wave functions are associated with the negative-energy states, and various strategies for their elimination lead to a number of approximate two-component (2c) schemes [43; 44].
The multiple wave-function components needed in relativistic simulations significantly increase the computational cost as larger matrix and vector dimensions are involved. Kohn-Sham (KS) density functional theory (DFT) [45; 46] brings relativistic calculations to an affordable level and remains the method of choice for large solid-state systems containing heavy elements. However, we note the work of Yeh _et al._[47] on the self-consistent \(GW\) method at the relativistic 2c level of theory. In KS theory, the potential in Eq. (1) also contains terms that depend on the electron density and its gradient: the mean-field Coulomb potential generated by other electrons, and the exchange-correlation potential [48; 49; 50; 41; 41]. The relativistic corrections to the instantaneous electron-electron Coulomb interactions are here neglected. If SOC is treated self-consistently, expensive KS Hamiltonian matrix constructions must be carried out in the multicomponent regime. As a consequence, most internal parts of quantum mechanical codes must be reconsidered and adapted. A transparent theoretical formalism that isolates the SOC terms, reduces computational cost, and simplifies the implementation of fully relativistic theories, can be achieved using quaternion algebra [52; 53; 54; 55].
The most commonly applied strategies incorporate SOC in relativistic pseudopotentials and the related projector-augmented wave methods [56; 57; 58; 59; 60], where the oscillating wave function with complicated nodal structure in the core region close to the nuclei is replaced by a smooth pseudo-wavefunction. However, all-electron approaches are necessary to describe Rashba-like spin-splitting induced by distortions of the wave function close to nucleus [61], when nuclear spins interact with the electrons [62], or in situations where high accuracy is desired [63; 64]. Linearized augmented plane-wave [65; 66; 67] and full-potential linear muffin-tin orbital [68; 69; 70] methods enable all-electron calculations by constructing muffin-tin spheres around all atoms and expressing the all-electron wave functions using orbitals inside these spheres and plane waves in the interstitial region. In such cases, treatment of SOC must be handled separately in the two distinct regions and is sometimes neglected outside of the muffin-tin spheres.
Our goal is to provide an alternative approach for modelling heavy-element-containing 2D materials and their properties. The technique is based on expanding the 4c Bloch wave functions \(\psi_{n}(\mathbf{k};\mathbf{r})\) using linear combinations of atom-centered real-space bispinor basis functions \(\chi_{\mu}(\mathbf{r})\):
\[\psi_{n}(\mathbf{k};\mathbf{r})=\frac{1}{\sqrt{|\mathcal{K}|}}\sum_{\mathbf{R},\mu}e^{i\bm {k}\cdot\mathbf{R}}\chi_{\mu}(\mathbf{r}-\mathbf{R})c^{\mu}_{n}(\mathbf{k}), \tag{2}\]
where \(n\) is the band index, \(\mathbf{k}\) is the reciprocal-space vector (quasi-momentum), \(\mathbf{R}\) denotes the Bravais lattice vector of the respective unit cell in the translationally invariant system (\(\mathbf{R}=\mathbf{0}\) is the reference unit cell), index \(\mu\) runs over the scalar basis functions in the unit cell, and the normalization constant is the inverse square root of the volume of the primitive reciprocal unit cell \(|\mathcal{K}|\). The expansion coefficients \(c^{\mu}_{n}(\mathbf{k})\) and the ground-state electronic structure are obtained by solving the matrix form of the reciprocal-space Dirac-Kohn-Sham (DKS) generalized eigenvalue equation
\[H(\mathbf{k})c(\mathbf{k})=S(\mathbf{k})c(\mathbf{k})\varepsilon(\mathbf{k}), \tag{3}\]
where \(\varepsilon(\mathbf{k})\) is the diagonal matrix of eigenvalues (band energies), and \(H(\mathbf{k})\) and \(S(\mathbf{k})\) are the reciprocal-space DKS Hamiltonian and overlap matrices, respectively, with the elements:
\[H_{\mu\mu^{\prime}}(\mathbf{k}) =\sum_{\mathbf{R}}e^{i\mathbf{k}\cdot\mathbf{R}}\int_{\mathbb{R}^{3}}\chi_{ \mu}^{\dagger}(\mathbf{r})\hat{H}\chi_{\mu^{\prime}}(\mathbf{r}-\mathbf{R})d^{3}\mathbf{r}, \tag{4a}\] \[S_{\mu\mu^{\prime}}(\mathbf{k}) =\sum_{\mathbf{R}}e^{i\mathbf{k}\cdot\mathbf{R}}\int_{\mathbb{R}^{3}}\chi_{\mu }^{\dagger}(\mathbf{r})\chi_{\mu^{\prime}}(\mathbf{r}-\mathbf{R})d^{3}\mathbf{r}. \tag{4b}\]
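As an illustration of how Eqs. (3) and (4) are used in practice, a minimal sketch is given below; it assumes dense NumPy matrices, a dictionary of precomputed real-space blocks keyed by lattice vectors, a Hermitian \(H(\mathbf{k})\), and a positive-definite \(S(\mathbf{k})\), and it omits the orthonormalization and self-consistency machinery of an actual implementation such as ReSpect.

```python
# Illustrative sketch only: assemble H(k), S(k) from real-space blocks and solve
# the generalized eigenvalue problem of Eq. (3) at a single k point.
import numpy as np
from scipy.linalg import eigh

def k_space_matrix(blocks_R, k):
    """M(k) = sum_R exp(i k.R) M_R; blocks_R maps lattice vectors R (tuples) to M_R."""
    Mk = None
    for R, M_R in blocks_R.items():
        phase = np.exp(1j * np.dot(k, np.asarray(R, dtype=float)))
        Mk = phase * M_R if Mk is None else Mk + phase * M_R
    return Mk

def solve_bands(H_blocks, S_blocks, k):
    Hk = k_space_matrix(H_blocks, k)    # assumed Hermitian
    Sk = k_space_matrix(S_blocks, k)    # assumed positive definite
    eps, c = eigh(Hk, Sk)               # solves H(k) c = S(k) c eps
    return eps, c                       # band energies and coefficients
```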
Due to the dependence of \(H(\mathbf{k})\) on the electron density and its gradient that are determined from \(c(\mathbf{k})\), Eq. (3) must be solved self-consistently, though the SOC terms are often neglected in the self-consistent procedure and only included a posteriori as a perturbation to save computational time [71]. Techniques that employ various choices for the basis functions \(\chi_{\mu}\), such as numerical [72; 73; 74], Slater-type [75; 76], or Gaussian-type basis sets [77; 54], have been developed for periodic systems at various relativistic levels of the theory.
The method presented in this work is based on Gaussian-type orbitals (GTOs) commonly used in quantum chemistry for calculations of properties, electronic structures, and electron correlations in molecules. The use of GTOs for solids offers several advantages. For instance, a unified representation of wave functions in a variety of systems (molecules, polymers, 2D materials,
crystals) allows for building on existing quantum chemistry approaches, such as the accurate treatment of correlation [78; 79; 80; 81], reduced cost of evaluating the exact exchange [82], and efficient algorithms that scale linearly with system size [83; 84]. In 2D materials that are only a few atomic layers thick, Bloch functions are straightforwardly constructed to satisfy the Bloch theorem only across the two periodic dimensions by explicitly restricting \(\mathbf{R}\) and \(\mathbf{k}\) in Eq. (2) to 2D lattice vectors. As a consequence, the system is not artificially replicated in the nonperiodic dimension. For comparison with calculations employing plane waves, this situation would correspond to the limit of infinitely large vacuum layers (or zero hopping) between the replicated images. Finally, explicit self-consistent treatment of SOC for all electronic states on an equal footing without adopting the muffin-tin or pseudopotential approximations provides access for studies of x-ray absorption spectroscopy [85] and magnetic response properties associated with nuclear spins [86; 87].
Modelling solid-state systems with GTOs in the nonrelativistic framework was pioneered by Pisani and Dovesi [88; 89; 90], later joined by several other groups [83; 84; 91]. However, standard GTO basis sets are constructed by optimizing calculations on atoms and thus contain basis functions with small exponents to describe the asymptotic behavior of the atomic wave functions. These diffuse functions severely hamper the application of GTOs to solids and cause numerical instabilities [83; 92; 93; 94]. The numerical issues can be circumvented by removing the most diffuse basis functions at the onset of the calculation [95; 96; 97] or by constructing system-specific basis sets using reoptimized Gaussian exponents and contraction coefficients [93; 98]. The former strategy risks producing low quality results [54; 99] whereas the latter strategy sacrifices the transferability of the so-constructed basis sets and requires availability of advanced code features (basis set optimization). Pre-optimized all-electron basis sets are also not available for heavy elements [93]. The importance of numerical stability is even more pronounced in relativistic theories where the additional wave-function components increase the variational freedom, and the restricted kinetic balance (RKB) condition [100] must be satisfied to prevent the collapse of the spectral gap between the positive- and negative-energy states.
In this paper, we demonstrate that accurate and converged results can be obtained for 2D TMD monolayers in various structural phases using commonly available all-electron valence triple-\(\zeta\) basis sets in the fully relativistic framework without the need of modifying or reoptimizing the basis functions. Our approach builds on the quaternion algebra-based theory [54] implemented here with careful numerical considerations to ensure robustness of the implementation and the resolution-of-the-identity (RI) approximation for the Coulomb four-center integrals [101; 102; 103] that reduces the computational cost of otherwise time-consuming calculations by more than three orders of magnitude. We show how the canonical orthogonalization can be used to construct numerically well-behaved orthonormal bases in momentum space in the 4c setting and derive a criterion for assessing the quality of the RKB condition in such bases. We discuss a parallel algorithm for the electronic structure solver exhibiting minimal input/output (IO) disk communication and memory overhead that is applicable to unit cells with several thousand basis functions and thousands of \(\mathbf{k}\) points. The method presented here is implemented in the ReSpect code [55] and used to calculate the band structures of the hexagonal (2H), tetragonal (1T), and distorted tetragonal (1T') phases of selected TMDs. For the 1T' phase, we evaluate the \(\mathbb{Z}_{2}\) invariant within our real-space GTO-based scheme and confirm the findings of Qian [11] that the 1T' phase of MX\({}_{2}\) is topologically nontrivial.
## II Results and discussion
We calculated the electronic band structures of six TMDs (MoS\({}_{2}\), MoSe\({}_{2}\), MoTe\({}_{2}\), WS\({}_{2}\), WSe\({}_{2}\), WTe\({}_{2}\)) in the 2H, 1T, and 1T' structural phases using our 4c all-electron GTO method. We compared our 2H and 1T results with the band structures presented in Ref. [6] obtained from the method of Te Velde and Baerends [104]. For the 1T' phase, we used the Vasp program package [105] to investigate how our all-electron method performs compared to the pseudopotential approximation. Due to the presence of heavy atoms, all these TMDs exhibit significant scalar relativistic and SOC effects. We investigated three different levels of theory: nonrelativistic (nr) theory with the infinite speed of light, Dirac-based 4c scalar relativistic (sr) theory without SOC, and Dirac-based 4c fully relativistic (fr) theory with SOC included. For all calculations reported in this work, we employed the nonrelativistic GGA-type XC functional PBE [106]. For the 4c calculations, the Gaussian finite nucleus model of Visscher and Dyall [107] was used instead of point nuclei in order to regularize the singularity of the lower components of the wave function evaluated at the atomic centers. All electronic structure optimizations were performed using the relaxed unit cell geometries taken from Ref. [6] (2H and 1T phases) and Ref. [11] (1T' phase). For the momentum-space Brillouin zone integration, we used the \(\Gamma\)-centered mesh of \(\mathbf{k}\) points of \(11\times 11\) for the 2H and 1T' phases and \(33\times 33\) for the metallic 1T phase. When comparing band structures \(\varepsilon_{n}^{(1)}(\mathbf{k})\) and \(\varepsilon_{n}^{(2)}(\mathbf{k})\) obtained using two different methods or computational settings, we evaluate the maximum difference between the energy eigenvalues as well as the root-mean-square deviation ("band delta") [71]
\[\Delta_{\text{b}}(\mathcal{W})=\sqrt{\frac{1}{N_{E}}\sum_{\begin{subarray}{c}\mathbf{k},n\\ \varepsilon_{n}^{(1)}(\mathbf{k})\in\mathcal{W}\\ \varepsilon_{n}^{(2)}(\mathbf{k})\in\mathcal{W}\end{subarray}}\left(\varepsilon_{n}^{(1)}(\mathbf{k})-\varepsilon_{n}^{(2)}(\mathbf{k})\right)^{2}}, \tag{5}\]
where \(\mathcal{W}=[\varepsilon_{1},\varepsilon_{u}]\) is an energy window chosen for comparison, the summations run over all energy states along a path of \(\mathbf{k}\) points for which both \(\varepsilon_{n}^{(1)}(\mathbf{k})\) and \(\varepsilon_{n}^{(2)}(\mathbf{k})\) lie inside the window \(\mathcal{W}\), and \(N_{E}\) is the total number of such states.
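A minimal sketch of how Eq. (5) can be evaluated is shown below; the array layout (one row per \(\mathbf{k}\) point, one column per band) is an assumption made for the illustration.

```python
# Illustrative sketch of the band-delta metric of Eq. (5).
import numpy as np

def band_delta(eps1, eps2, window):
    """eps1, eps2: arrays of shape (n_k, n_bands); window = (lower, upper) in eV."""
    lo, hi = window
    mask = (eps1 >= lo) & (eps1 <= hi) & (eps2 >= lo) & (eps2 <= hi)
    if not mask.any():
        return 0.0
    return float(np.sqrt(np.mean((eps1[mask] - eps2[mask]) ** 2)))
```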
### Numerical and basis-set accuracy
Numerical instabilities in simulations involving periodic systems with GTOs originate from two distinct, albeit related, sources [83; 92]. First, the numerical accuracy of the KS Hamiltonian matrix can be insufficient when various approximations necessary for solid-state simulations are introduced for basis sets with diffuse functions. Diffuse basis functions can extend over a very large number of unit cells, and the overlap between two such functions exhibits a much slower decay than their more localized counterparts. If these overlaps are neglected too soon, the summations in Eqs. (4) are not converged, causing the overlap matrix \(S_{\mu\mu^{\prime}}(\mathbf{k})\) to be indefinite, _i.e._ having negative eigenvalues. This problem can be mitigated by including a large number of basis function products \(\chi_{\mu}^{\dagger}(\mathbf{r})\chi_{\mu^{\prime}}(\mathbf{r}-\mathbf{R})\), which in turn increases the memory requirements for storage of matrices. To this end, in the periodic module of ReSpect, we implemented sparse data structures in real space in combination with quaternion algebra to efficiently handle spin-orbit-coupled wave-function components. Such a versatile approach allows us to retain the accuracy necessary for numerical robustness with a very small additional memory footprint.
The second problem is associated with overcompleteness (linear dependencies) of the basis that causes matrices in Eq. (3) to be ill-conditioned and introduces errors and instabilities in matrix operations such as diagonalization. This occurs when the eigenvalues of the momentum-space overlap matrix drop below \(10^{-7}\), which is often the case for larger basis sets (such as TZ). For instance, the smallest eigenvalue of the overlap matrix of WTe\({}_{2}\) in the 1T' phase is \(10^{-11}\), while the remaining eigenvalues span over 12 orders of magnitude. Subsequently, very large elements \(\sim 10^{8}\) are accumulated in the density matrix, which prohibits the successful completion of the ground-state convergence procedure. To avoid such errors, we construct an orthonormal basis so that the redundant basis functions are projected out. However, we found that truncating the space spanned by orthonormal basis functions used for describing the lower wave-function components led to the formation of artificial states inside the band gap for some \(\mathbf{k}\) points of the band structure. This can be understood by realizing that the RKB condition is violated by the truncation process. In order to numerically control how much of the RKB is lost, we derived the matrix form of the RKB condition in a general basis and the expression for the truncation error. More details on the process of constructing the orthonormal basis in the 4c framework are given in the Methods, Sec. IV.1.
For all atoms, we employed the relativistic triple-\(\zeta\) (TZ) basis sets developed by Dyall for d elements [108; 109; 110] and for p elements [111; 112]. The basis sets used were uncontracted and contained high-angular momentum correlating functions for several outer shells and functions for dipole polarization of the valence shells. Our initial tests using the smaller double-\(\zeta\) (DZ) basis from the same family yielded results with inconsistent quality across \(\mathbf{k}\) points - for instance, the band energy at the \(M\) point of 2H MoS\({}_{2}\) was improved by 20 meV when the TZ basis was used, while for other \(\mathbf{k}\) points, this difference was only about 6 meV. Similarly, the differences in the band structures of 2H WS\({}_{2}\) (including the band gap) between the DZ and TZ basis sets were 20-30 meV. Hence, to ensure that our results are well-converged for all \(\mathbf{k}\) points and that the target accuracy is sufficient for reliable comparisons with other methods, we chose the TZ basis in all calculations reported here.
The basis sets employed contained very diffuse functions with several exponents \(<10^{-2}\) a.u. The extent of the most diffuse function on Mo and W was 18.2 A and 17.2 A, respectively, causing the basis function to span more than a hundred unit cells of the 2D film. This in turn results in several hundred thousand charge distributions interacting with themselves and their periodic replicas when the four-center Coulomb integrals are evaluated. Such calculations are currently unfeasible on common supercomputers as they can consume a notable portion of allocated resources. However, within the RI approximation, the charge distributions only interact with the electron density that is expanded using an atom-centered auxiliary basis. Since the number of three-center terms needed is typically smaller than the number of four-center terms by three orders of magnitude, the evaluation of the Coulomb operator becomes affordable. Tests that we performed on various systems, including the selected 2H and 1T' structures studied here, in both the nr and fr settings indicate that the maximum error introduced by the RI approximation is very small - only a few tens of \(\mu\)eV.
Our implementation allowed us to explore the validity of removing diffuse functions from the GTO basis sets before starting the electronic-structure optimization -- an approach that is commonly employed for the sake of accelerating and numerically stabilizing calculations of periodic systems using GTOs [95; 96; 97]. From the TZ basis sets, we deleted all exponents \(<0.1\) which for most atoms meant removing one to two functions in each of the s, p, d, and f shells. Even though this modification reduced the computational cost by a factor between five and six in case of the nr theory and a factor of four in case of the 4c theory, we observed lowered accuracy of the eigenenergies of some bands and \(\mathbf{k}\) points. For instance, the difference between the energy of the \(\Gamma\) point of the valence band of 2H WTe\({}_{2}\) obtained from the calculation using the truncated and full basis sets was 0.1 eV. The virtual bands located 2.5 eV above the Fermi level
were, in general, poorly described with the truncated basis set. Hence, our results indicate that removing the diffuse functions at the onset of the calculation is neither justified nor necessary.
### Stability of phases
Before inspecting individual structural phases and validating our approach, we compare the total energies of each phase. Our results confirm the findings of Qian _et al._[11] -- for all systems except WTe\({}_{2}\), the 2H phase was found to be the ground-state structure with the lowest energy, followed by the higher-energy 1T' phase. The metallic 1T phase is thermodynamically unstable and the systems undergo Jahn-Teller distortion, relaxing to the 1T' phase [11]. In the case of WTe\({}_{2}\), the energy of the 1T' phase is below the 2H phase, which makes the 1T' phase the most stable.
Our study allows for a uniform treatment of relativistic effects for all atoms and all electrons, as opposed to calculations employing pseudopotentials, which offer less control over how relativity is handled for light and heavy elements. We found that it is the inclusion of the scalar relativistic effects that decreases the energy of the 1T' phase with respect to the 2H phase. Including the SOC further lowers the energy of the 1T' phase. For the heaviest system, WTe\({}_{2}\), this energy lowering becomes sufficient to change the ground-state phase from 2H to 1T'. Fig. 1 summarizes our results for all systems at the nr, sr, and fr levels of theory and shows the relative energy differences between the 1T' and 2H phases. Hence, without relativity, the ground-state structure of all six systems would be 2H.
### 2H and 1T phases
The 2H and 1T phases of the TMDs both contain three atoms (one metal and two chalcogenides) in the primitive unit cell. However, the structural differences between these two phases lead to their distinct electronic properties. Whereas the 1T phase is space-inversion-symmetric with a metallic electronic structure, the 2H systems are semiconductors with broken space inversion. In combination with the strong SOC, this broken inversion symmetry leads to "giant" spin-orbit-induced Rashba splittings in 2H MX\({}_{2}\)[29].
To validate the approach presented in this work, we calculated the band structures at the nr, sr, and fr levels of theory using all-electron bases. The resulting band diagrams traversing high-symmetry \(\mathbf{k}\) points obtained with and without SOC can be found in Figs. S1 (2H) and S2 (1T) in the supplementary material (SM). The nonrelativistic results differ significantly from those obtained from the relativistic theories and are not shown. For the 2H and 1T phases, sr band structures are available in the 2D materials' Atlas of Miro, Audiffred, and Heine [6], which we used for comparison. We conclude that our results agree very well with the Atlas results at the sr level.
In the MoTe\({}_{2}\) and WTe\({}_{2}\) band structures of the 1T phase shown in Fig. S2, it is possible to see very small SOC-induced "Rashba-like" spin splittings of the order of tens of meV. However, such splittings should not be present for the inversion-symmetric 1T phase, and all the bands should be strictly doubly degenerate. We found that this observation can be attributed to small deviations (about 0.04 A) in the unit cell geometry of MoTe\({}_{2}\) and WTe\({}_{2}\) from the tetragonal lattice, that is, the 1T structure used does not exhibit exact inversion symmetry. The geometries of the other systems do not show any deviations from 1T, and we observed exact double degeneracy of bands. Finally, we note that including the
\begin{table}
\begin{tabular}{l c c c c} & \multicolumn{3}{c}{Band gap (E\({}_{g}\), eV)} & \multicolumn{1}{c}{Rashba splitting (eV)} \\ \cline{2-5} & nr & sr & fr & fr \\ \hline MoS\({}_{2}\) & 1.820 & 1.812 & 1.734 & 0.146 \\ & & (1.82) & (1.74) & [0.147] \\ MoSe\({}_{2}\) & 1.560 & 1.550 & 1.442 & 0.185 \\ & & (1.56) & (1.45) & [0.186] \\ MoTe\({}_{2}\) & 1.171 & 1.154 & 1.018 & 0.218 \\ & & (1.15) & (1.01) & \\ WS\({}_{2}\) & 2.060 & 1.984 & 1.628 & 0.416 \\ & & (1.98) & (1.64) & [0.433] \\ WSe\({}_{2}\) & 1.741 & 1.630 & 1.300 & 0.460 \\ & & (1.63) & (1.33) & [0.463] \\ WTe\({}_{2}\) & 1.332 & 1.194 & 0.863 & 0.479 \\ & & (1.18) & (0.87) & \\ \end{tabular}
\end{table}
Table 1: Band gaps and SOC-induced (Rashba) splitting of the valence band at the K point of TMD monolayers in the 2H phase calculated for nr, sr, and fr Hamiltonian. The values in the round and square brackets are taken from Refs. [6] and [13], respectively.
Figure 1: Relative total energy of the 1T’ phase of TMD monolayers per MX\({}_{2}\) calculated with respect to the 2H phase. The results were obtained for nr, sr, and fr Hamiltonians and show the relativistic effects on the structural stability.
SOC in the computational framework lifted the degeneracy occurring at intersection points of the overlapping valence and conduction bands of the metallic 1T phase. The values of the SOC-induced vertical gaps at the K point are reported in Table SI of the SM.
Table 1 contains the values of the nr, sr, and fr band gaps of the 2H phase, as well as the Rashba splittings of the valence band at the K point. We attribute the small discrepancies between the results obtained with the method presented here and the Atlas approach to differences in the chosen methodologies. In the Atlas work, the relativistic effects are treated with the zeroth-order regular approximation (ZORA) [114], which is a 2c technique that requires numerical integration schemes due to the appearance of the potential in the denominator of the Hamiltonian. In contrast, the Hamiltonian used here is 4c with most of the integrals evaluated analytically. The basis sets employed should not cause any significant deviations, since both sets of calculations were carried out with TZ-quality bases with several polarization functions. Finally, we note that the Atlas results included empirical D3 treatment of London dispersion interactions [115], which we did not consider in this work. For WSe\({}_{2}\), for which we observed the largest difference of 30 meV in the fr setting, we furthermore performed a benchmark calculation with the bigger quadruple-\(\zeta\) (QZ) basis set. The band energies improved only marginally, _i.e._ by units of meV. Hence, we believe that our band structures are well converged with respect to the basis set as well as the relativistic Hamiltonian and should serve as reference results for these systems for the PBE functional.
### 1T' phase
The 1T' phase of MX\({}_{2}\) is formed from the 1T structure by a spontaneous lattice distortion in which the unit cell period is doubled along one in-plane direction, which creates zigzag chains along the perpendicular in-plane direction [117]. The TMDs were shown to host the QSH effect in the 1T' structure, _i.e._ the existence of time-reversal-symmetry protected edge states that enable conduction of electrical currents on the surface of the material [11]. From a computational viewpoint, the 1T' phase has not been studied in the literature to the same extent as the 2H and 1T counterparts. Here, we calculated the electronic band structures of the 1T' MX\({}_{2}\) using the all-electron 4c method presented in this work as well as the pseudopotential method as implemented in the Vasp program package [105]. For the Vasp calculations, we used an energy cutoff of 500 eV and the Brillouin zone \(\mathbf{k}\)-point sampling of \(11\times 15\times 1\). We also tested a denser mesh of \(\mathbf{k}\) points without observing any notable differences. To avoid artificial interactions between the periodic images of the MX\({}_{2}\) layers that appear when 2D systems are studied using plane waves with the periodic boundary conditions, a vacuum region of 20 A was applied in the direction perpendicular to the 2D film.
The band structures obtained from the ReSpect and Vasp codes with SOC are presented in Fig. 3. The values of the band delta differences between the two codes are shown in Table 2, evaluated separately for the occupied (valence) and virtual (conduction) energy regions. In general, the differences are of the order of tens of meV and are larger in the virtual region. The biggest discrepancies close to the Fermi level are seen in WS\({}_{2}\), while the overall largest band delta values are found in MoS\({}_{2}\) and WTe\({}_{2}\). For comparison, we performed tests of the numerical convergence of the GTO method by tuning various parameters of the ReSpect code. In particular, we modified the number of removed momentum-space basis functions in the orthonormal basis (see Sec. IV.1), tightened the lattice-sum convergence thresholds, and turned off the RI approximation of the Coulomb integrals. Such modifications had a negligible effect on our GTO results; the band delta values between the higher accuracy calculations and the results shown here reached tens of \(\mu\)eV. Bigger differences were observed only when we benchmarked the WS\({}_{2}\) TZ calculation against the QZ basis set. Still, the energies of the bands close to the Fermi level changed by 1-6 meV, and the values of the band deltas for the intervals \(-10\) eV to 0 eV and 0 eV to 5 eV were \(\Delta_{\rm b}({\cal W}_{\rm val})\approx 4\) meV and \(\Delta_{\rm b}({\cal W}_{\rm con})\approx 10\) meV, respectively. To conclude, although minor improvements to the band structure are possible when larger basis sets are employed in the calculations, our TZ results can be considered a reference for benchmarking the performance of other approximate methods, for instance, for studying the accuracy of various pseudopotentials.
\begin{table}
\begin{tabular}{l c c c c c c} & \multicolumn{4}{c}{Band delta (meV)} \\ & MoS\({}_{2}\) & MoSe\({}_{2}\) & MoTe\({}_{2}\) & WS\({}_{2}\) & WSe\({}_{2}\) & WTe\({}_{2}\) \\ \hline \(\Delta_{\rm b}({\cal W}_{\rm val})\) & 12.0 & 8.4 & 11.7 & 34.6 & 25.6 & 20.2 \\ \(\Delta_{\rm b}({\cal W}_{\rm con})\) & 40.5 & 28.4 & 32.8 & 26.2 & 23.4 & 42.0 \\ \end{tabular}
\end{table}
Table 2: Root-mean-square deviation between the band structures of TMD monolayers obtained from the ReSpect and Vasp codes at the fr level of theory (including SOC) using Eq. (5). The valence region \({\cal W}_{\rm val}\) ranges from \(-10\) eV to 0 eV (shifted to the Fermi level) and the conduction region \({\cal W}_{\rm con}\) ranges from 0 eV to 5 eV.
Figure 2: The structure of the 1T’ phase of monolayer TMDs.
We explored the topological nature of the 1T' phase of the MX\({}_{2}\) systems by first verifying that the band-gap opening is due to the SOC. Fig. S3 in the SM shows that energy degeneracies occurring at points where the valence band crosses the conduction band are lifted when SOC is included in the description. A band gap is opened for all systems except MoTe\({}_{2}\) and WTe\({}_{2}\), which remain metallic despite the band inversion. The lattice distortion in the 1T' phase of the 2D TMDs does not break the inversion symmetry. Hence, we used the parity of the Bloch states evaluated at the time-reversal invariant momenta (TRIM) to calculate the \(\mathbb{Z}_{2}\) invariant [118]. Our results for the parities of the individual Kramers pairs agree with those obtained by Qian _et al._ [11], which yields \((-1)^{\nu}=-1\) (\(\nu=1\)) for all six TMDs and confirms the existence of the surface edge states hallmarking the quantum spin Hall phase.
## III Conclusion and outlook
2D TMDs rank among the materials attracting the highest interest for hosting exotic phenomena that promise applications in spintronic devices and quantum computing. Theoretical prediction and understanding of such applications rely on an accurate description of relativistic effects in the heavy atoms that are the main constituents of these materials. To enable studies of materials that are strongly influenced by relativistic effects and by the electron wave functions close to nuclei, and to offer a uniform theoretical framework for solids and molecules in which all electronic states are described on an equal footing, we have presented an approach that builds on the most widely used basis sets in quantum chemistry.
GTOs provide a convenient link between finite and periodic or extended systems, yet their use for solids has been met with many technical difficulties that sparked scepticism in the community; it was even suggested that GTOs should be avoided altogether for condensed matter calculations [99]. However, in this work, we have shown how the common limitations associated with GTOs can be overcome and demonstrated that convergence towards the basis-set limit is possible even in the fully relativistic 4c setting. To mitigate numerical and performance issues, our method is based on quaternion algebra, linear-scaling data structures, the RI approximation of the Coulomb integrals, orthonormal momentum-space bases that sufficiently preserve the RKB condition, and a parallel electronic structure solver with minimal required IO operations. Our in-depth analysis of the 2H, 1T, and 1T' phases of 2D TMDs indicates that our all-electron 4c results can serve as the reference for developing pseudopotentials or approximate 2c relativistic techniques. Finally, we confirmed the existence of the quantum spin Hall effect in the 1T' phase of MX\({}_{2}\) by calculating the \(\mathbb{Z}_{2}\) invariant within our real-space formulation, and we attributed the lower ground-state energy of WTe\({}_{2}\) in the 1T' phase compared to the 2H phase to the (scalar) relativistic effects.
Figure 3: Comparison of band structures of MX\({}_{2}\) monolayers in the 1T’ phase obtained from the ReSpect (dashed lines) and Vasp (full lines) codes at the relativistic level of theory including SOC. From the top left to bottom right: MoS\({}_{2}\), MoSe\({}_{2}\), MoTe\({}_{2}\), WS\({}_{2}\), WSe\({}_{2}\), WTe\({}_{2}\). The horizontal dashed black line marks the Fermi level. The path traversing high-symmetry \(\mathbf{k}\)-points in the reciprocal-space unit cell was chosen according to Ref. [116].
The work here opens a pathway for further developments that leverage quantum chemistry methods for studying properties strongly affected by relativity and SOC without sacrificing the possibility of directly modelling the core region of the electron charge and spin densities. For instance, accurate relativistic first-principles calculations of spin Hamiltonian parameters (_e.g._ hyperfine coupling constants, Zeeman interactions, or zero-field splittings) are needed for determining the spin dynamical properties of spin defects in semiconductors [62] and can help identify suitable materials for hosting qubits. Our all-electron method can also be used for the generation of new, accurate pseudopotentials, particularly for systems containing heavy elements in high oxidation states.
## IV Methods
### Construction of orthonormal momentum-space basis
Throughout this section, we assume Einstein's implicit summation over doubly repeating indices.
In order to avoid variational collapse associated with an incomplete basis representation of the lower components of the wave function [119] and to obtain the correct nonrelativistic limit of the kinetic energy operator in a finite basis, we impose the RKB condition [100] for the basis bispinors \(\chi_{\mu}(\mathbf{r})\) in real space. Hence, we require that [43; 49; 55; 120; 121]
\[\chi_{\mu}(\mathbf{r})\equiv\begin{pmatrix}\mathbb{I}_{2}&0_{2}\\ 0_{2}&\frac{1}{2c}\mathbf{\sigma}\cdot\mathbf{p}\end{pmatrix}g_{\mu}(\mathbf{r}-\mathbf{A}_{ \mu}), \tag{6}\]
where \(\mathbb{I}_{2}\) is the \(2\times 2\) identity matrix, \(\mathbf{p}\equiv-i\mathbf{\nabla}\) is the electron momentum operator, and \(g_{\mu}\) are scalar basis functions centered on atom positions \(\mathbf{A}_{\mu}\). Due to the multicomponent structure of \(\hat{H}\) and \(\chi_{\mu}\), \(H_{\mu\mu^{\prime}}\) and \(S_{\mu\mu^{\prime}}\) in Eq. (4) are \(4\times 4\) complex matrices for each pair \(\mu,\mu^{\prime}\) and for each \(\mathbf{k}\). For the functions \(g_{\mu}\), we choose the primitive spherical Gaussian-type orbitals (GTOs)
\[g_{\mu}(\mathbf{r})\equiv\mathcal{N}Y_{lm}(\vartheta,\varphi)e^{-\alpha\mathbf{r}^{2}}, \tag{7}\]
where \(\mathcal{N}\) is the normalization constant, \(\alpha\) is the Gaussian exponent, and \(Y_{lm}(\vartheta,\varphi)\) are the spherical harmonics. The GTO basis is commonly implemented in many quantum chemistry codes [90; 91; 122; 123; 124; 125; 126]. For computational reasons, integrals in Eqs. (4) are evaluated analytically by solving recurrence relations [127] formulated in terms of the Cartesian GTOs [128]. The resulting integrals are then transformed to spherical GTOs.
To construct an orthonormal basis in momentum space, let us first define a nonorthogonal basis from \(\chi_{\mu\mathbf{R}}(\mathbf{r})\equiv\chi_{\mu}(\mathbf{r}-\mathbf{R})\) as
\[\chi_{\mu}(\mathbf{k})=\frac{1}{\sqrt{|\mathcal{K}|}}\sum_{\mathbf{R}}e^{i\mathbf{k}\cdot \mathbf{R}}\chi_{\mu\mathbf{R}}. \tag{8}\]
A new basis is obtained using the transformation
\[\varphi_{p}(\mathbf{k})=\chi_{\mu}(\mathbf{k})B^{\mu}_{p}(\mathbf{k}). \tag{9}\]
From here, we for clarity drop the dependence of all matrices on \(\mathbf{k}\). The matrix \(B\) can be chosen so that \(\varphi_{p}\) are orthonormal, _i.e._\(B^{\dagger}SB=\mathbb{I}\). We perform the canonical orthogonalization
\[B=Us^{-1/2}\Lambda, \tag{10}\]
where \(U\) is the unitary matrix of eigenvectors and \(s\) is the diagonal matrix of eigenvalues, both obtained from diagonalizing the overlap matrix \(S\). \(\Lambda\) is a rectangular matrix consisting of the square identity matrix and zero rows that correspond to the basis functions that are projected out. In case no projections are required, \(\Lambda\) is simply the identity matrix.
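A minimal sketch of this canonical orthogonalization for a single \(\mathbf{k}\) point is given below; the dense-matrix representation, the function name, and the \(10^{-7}\) threshold are illustrative, and the additional 4c bookkeeping described later in this section is omitted.

```python
# Illustrative sketch of Eq. (10): canonical orthogonalization with removal of
# near-linearly-dependent momentum-space basis functions.
import numpy as np

def canonical_orthogonalization(S_k, threshold=1e-7):
    s, U = np.linalg.eigh(S_k)            # S(k) is Hermitian
    keep = s > threshold                  # Lambda: project out small-eigenvalue vectors
    B = U[:, keep] / np.sqrt(s[keep])     # B = U s^(-1/2) Lambda (rectangular)
    # By construction, B^dagger S(k) B is the identity on the retained subspace.
    return B
```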
In the relativistic 4c theory, \(\chi_{\mu}\) take the bispinor form of Eq. (6), and the \(B^{\mu}_{p}\) become \(4\times 4\) matrices for each \(\mu,p\). However, let us now consider a general basis set of bispinors \(\chi_{\mu}\) with \(\chi^{L}_{\mu}\) and \(\chi^{S}_{\mu}\) denoting the \(2\times 2\) large (upper) and small (lower) components of the basis, respectively. Similarly, let \(\varphi^{L}_{p}\equiv\chi^{L}_{\mu}X^{\mu}_{p}\) and \(\varphi^{S}_{p}\equiv\chi^{S}_{\mu}Y^{\mu}_{p}\) denote the transformed (_e.g._ orthonormalized) large and small components of the basis, respectively, \(X\) and \(Y\) forming a block-diagonal matrix \(B\) in Eq. (9). The DKS Hamiltonian takes the matrix form
\[H=\begin{pmatrix}V_{LL}&c\Pi_{LS}\\ c\Pi_{SL}&V_{SS}-2c^{2}S_{SS}\end{pmatrix}, \tag{11}\]
where \(V_{LL},V_{SS},\Pi_{LS},\Pi_{SL}\) are the matrix representations of their respective operators in the \(\chi_{\mu}\) basis, and \(S_{SS}\) is the overlap matrix for the small-component basis. Ensuring numerical stability requires projecting out basis functions \(\varphi^{L}_{p}\) and \(\varphi^{S}_{p}\) corresponding to the smallest eigenvalues of the 4c overlap matrix. At the same time, removing \(\varphi^{S}_{p}\) functions degrades the RKB condition that is no longer satisfied exactly in the new basis, which can lead to the emergence of artificial in-gap states. Here, we proceed by showing how we numerically track this basis truncation error. First, we express the RKB condition in the matrix form using the arbitrary basis \(\chi_{\mu}\). If we write the 4c eigenvalue problem with the Hamiltonian matrix in Eq. (11) and eliminate the small-component wave-function coefficients, we obtain
\[\left[V_{LL}+\frac{1}{2}\Pi_{LS}\mathcal{B}^{-1}_{SS}(\varepsilon)\Pi_{SL} \right]c_{L}=\varepsilon S_{LL}c_{L}, \tag{12}\]
where
\[\mathcal{B}_{SS}(\varepsilon)=S_{SS}+\frac{\varepsilon S_{SS}-V_{SS}}{2c^{2}}, \tag{13}\]
\(c_{L}\) are the coefficients of the large component of the wave function, and \(\varepsilon\) is the one-electron energy. Imposing the requirement that the nonrelativistic Hamiltonian is recovered in the limit \(c\rightarrow\infty\) gives the following condition
\[T_{LL}=\frac{1}{2}\Pi_{LS}S^{-1}_{SS}\Pi_{SL}, \tag{14}\]
where \(T_{LL}\) is the nonrelativistic kinetic energy matrix in the basis of the large component. We note that the choice of bispinors \(\chi_{\mu}\) as in Eq. (6) satisfies Eq. (14) analytically, since \(\Pi_{LS}=\Pi_{SL}=\frac{1}{c}T_{LL}\) and \(S_{SS}=\frac{1}{2c^{2}}T_{LL}\). In the transformed basis \(\varphi_{p}\), Eq. (14) reads
\[X^{\dagger}T_{LL}X=\frac{1}{2}X^{\dagger}\Pi_{LS}Y\left(Y^{\dagger}S_{SS}Y \right)^{-1}Y^{\dagger}\Pi_{SL}X. \tag{15}\]
This RKB condition remains valid as long as two conditions are met: The former basis \(\chi_{\mu}\) satisfies Eq. (14), and the inverse \(Y^{-1}\) exists, _i.e._ the new basis spans the same space as the former one. By requiring that the new basis is orthonormal (\(Y^{\dagger}S_{SS}Y=\mathbb{I}\)), this condition simplifies to
\[X^{\dagger}T_{LL}X=\frac{1}{2}X^{\dagger}\Pi_{LS}YY^{\dagger}\Pi_{SL}X, \tag{16}\]
and the existence of \(Y^{-1}\) implies that \(S_{SS}^{-1}=YY^{\dagger}\). However, orthonormalization procedures that involve removing eigenvectors in the orthonormal basis of \(\varphi_{p}^{S}\) by using a rectangular \(\Lambda\) in Eq. (10) do not generally preserve the RKB condition. Hence, in the 4c framework, improving the conditioning of matrices by projecting out redundant basis functions can sacrifice the RKB between the \(L\) and \(S\) components.
In order to assess the quality of the orthonormal basis for the lower components of the wave function, we calculate how much of the RKB condition is lost by the transformation to the reduced orthonormal basis. To this end, we evaluate
\[e=\max\left[X^{\dagger}\left(\frac{1}{2c^{2}}T_{LL}YY^{\dagger}T_{LL}-T_{LL} \right)X\right], \tag{17}\]
for each \(\mathbf{k}\). Here, max indicates the largest matrix element. We found that large values of \(e\) (\(>10^{-5}\)) could be associated with the emergence of artificial states in the band structure. To keep the error in the RKB small, for every \(\varphi_{p}^{S}\) function that is projected out, we also remove the corresponding \(\varphi_{p}^{L}\) function regardless of how large the eigenvalue of the overlap matrix \(S_{LL}\) is for this function. This means that the \(\varphi_{p}^{L}\) must be removed even if it is not causing overcompleteness of the large-component subspace. We note that these measures were necessary for moderately-sized (triple-\(\zeta\) and quadruple-\(\zeta\)) basis sets, but the numerical issues were not observed when the smaller double-\(\zeta\) basis was used. Finally, the process of orthonormalization and removal of redundant functions heavily depends on \(\mathbf{k}\), so the implementation of the electronic structure solver must have the flexibility to account for matrices with different sizes for each \(\mathbf{k}\).
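A minimal sketch of how the error measure of Eq. (17) might be evaluated is given below; the dense matrices, the function name, and the value of the speed of light in atomic units are stated assumptions rather than the production implementation.

```python
# Illustrative sketch of the RKB-quality measure of Eq. (17).
import numpy as np

def rkb_truncation_error(T_LL, X, Y, c=137.035999):
    """Largest element of X^dag (T Y Y^dag T / (2 c^2) - T) X for one k point."""
    residual = X.conj().T @ (T_LL @ Y @ Y.conj().T @ T_LL / (2.0 * c**2) - T_LL) @ X
    return np.max(np.abs(residual))

# Values above roughly 1e-5 were found in this work to correlate with artificial
# in-gap states, motivating the paired removal of large- and small-component functions.
```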
### Parallel electronic structure solver
Construction of the ground-state one-electron wave functions within the self-consistent field (SCF) theory consists of two distinct steps that are iterated repeatedly until self-consistency is reached. The DKS Hamiltonian and overlap matrices in Eqs. (4) are first assembled in real space using the GTO basis. These are then transformed to reciprocal space, and Eq. (3) is solved in the orthonormal basis using matrix algebra (multiplications and diagonalizations). Large-scale or high-throughput calculations of band structures of solids require efficient algorithms that offer good (ideally linear) scaling with respect to the number of computer cores used as well as the size of the unit cell. This must be accomplished for both of the SCF solver steps. The demand for efficiency is especially important for all-electron 4c calculations, where the matrix sizes are much larger and even otherwise negligible matrix operations become time-consuming.
The evaluation of the DKS Hamiltonian matrix elements is dominated by the electron-electron Coulomb contributions for DFT simulations with pure exchange-correlation functionals. We adopt the resolution-of-the-identity (RI) approach [101; 102; 103] (also known as the density fitting procedure) combined with multipole expansions [129; 84] to significantly reduce the overall cost of the Coulomb terms. The RI method approximates the electron density by a linear combination of auxiliary basis functions centered on atoms, as opposed to the exact treatment of the four-center Coulomb integrals where the orbital products that constitute the density are expanded individually. Our RI implementation differs from that of previous authors [130; 84], and the theoretical challenges associated with the divergent terms in the Coulomb metric matrix of periodic systems together with details of this RI method are going to be presented elsewhere.
Once the real-space matrix elements are calculated, the remaining steps - transformation to reciprocal space, the orthonormalization procedure, and matrix diagonalization - are performed independently for each \(\mathbf{k}\). We exploit the factorization of the reciprocal-space tasks by employing message-passing-interface (MPI) directives, which allow a high level of parallelization to be achieved since almost no communication is needed among the MPI processes. Each process utilizes multiple OpenMP threads when calling internally parallelized matrix libraries. However, in order to construct the electron density
\[\rho(\mathbf{r})=\sum_{n}\int_{\mathcal{K}}f_{n}(\mathbf{k})\psi_{n}^{\dagger}(\mathbf{k}; \mathbf{r})\psi_{n}(\mathbf{k};\mathbf{r})d^{3}\mathbf{k}, \tag{18}\]
where \(f_{n}(\mathbf{k})\) is the occupation number of the \(n\)-th band and \(\psi_{n}(\mathbf{k};\mathbf{r})\) are the Bloch functions defined in Eq. (2), the band energies \(\varepsilon_{n}(\mathbf{k})\), from which the occupations are determined, must be known for all \(\mathbf{k}\). Hence, the evaluation of the electron density (needed for subsequent SCF iterations) occurs after all the wave-function coefficients \(c_{n}^{\mu}(\mathbf{k})\) and energies are found. In the all-electron 4c framework, the coefficient matrices are too large to be stored in memory for all \(\mathbf{k}\), and storing the matrices on disk hampers the MPI parallelization due to significantly increased I/O communication. To this end, we partition
the electron density into two terms
\[\rho(\mathbf{r})=\rho_{\rm c}(\mathbf{r})+\rho_{\rm v}(\mathbf{r}), \tag{19}\]
where for \(\rho_{\rm c}\) we restrict the sum \(\sum_{n}\) in Eq. (18) to contain only those bands that are assumed to be fully filled (\(f_{n}(\mathbf{k})=1\)) during the entire SCF procedure. \(\rho_{\rm c}\) contains contributions to the electron density from the majority of bands - these terms are evaluated on-the-fly after the diagonalization independently for each \(\mathbf{k}\) point without the need to store the full coefficient matrices. The remaining terms in the density are incorporated in \(\rho_{\rm v}\) and calculated in a separate loop over \(\mathbf{k}\) after all \(\varepsilon_{n}(\mathbf{k})\) and \(f_{n}(\mathbf{k})\) are known, which allows for a small number of partially filled bands (\(f_{n}(\mathbf{k})\) can vary across \(\mathbf{k}\)). The coefficients \(c_{n}^{\mu}(\mathbf{k})\) needed for \(\rho_{\rm v}\) can be stored in memory as they form narrow rectangular matrices with only one dimension increasing with the system size. In addition, each MPI process only needs to keep the coefficients for the subset of \(\mathbf{k}\) points that the process handles, which is advantageous for multinode calculations within the distributed-memory architecture since the coefficients do not need to be shared or communicated among nodes. The parallel scheme described here enables seamless all-electron fully relativistic 4c calculations of band structures or density of states with a large number of \(\mathbf{k}\) points using thousands of cores.
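A schematic sketch of this two-term density-matrix accumulation is given below; the function, its arguments, and the serial loop over \(\mathbf{k}\) points are illustrative simplifications of the MPI/OpenMP scheme described above, not the actual implementation.

```python
# Illustrative sketch of the density partition of Eq. (19): fully filled bands
# are accumulated on the fly, frontier-band coefficients are kept for later.
import numpy as np

def accumulate_density(k_points, weights, solve_k, n_filled, n_frontier):
    D_core = 0.0
    frontier = []                                   # per-k (energies, coefficients)
    for w, k in zip(weights, k_points):
        eps, c = solve_k(k)                         # diagonalize H(k), S(k)
        c_filled = c[:, :n_filled]                  # bands with f_n(k) = 1 for every k
        D_core = D_core + w * (c_filled @ c_filled.conj().T)
        frontier.append((eps[n_filled:n_filled + n_frontier],
                         c[:, n_filled:n_filled + n_frontier]))
    return D_core, frontier     # frontier contributes rho_v once f_n(k) is known
```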
### Evaluation of \(\mathbb{Z}_{2}\) invariant
The QSH phase of 2D materials with time-reversal symmetry is characterized by a nontrivial topological order associated with a nonzero \(\mathbb{Z}_{2}\) index [131]. Identification of topological materials from numerical simulations is not straightforward [132]. However, for systems with inversion symmetry, the calculation of the \(\mathbb{Z}_{2}\) invariant can be simplified using Fu and Kane's method [118] based on the knowledge of the parity of the Bloch wave functions of the bulk crystal at the time-reversal invariant momenta (TRIM) of the Brillouin zone. In this section, we describe how the \(\mathbb{Z}_{2}\) invariant is calculated in the 4c real-space GTO basis.
First, we discuss the construction of the momentum-space parity operator \(P_{\mu\mu^{\prime}}(\mathbf{k})\). Let \(\mathcal{I}\) denote the space-inversion operator with the inversion center at \(\mathbf{G}\), defined by its action on a scalar function \(f\) as \(\mathcal{I}f(\mathbf{r})\equiv f(2\mathbf{G}-\mathbf{r})\). From the requirement that the parity-transformed bispinor wave function \(\psi(\mathbf{r},t)\) satisfies the same Dirac equation, it is possible to identify the form of the 4c parity operator \(\hat{P}\) as
\[\hat{P}\psi(\mathbf{r},t)=\eta\beta\psi(2\mathbf{G}-\mathbf{r},t)\equiv\eta\beta\mathcal{I }\psi(\mathbf{r},t), \tag{20}\]
where \(\beta=\text{diag}(\mathbb{I}_{2},-\mathbb{I}_{2})\) and \(\eta\) is an arbitrary phase factor (\(|\eta|=1\)) that can be introduced by the parity transformation. Neglecting the phase gives \(\hat{P}=\beta\mathcal{I}\). The 4c matrix form of the parity operator is obtained by realizing that \(\mathcal{I}\mathbf{p}\mathcal{I}^{\dagger}=-\mathbf{p}\) and letting \(\hat{P}\) act on the RKB basis in Eq. (6), _i.e._
\[\hat{P}\chi_{\mu}\equiv\hat{P}\begin{pmatrix}\mathbb{I}_{2}&0_{2}\\ 0_{2}&\frac{1}{2c}\mathbf{\sigma}\cdot\mathbf{p}\end{pmatrix}g_{\mu}=\begin{pmatrix} \mathbb{I}_{2}&0_{2}\\ 0_{2}&\frac{1}{2c}\mathbf{\sigma}\cdot\mathbf{p}\end{pmatrix}\mathcal{I}g_{\mu}. \tag{21}\]
It follows that the parity matrix \(P_{\mu\mu^{\prime}}(\mathbf{k})\) can be obtained using Eq. (4b) for the overlap matrix with an additional application of the inversion \(\mathcal{I}\) on the scalar basis function \(g_{\mu^{\prime}}\). For an orbital from Eq. (7) with angular momentum number \(l\) that is centered at \(\mathbf{A}\) (which here denotes the position of an atom \(\mathbf{A}_{\mu}\) in a unit cell \(\mathbf{R}\)), this inversion gives
\[\mathcal{I}g_{\mu}(\mathbf{r}-\mathbf{A})=(-1)^{l}g_{\mu}\left(\mathbf{r}+\mathbf{A}-2\mathbf{G} \right). \tag{22}\]
Finally, the \(\mathbb{Z}_{2}\) invariant is calculated using the eigenvalues \(\xi_{2m}\) of the parity \(P_{\mu\mu^{\prime}}(\mathbf{k})\) as [118]
\[(-1)^{\nu}=\prod_{i=1}^{4}\prod_{m=1}^{N}\xi_{2m}(\Gamma_{i}), \tag{23}\]
where \(N\) is the number of occupied Kramers pairs, and \(\Gamma_{i}\) labels the four TRIMs in 2D.
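As an illustration of Eq. (23), the short sketch below (a hypothetical helper with made-up example eigenvalues) evaluates the \(\mathbb{Z}_{2}\) index from the parity eigenvalues of the occupied Kramers pairs at the four TRIMs.

```python
# Sketch: Z2 index from the parity eigenvalues xi_{2m} at the four 2D TRIMs, Eq. (23).
import numpy as np

def z2_invariant(parity_eigenvalues):
    """parity_eigenvalues: four 1D arrays (one per TRIM), each holding the +/-1
    parity eigenvalue of every occupied Kramers pair at that TRIM."""
    sign = 1
    for xi in parity_eigenvalues:
        sign *= int(round(np.prod(xi)))
    return 0 if sign == 1 else 1                 # (-1)^nu equals the total product

# Made-up example: one TRIM contributes an odd number of odd-parity pairs.
xi = [np.array([ 1,  1, -1, -1]),   # Gamma
      np.array([ 1, -1,  1, -1]),   # X
      np.array([ 1,  1,  1,  1]),   # Y
      np.array([ 1,  1,  1, -1])]   # M
print(z2_invariant(xi))             # -> 1, i.e., a topologically nontrivial (QSH) phase
```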
We conclude this section by noting a few considerations required for a numerically stable and reliable calculation of the parity matrix and its eigenvalues when using the method outlined here. First, the sparse matrix storage scheme outlined in Section IV.1 based on the locality of basis function products \(\chi_{\mu}^{\dagger}(\mathbf{r})\chi_{\mu^{\prime}}(\mathbf{r}-\mathbf{R})\) needs to be modified to respect the nonlocal nature of the inversion operator. Specifically, the list of significant elements of the inversion matrix is not the same as for the overlap matrix, but is related to it via reflection. Likewise, it must be ensured that the atomic basis set chosen for the inversion-equivalent atoms is the same. Finally, it is not automatically guaranteed that the Bloch functions in Eq. (2) that are solutions of the DKS equation are also eigenfunctions of the parity operator for Bloch states with degenerate energies. Parity eigenvalues can be obtained by diagonalizing the sub-blocks of the parity matrix that correspond to these degenerate energy levels. However, in numerical simulations, energy values obtained from diagonalization routines are not necessarily exactly degenerate, but rather appear as near-degenerate. At the same time, eigenvectors corresponding to these near-degenerate levels are not uniquely defined. Hence, we introduced an energy threshold that identifies the degeneracies.
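A hypothetical sketch of the last step is shown below; the function name, the energy-threshold grouping, and the generalized-eigenvalue treatment of the non-orthogonal GTO basis are our assumptions rather than the procedure implemented in ReSpect.

```python
# Sketch: parity eigenvalues of (near-)degenerate Bloch states at a TRIM, obtained
# by diagonalizing the parity matrix within each degenerate subspace.
import numpy as np

def parity_eigenvalues(energies, coeffs, P, S, tol=1e-6):
    """energies: sorted band energies at a TRIM; coeffs: corresponding eigenvector
    columns; P, S: parity and overlap matrices in the GTO basis; tol: energy
    threshold used to identify degeneracies."""
    xi, i = [], 0
    while i < len(energies):
        j = i + 1
        while j < len(energies) and energies[j] - energies[i] < tol:
            j += 1                                # group near-degenerate levels
        block = coeffs[:, i:j]
        p_blk = block.conj().T @ P @ block        # parity within the subspace
        s_blk = block.conj().T @ S @ block        # overlap within the subspace
        vals = np.linalg.eigvals(np.linalg.solve(s_blk, p_blk))
        xi.extend(np.sign(vals.real))             # parity eigenvalues are +/-1
        i = j
    return np.array(xi)
```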
## Data availability
The data presented in this study as well as the scripts used for pre- and post-processing of the input and output files are available in the ZENODO public repository [133].
## Code availability
The ReSpect code used in this study is available at www.respectprogram.org upon a reasonable request free of charge.
###### Acknowledgements.
This work was supported by the Research Council of Norway through its Centres of Excellence scheme (Grant No. 262695), a Research Grant (Grant No. 315822), and its Mobility Grant scheme (Grant No. 301864), as well as the use of computational resources provided by UNINETT Sigma2 - The National Infrastructure for High Performance Computing and Data Storage in Norway (Grant No. NN4654K). In addition, this project received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Grant Agreement No. 945478 (SASPRO2), and the Slovak Research and Development Agency (Grant No. APVV-21-0497). The work at Northeastern University was supported by the Air Force Office of Scientific Research under award number FA9550-20-1-0322 and benefited from the computational resources of Northeastern University's Advanced Scientific Computation Center (ASCC) and the Discovery Cluster.
|
2308.16678 | Dynamic nsNet2: Efficient Deep Noise Suppression with Early Exiting | Although deep learning has made strides in the field of deep noise
suppression, leveraging deep architectures on resource-constrained devices
still proved challenging. Therefore, we present an early-exiting model based on
nsNet2 that provides several levels of accuracy and resource savings by halting
computations at different stages. Moreover, we adapt the original architecture
by splitting the information flow to take into account the injected dynamism.
We show the trade-offs between performance and computational complexity based
on established metrics. | Riccardo Miccini, Alaa Zniber, Clément Laroche, Tobias Piechowiak, Martin Schoeberl, Luca Pezzarossa, Ouassim Karrakchou, Jens Sparsø, Mounir Ghogho | 2023-08-31T12:29:24Z | http://arxiv.org/abs/2308.16678v1 | # Dynamic NSNET2: Efficient Deep Noise Suppression with Early Exiting
###### Abstract
Although deep learning has made strides in the field of deep noise suppression, leveraging deep architectures on resource-constrained devices has proven challenging. Therefore, we present an early-exiting model based on nsNet2 that provides several levels of accuracy and resource savings by halting computations at different stages. Moreover, we adapt the original architecture by splitting the information flow to take into account the injected dynamism. We show the trade-offs between performance and computational complexity based on established metrics.
Riccardo Miccini\({}^{\star\dagger}\) Alaa Zniber\({}^{\S}\) Clément Laroche\({}^{\star}\) Tobias Piechowiak\({}^{\star}\) Martin Schoeberl\({}^{\dagger}\) Luca Pezzarossa\({}^{\dagger}\) Ouassim Karrakchou\({}^{\S}\) Jens Sparsø\({}^{\dagger}\) Mounir Ghogho\({}^{\S}\) \({}^{\star}\) GN Audio \({}^{\dagger}\) Technical University of Denmark \({}^{\S}\) International University of Rabat
Footnote †: This research has received funding from the European Union’s Horizon research and innovation programme under grant agreement No 101070374.
Deep Noise Suppression, Dynamic Neural Networks, Early-exiting
## 1 Introduction
In recent years, audio products such as earbuds, headsets, and hearing aids have become increasingly popular and are driving the demand for real-time solutions capable of improving speech quality. Noise suppression techniques based on deep learning -- also referred to as Deep Noise Suppression (DNS) -- are superseding conventional speech enhancement techniques based on digital signal processing. This is thanks to their ability to effectively reduce non-stationary noise and deal with diverse background sounds. Several DNS models, in particular those capable of real-time causal inference, rely on Recurrent Neural Networks (RNNs) [1, 2, 3]. Such approaches usually involve feeding a frequency-domain signal -- in the form of Short-time Fourier Transform (STFT) or Mel spectrograms -- one frame at a time, into an encoding layer and one or more recurrent layers such as Gated Recurrent Units (GRU), thereby providing temporal sequence modelling abilities. This internal representation is then decoded to form a time-varying filter, which is applied to the noisy input to obtain a clean speech estimate.
Unfortunately, such architectures are not particularly suited for resource-constrained devices due to the large amount of computation required by the recurrent units. On that note, considerable effort has been put into optimising and deploying recurrent models on embedded hardware [4, 5] with techniques such as quantization. Nevertheless, one fundamental aspect remains unaddressed: the model requires the same amount of computation irrespective of energy/power requirements, the initial speech quality of the input signal, and the desired output speech quality.
To address this issue, Dynamic Neural Networks (DyNNs), also known as conditional neural networks, are one path to undergo [6]. DyNNs can adaptively adjust their parameters as well as their computational graph, based on the input they receive. This dynamism grants the network more flexibility and efficiency in handling various resource budgets, real-time requirements, and device capacities while maintaining a good performance trade-off. Amongst the most promising techniques for DyNNs that appear suitable for addressing limited hardware resources, we find _early exiting_[7, 8, 9]. Early exiting was introduced in the context of image classification [10, 11]. It appends the architecture with decision blocks (e.g., internal classifiers) that decide when to halt the computations based on the output of each exit stage (e.g., image class probability). This paradigm helps avoid using the full-fledged network, but various challenging aspects emerge from these new types of architectures. For instance, the absence of feature sharing between internal decision blocks can lead to computational waste [12, 13]. Another challenge lies in the optimal placement of the decision blocks in the architecture based on either performance gain or loss minimisation [14, 15]. Moreover, convergence is also subject to instability due to the accumulation of gradients coming from the different decision blocks, which can be mitigated by gradient re-scaling [16] or a mixture of training strategies [17].
In this work, we introduce a novel application of DyNNs to noise suppression. We adapt nsNet2 [1], a popular architecture for noise suppression, into a model capable of early exiting given user-predefined constraints. We then demonstrate that the resulting architecture achieves a monotonic increase in its denoising capabilities with each consecutive exit stage. Our architecture allows the user to choose the denoising/computational cost trade-off that best suits their needs.
Due to the additional challenges associated with unintrusively modelling speech quality in real time, automatic exiting is beyond the scope of this work. In this paper:
* We convert nsNet2 into an early-exiting model and explore several architectural adaptations aimed at maintaining its denoising capabilities at each stage;
* We investigate the impact of different early exit training strategies (layer-wise or joint) on the denoising performance of the models;
* We evaluate the speech quality and computational efficiency of our models for each exit stage and show that our architectural adaptations decrease computational cost without degrading baseline performances.
## 2 Deep Noise Suppression
We assume that the observed signal \(X(k,n)\) -- defined in the STFT-domain where \(k\) and \(n\) are the frequency and time frame indices, respectively -- is modelled as an additive mixture of the desired speech \(S(k,n)\) and interfering noise \(N(k,n)\), expressed as:
\[X(k,n)=S(k,n)+N(k,n) \tag{1}\]
To obtain the clean speech estimate \(\widehat{S}(k,n)\), we compute a real-valued suppression gain spectral mask \(\widehat{M}\) such that:
\[\widehat{S}(k,n)=X(k,n)\cdot\widehat{M}(k,n) \tag{2}\]
This can be formulated as a supervised learning task, where a mask \(\widehat{M}(k,n)\) is learned to minimise the distance between \(S\) and \(\widehat{S}\).
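A minimal sketch of this masking step (assuming complex STFT tensors in PyTorch; not the authors' code) is given below, together with the log-power features that nsNet2 consumes.

```python
# Eqs. (1)-(2) in code: the network predicts a real-valued gain mask in [0, 1]
# that is applied element-wise to the noisy STFT.
import torch

def apply_mask(noisy_stft: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """noisy_stft: complex tensor [batch, freq, time]; mask: real-valued gains of
    the same shape. Returns the clean-speech estimate S_hat."""
    return noisy_stft * mask

def log_power(noisy_stft: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Log-power spectrogram log(|X|^2 + eps) used as the model input."""
    return torch.log(noisy_stft.abs() ** 2 + eps)
```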
### Dynamic Architecture
As our baseline, we use nsNet2 [1], an established DNS model with the following architecture comprising recurrent and fully-connected (FC) layers: FC-GRU-GRU-FC-FC-FC. Each fully-connected layer is followed by a ReLU activation, except for the last one, which features a sigmoid nonlinearity. The model operates on real-valued log-power spectrograms, computed as \(\log(|X|^{2}+\varepsilon)\), for a small \(\varepsilon\).
We introduce exit stages after each layer, as shown in Fig. 1. Each exit outputs a mask \(\widehat{M}_{i}\), which is a subset of that layer's activations with size given by the number of frequency bins, to be applied to the noisy input (chosen, in our case, as the first 257 features). Since the suppression gains are bound between 0 and 1, we use the sigmoid function to clamp our FC activations -- i.e., the outputs of layers FC\({}_{1}\), FC\({}_{2}\), FC\({}_{3}\) -- similarly to how the baseline model implements its last activation (i.e., the output of layer FC\({}_{4}\)). For the GRU layer activations, which are inherently bound between \(-1\) and \(1\) due to the final \(\tanh\) activation, we employ the simple scaling function \(0.5\cdot(1+\text{GRU}_{i}(x))\). Note that these extra steps are only performed when there is a need to extract the mask at the early stage, otherwise, we use the activations mentioned earlier. This results in an architecture able to recover the signal with up to 6 different denoising abilities.
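The sketch below is our reading of this description (layer sizes follow Section 4.3); it is not the authors' released code, and the exact placement of the exit nonlinearity relative to each layer's own activation is an assumption.

```python
# Simplified early-exiting nsNet2: each stage can emit a mask, either by passing the
# first 257 features through a sigmoid (FC layers) or by rescaling the tanh-bounded
# GRU outputs with 0.5 * (1 + h).
import torch
import torch.nn as nn

N_BINS = 257

class EarlyExitNSNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(N_BINS, 400)
        self.gru1 = nn.GRU(400, 400, batch_first=True)
        self.gru2 = nn.GRU(400, 400, batch_first=True)
        self.fc2 = nn.Linear(400, 600)
        self.fc3 = nn.Linear(600, 600)
        self.fc4 = nn.Linear(600, N_BINS)

    def forward(self, x, exit_stage=5):
        """x: log-power features [batch, time, 257]; returns a gain mask M_i."""
        z = self.fc1(x)
        if exit_stage == 0:
            return torch.sigmoid(z[..., :N_BINS])        # M0: first 257 activations
        h, _ = self.gru1(torch.relu(z))
        if exit_stage == 1:
            return 0.5 * (1.0 + h[..., :N_BINS])         # M1: rescaled tanh output
        h, _ = self.gru2(h)
        if exit_stage == 2:
            return 0.5 * (1.0 + h[..., :N_BINS])         # M2
        z = self.fc2(h)
        if exit_stage == 3:
            return torch.sigmoid(z[..., :N_BINS])        # M3
        z = self.fc3(torch.relu(z))
        if exit_stage == 4:
            return torch.sigmoid(z[..., :N_BINS])        # M4
        return torch.sigmoid(self.fc4(torch.relu(z)))    # M5: full model
```

Requesting `exit_stage=i` stops the forward pass after stage \(i\), so the later layers are never evaluated.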
Given the significant differences between the masks and the model's internal representation, forcing our model to derive the former at each layer may degrade its performance. We address this by reducing the number of intermediate exit stages to promote the emergence of richer internal feature representations in non-exiting layers. Thus, we introduced a version of our dynamic model with only the 4 exits marked with \(\widehat{\mathbf{M}_{0}}\), \(\widehat{\mathbf{M}_{1}}\), \(\widehat{\mathbf{M}_{3}}\), \(\widehat{\mathbf{M}_{5}}\) in Fig. 1. We decided to remove later-occurring exits because, during our experimental observations, they were more prone to performance degradation.
### Split Layers
As mentioned earlier, when dealing with early exiting, a degradation in performance in the deepest exit stages is often observed when comparing the model against its non-dynamic analogue. This is due to the emergence of _task-specialized_ features that prevent useful information from flowing further [6]. Therefore, we attempt to mitigate the issue by introducing additional data paths in the form of duplicate layers. These act as ancillary feature extractors, tasked with deriving an increasingly refined internal representation that proves useful for the downstream layers. This configuration also avoids subsetting the layers' activations to yield a mask.
Figure 1: nsNet2 architecture with exit stages (dotted lines show an example of full inference path)

Figure 2: Different styles of split layer adaptations

As shown in Fig. 2, two alternative split-layer topologies have been formulated. In both cases, the layers \(\Phi_{i}\) generate mask estimates, while layers \(\Phi_{i}^{*}\) propagate features. The main difference between the two variants is whether a given \(\Phi_{i}^{*}\) layer receives only the output from \(\Phi_{i-1}^{*}\) or the concatenated output from both previous layers, i.e., \(concat(\Phi_{i-1},\Phi_{i-1}^{*})\). The former variant assumes that previous masks do not contain features that are useful for the model's internal representation. Conversely, the latter lifts this assumption at the expense of computational complexity, which is increased as a result of the larger input size for \(\Phi_{i}^{*}\) layers.
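The sketch below illustrates one such stage; the exact wiring of the two branches and the choice of nonlinearities are our assumptions based on Fig. 2 rather than the authors' code.

```python
# One split-layer stage: `phi` produces the stage's mask, `phi_star` refines the
# features passed downstream; in the concatenated variant, phi_star additionally
# receives the previous stage's mask.
import torch
import torch.nn as nn

class SplitStage(nn.Module):
    def __init__(self, feat_dim=128, n_bins=257, concat=False):
        super().__init__()
        self.concat = concat
        self.phi = nn.Linear(feat_dim, n_bins)                            # mask branch Phi_i
        self.phi_star = nn.Linear(feat_dim + (n_bins if concat else 0), feat_dim)

    def forward(self, feat_prev, mask_prev):
        mask_i = torch.sigmoid(self.phi(feat_prev))                       # exit mask M_i
        star_in = torch.cat([mask_prev, feat_prev], dim=-1) if self.concat else feat_prev
        feat_i = torch.relu(self.phi_star(star_in))                       # features for stage i+1
        return mask_i, feat_i
```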
## 3 Training
Our model is trained on the loss function shown in Eq. (3) that was initially proposed in [18] and subsequently adapted in [1]. The loss comprises two terms: the first term computes the mean-squared error between the clean and estimated complex spectra, whereas the second term corresponds to the mean-squared error of the magnitude spectra. The two terms are weighted by a factor of \(\alpha\) and \((1-\alpha)\), respectively, with \(\alpha=0.3\). The spectra are power-law compressed with \(c=0.3\):
\[\begin{split}\mathcal{L}\left(S,\widehat{S}\right)&=\alpha\sum_{k,n}\left||S|^{c}e^{j\angle S}-|\widehat{S}|^{c}e^{j\angle\widehat{S}}\right|^{2}+\\ &\quad(1-\alpha)\sum_{k,n}\left||S|^{c}-|\widehat{S}|^{c}\right|^{2}\end{split} \tag{3}\]
To avoid the impact of large signals dominating the loss and creating unbalanced training in a batch of several samples, we normalise \(S\) and \(\widehat{S}\) by the standard deviation of the target signal \(\sigma_{S}\) before computing the loss, as per [1].
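A possible implementation of this loss (a sketch with assumed tensor shapes; using the magnitude spectrum's standard deviation as a stand-in for \(\sigma_{S}\) is an approximation on our part) could look as follows.

```python
# Sketch of Eq. (3): MSE between power-law-compressed complex spectra plus MSE
# between the compressed magnitudes, after normalising by the target's std.
import torch

def compressed_spectral_loss(S, S_hat, alpha=0.3, c=0.3, eps=1e-12):
    """S, S_hat: complex STFT tensors [batch, freq, time] of clean and estimated speech."""
    sigma = S.abs().std(dim=(-2, -1), keepdim=True) + eps   # stand-in for sigma_S
    S, S_hat = S / sigma, S_hat / sigma

    # power-law compression of the magnitudes; phases are kept unchanged
    S_c = (S.abs() + eps) ** c * torch.exp(1j * S.angle())
    S_hat_c = (S_hat.abs() + eps) ** c * torch.exp(1j * S_hat.angle())

    complex_term = (S_c - S_hat_c).abs().pow(2).sum(dim=(-2, -1))
    magnitude_term = (S_c.abs() - S_hat_c.abs()).pow(2).sum(dim=(-2, -1))
    return (alpha * complex_term + (1 - alpha) * magnitude_term).mean()
```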
In general, training early-exit models can fall into two categories [7]: _layer-wise training_ and _joint training_. Since each has its advantages and drawbacks, we have adopted both strategies to challenge them against each other.
### Layer-wise Training
Layer-wise training is a straightforward way to train early exiting models with or without pre-trained backbones (i.e., non-dynamic architectures). The idea is to train the first sub-model, from input \(X\) and target \(S\) to the first exit stage that outputs an estimate \(\widehat{S}_{0}=X\odot\widehat{M}_{0}\). Once the training reaches an optimum, the sub-model's weights are frozen. The subsequent sub-models are afterwards trained iteratively taking as inputs their previous sub-model's last feature vector. This strategy is helpful for mitigating vanishing gradients as it allows the training of smaller parts of a bigger network. However, its main drawback is its shortsightedness as early freezing might degrade later feature representations, and thus, impede the model's expressivity.
### Joint Training
Joint training inherits from multi-objective optimisation the idea of minimising a weighted sum of the competing objective functions at play (a practice known as _linear scalarisation_). For each sub-model, we attribute a loss function \(\mathcal{L}_{i}\) as defined in Eq. (3). Thus, for \(N\) exit stages (i.e., \(N\) sub-models), the total loss can be written as a linear combination of \(\mathcal{L}_{i}\):
\[\mathcal{L}_{tot}=\sum_{i=0}^{N-1}\alpha_{i}\mathcal{L}_{i}\left(S,X\odot \widehat{M}_{i}\right) \tag{4}\]
where \(\alpha_{i}\) are weighting factors applied to each respective sub-model loss. Since we do not want to prioritise any specific exit stage, we set each \(\alpha_{i}\) to 1.
By definition, joint training establishes information sharing between the different exit stages. The model is optimised in a multiplayer-game setting since each exit (player) strives to minimise its loss along with the best internal representations for its task. Nonetheless, the loss complexity imposed on the model leads to an accumulation of gradients from the different sub-models which may result in unstable convergence [16].
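In code, the scalarised objective of Eq. (4) amounts to summing the per-exit losses obtained from a single forward pass; a minimal sketch (with the per-estimate loss passed in as a callable) is shown below.

```python
# Sketch of Eq. (4): weighted sum of the losses of all exit stages (alpha_i = 1 here).
def joint_loss(masks, X, S, loss_fn, alphas=None):
    """masks: list of per-exit gain masks from one forward pass; X, S: noisy and
    clean complex STFTs; loss_fn: a per-estimate loss such as Eq. (3)."""
    alphas = alphas if alphas is not None else [1.0] * len(masks)
    return sum(a * loss_fn(S, X * M) for a, M in zip(alphas, masks))
```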
## 4 Experimental Setup
Our input and target signals are 4 seconds long and pre-processed using an STFT with a window of \(512\) samples (\(32\) ms) and 50% overlap, resulting in \(257\) features per time frame. Baseline and joint trainings were allowed to run for up to \(400\) epochs whereas layer-wise training was capped at \(50\) epochs per exit stage for pre-trained models or \(50\cdot(i+1)\) epochs, where \(i\) is the exit stage, for the split-layer cases. We set the learning rate to \(10^{-4}\) and batch size to \(512\). To prevent overfitting, we implemented early stopping with patience of \(25\) epochs and decreased the learning rate by \(0.9\) every \(5\) epochs if there was no improvement.
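For reference, a possible front-end matching these settings is sketched below; the 16 kHz sampling rate is implied by a 512-sample window corresponding to 32 ms, while the Hann window is an assumption of ours.

```python
# STFT front-end: 512-sample (32 ms) window, 50% overlap, 257 frequency bins per frame.
import torch

SR, WIN, HOP = 16000, 512, 256

def stft_features(waveform: torch.Tensor) -> torch.Tensor:
    """waveform: [batch, samples] time-domain signal -> complex STFT [batch, 257, frames]."""
    return torch.stft(waveform, n_fft=WIN, hop_length=HOP, win_length=WIN,
                      window=torch.hann_window(WIN), return_complex=True)
```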
### Dataset
We trained and evaluated our models using data from the 2020 DNS Challenge [19]. This is a synthetic dataset, composed of several other datasets (clean speech, noises, room impulse responses) that are processed so as to introduce reverberation and background noises at different SNRs and target levels, thereby allowing for arbitrary amounts of noisy-clean training examples. Similarly, the challenge provides evaluation sets -- here, we use the synthetic non-reverberant test set.
### Evaluation Metrics
For this study, we are interested in assessing the performance of our models from two perspectives: speech quality and computational efficiency. To gauge the denoised speech quality, we utilise two commonly adopted metrics, namely PESQ
and DNSMOS [20]. For evaluating the computational efficiency of the models, we consider three measures: Floating Point Operations (FLOPs), Multiply-Accumulate Operations (MACs), and Inference Time.
FLOPs and MACs are computed using _DeepSpeed_'s built-in profiler1 and the TorchInfo library2, respectively. Inference time was computed on CPU for a single frame and averaged out over \(1000\) samples. In all cases, we normalise the metrics to 1 second of input data, corresponding to \(63\) STFT frames.
Footnote 1: [https://github.com/microsoft/DeepSpeed](https://github.com/microsoft/DeepSpeed)
Footnote 2: [https://github.com/TylerYep/torchinfo](https://github.com/TylerYep/torchinfo)
### Model Configurations
We experimented with several configurations, all derived from the baseline. These comprise the baseline itself, which has been replicated and trained accordingly, as well as combinations of the techniques and adaptations mentioned earlier, and can be subdivided into the following three dimensions:
* **Exit stages**: we trained both the 6-exit and 4-exit variants introduced in Section 2.1;
* **Split layers**: we trained both the straightforward early-exiting model described in Section 2.1, starting from a checkpoint of the pre-trained baseline, as well as both split-layer variants described in Section 2.2;
* **Training strategies**: we employ both the layer-wise and joint training schemes described in Section 3 to determine the most suitable strategy.
To isolate the impact of each adaptation, we fix our architectural hyperparameters to the following values: for baseline and simple pre-trained early-exiting models, we adhered to the original feature sizes of \(400\) units for FC\({}_{1}\), GRU\({}_{1}\), and GRU\({}_{2}\), \(600\) units for FC\({}_{2}\) and FC\({}_{3}\), and \(257\) units for FC\({}_{4}\) -- i.e., the number of frequency bins in the suppression mask. In split-layers models, the number of features in each \(\Phi_{i}\) must also match the size of the suppression mask. To avoid introducing any unfair advantage into our evaluation, we picked \(128\) as the size of \(\Phi_{i}^{*}\) layers, so that the overall number of propagated features per layer is less than in the baseline. This results in models with sizes described in Table 1.
## 5 Results
A full overview of the different configurations is shown in Table 2. Here, we observe a monotonic performance increase along the exit stages until asymptotically approaching the baseline, showing that deeper layers develop more expressive representations. Predictably, this trend is observed across all model variants and applies to both PESQ and DNSMOS scores, with the notable advantage of smaller model size in split-layer configurations.
Split-layer designs trained in a layer-wise fashion were demonstrably effective at reducing the performance gap with the baseline, despite using fewer trainable parameters and operations than the straightforward variants. Indeed, the features encoded by the auxiliary layers are beneficial to the denoising tasks, especially at later exit stages (see bold figures in Table 2) where we derive richer features. Layer concatenation (Fig. 2.b) presents the highest scores, confirming that the auxiliary pipeline benefits from mask-related features.
Overall, joint training accounts for the most impact on performance. As presented in Section 3.2, here we allow the model to take full advantage of the freedom given by all its parameters to find the best representations for all exit stages. Here we also notice that later exit stages benefit more from this training strategy, which could be addressed by using different \(\alpha_{i}\) in Eq. (4).
In Fig. 3, it is interesting to observe that PESQ scores exhibit heteroskedasticity with respect to the exit stage; this can be seen as a generalisation of the baseline behaviour, which shows the largest variance, hinting that very low-quality input data are harder to recover. Bizarrely enough, the opposite phenomenon is observed when considering DNSMOS, where values are more stable around the mean.
Figure 3: Boxplot of quality metrics at different exit stages.

Fig. 4 shows the different computational demands imposed by each layer, in terms of both operations and processing time. Expectedly, the GRU layers occupy the majority of the computational budget. Although splitting the layers into main and ancillary paths requires less computation than the baseline, we notice a slight increase in inference time. This could be caused by how PyTorch schedules computations across the layers, or by additional retrieval and copy operations. The GRU layers also take up the majority of the inference time, due to the sequential nature of their computation. This also causes the split-layer variants to be moderately slower, since they feature more GRU layers. However, exiting at a given stage \(i\) will spare the computational cost of the following layers as well as that of its respective ancillary layer \(\Phi_{i}^{*}\). Moreover, as mentioned earlier, the layers \(\Phi_{i}^{*}\) are of small dimensions by design, thus presenting negligible overhead (see Table 1).

| **Model class** | **Trainable params.** | **Size (FP32)** |
| --- | --- | --- |
| pretrain and baseline | 2.78 M | 11.13 MB |
| split\_layers | 1.62 M | 6.48 MB |
| concat\_layers | 1.88 M | 7.54 MB |

Table 1: Model complexity for different configurations
The incremental improvements provided by each processing block can be fully appreciated in Fig. 5. Most noticeably, stage 0 extracts a coarse spectral envelope of the noise, while later stages refine the contour of speech features such as formants, their harmonics, and high-frequency unvoiced components. When comparing against the baseline, a small but noticeable compression in dynamic range is also observed; this could be a symptom of the conflicting objectives that joint training aims to optimise -- indeed, models trained layer-wise exhibit more contrast.
Finally, in Fig. 6 we provide an overview of how each model stage improves the output PESQ over different input SNR ranges. Here, we notice that the model is most effective at higher input SNRs, and that each stage's contribution to the denoising task is almost equal for each SNR bracket.
## 6 Conclusion
In this work, we proposed an efficient and dynamic version of nsNet2, built upon early-exiting. The models presented herein provide a diverse range of performance/efficiency trade-offs, that could benefit embedded devices with computational and power constraints such as headphones or hearing aids. Our best-performing models can achieve 96% and 98% of baseline performance on PESQ and DNSMOS metrics, respectively, on the last exit stage. When considering the second exit stage, we are able to reach 77% and 90% of baseline performances with 62% savings in multiply-accumulate operations.
Our future work will advance the current proposed model by automatically selecting the optimal exit stage based on the properties of the current input signal as well as a non-intrusive quality assessment module.
|
2309.10120 | How Tycho Brahe's recordings in 1572 support SN 1572 as a type I(a)
supernova | The 450th anniversary of the discovery of the SN 1572 supernova event was
celebrated in 2022. A closer look at the historical development of the field of
supernova astronomy reveals the scientific importance of Tycho Brahe's 1572
observations of this "new star". In their quest to learn more about the new
type of stellar explosion and subsequent evolution, the initial protagonists in
this field (Baader and Zwicky among others) gradually turned their attention to
the final remnant state of these supernova events. Since the remnant object
thought to be associated with the extragalactic supernova event was found to be
very dim, the focus quickly shifted toward nearby galactic events. It is at
this point where Tycho Brahe's observations played an important and often
overlooked role in the context of the development of stellar evolution as a
scientific field. Tycho Brahe's meticulous and detailed recordings of the
change in brightness of the new star not only allowed modern astronomers to
classify SN 1572 as a supernova event but also helped them pinpoint the exact
astrometric location of SN 1572. These findings helped to empirically link
extragalactic supernova events to nearby past supernova remnants in the Milky
Way. This enabled subsequent observations allowing further characterization.
Transforming the historical recordings to a standardized photometric system
also allowed the classification of SN 1572 as a type I supernova event. | Tobias Cornelius Hinse, Bertil F. Dorch, Lars V. T. Occhionero, Jakob P. Holck | 2023-09-18T19:47:47Z | http://arxiv.org/abs/2309.10120v1 | # How Tycho Brahe's recordings in 1572 support SN 1572 as a type I(a) supernova
###### Abstract
The 450th anniversary of the discovery of the SN 1572 supernova event was celebrated in 2022. A closer look at the historical development of the field of supernova astronomy reveals the scientific importance of Tycho Brahe's 1572 observations of this "new star." In their quest to learn more about the new type of stellar explosion and subsequent evolution, the initial protagonists in this field (Baade and Zwicky among others) gradually turned their attention to the final remnant state of these supernova events. Since the remnant object thought to be associated with the extragalactic supernova event was found to be very dim, the focus quickly shifted toward nearby galactic events. It is at this point where Tycho Brahe's observations played an important and often overlooked role in the context of the development of stellar evolution as a scientific field. Tycho Brahe's meticulous and detailed recordings of the change in brightness of the new star not only allowed modern astronomers to classify SN 1572 as a supernova event but also helped them pinpoint the exact astrometric location of SN 1572. These findings helped to empirically link extragalactic supernova events to nearby past supernova remnants in the Milky Way. This enabled subsequent observations allowing further characterization. Transforming the historical recordings to a standardized photometric system also allowed the classification of SN 1572 as a type I supernova event.
## 1 Introduction
The year 2022 marks the 450th anniversary of the discovery of a guest, or new star, mysteriously appearing suddenly on the northern hemisphere within the constellation of Cassiopeia at a position that was known to be devoid of any known fixed star.
The first sightings of the new star were recorded by an abbot in Messina on the island of Sicily (Italy) (Stephenson and Green, 2002) and by W. Schuler in Wittenberg (Germany) (Baade, 1945) on the morning of 6 November 1572 (Julian calendar; see below for details about the difference between the Julian and Gregorian calendars).
The discovery of the new star is credited to Tycho Brahe (born 14 December 1546, died 24 October 1601) who witnessed the new appearance of a bright star on the evening of 11
November 1572 (Julian date). The difference of 5 days between the first sightings/records by observers in Italy/Germany and Tycho's observation is likely explained by bad weather in Denmark/Skane (southern part of today's Sweden). No recordings of Tycho's health or wellbeing in the days prior to 11 November exist. Tycho's attribution or acknowledgment for the discovery of a new star most likely rests on the merit of him publishing his recordings in the important 1573 publication "_De Nova Stella_" (Brahe, 1573). For scientists and philosophers of the early renaissance, this seminal publication is at the foundation of the later historical development of astronomy and the history of natural sciences. In "_De Nova Stella_", Tycho Brahe's discovery not only documented a change in the heavens beyond the orbit of the Moon but also marks the beginning of refuting Aristotle's idea (expressed, e.g., in Aristotle's chief cosmological treatise "_De Caelo_") of the "unchanging heavens" and, therefore, was part of the early movement toward a shift in the then prevailing scientific paradigm.
The importance and subsequent dissemination of "_De Nova Stella_" eventually catapulted the Kingdom of Denmark, and Tycho Brahe himself, onto the international arena of contemporary frontier science and provided the initial financial seed for Tycho Brahe to pursue a life-long passion for ground-breaking astronomical research and instrumentation.
Today, we know that the return on investment was immense and sparked the beginning of the important Age of Enlightenment (the "Great Age of Reasoning") in Western culture and European societies in general (Thoren, 2002).
For reasons of accuracy in dates given in this review, it may be interesting to note that the Gregorian calendar was introduced in 1582 and implemented in Denmark and Sweden on the 1st of March 1700, far later than 1572. At Tycho Brahe's time, the Julian calendar was used. Currently, the Julian calendar is 13 days behind the Gregorian calendar; however, in 1572, it would have been only 10 days behind. Therefore, Tycho's first recording of SN 1572, in the extrapolated Gregorian calendar, was on 21 November 1572.
Tycho Brahe's contribution was a systematic recording in time of the astrometric position1 of the new star in an attempt to measure a daily (diurnal) parallax effect as a result of Earth's rotational motion. The result of these measurements would allow him to judge on the distance of the new star in relation to the distance to the Moon.
Footnote 1: Two types of quantitative measurements were performed by Brahe: the position of the new star relative to three known (fixed) stars in the constellation of Cassiopeia and its position relative to the ecliptic coordinate system.
An interesting question to raise is what instrument did Brahe utilize when performing astrometric measurements of the new star? According to the work of Stephenson and Green (2002), the instrument depicted in Figure 1 was _used to measure the distance of the SN of AD 1572 from nearby stars_. No reference for this statement is offered. We have reasons to believe that this statement is not correct.
According to the work of Pedersen (1980) [pointing to the work of Thoren (1973)], two types of sighting instruments, in whose construction Brahe was involved in one way or another, were utilized to measure the position of the new star. The first instrument was a duplicate, but enlarged, re-design of his first half-sextant (5 feet in length with an arc of 30\({}^{\circ}\)), which he had made use of in earlier times and which is shown in Figure 1. According to the work of Thoren (1973), this instrument was _not_ used by Tycho Brahe when observing the new star in November 1572. We quote from the work of Thoren (1973)
_When Tycho returned to Denmark in 1570, he left his half-sextant_ [Figure 1] _behind in Augsburg as a gift. Accordingly, when he reached Skane [aka. Scania, the southern province of today's Sweden] he set about duplicating the instrument, departing from the original pattern only to the extent of making the arc twice as large._
Thoren (1973) provided a reference to the work of Brahe and Dreyer (1913, Tomus V, p. 84-87), translated to English under the title _Tycho Brahe's discussion of his instruments and scientific work._
We, therefore, learn that Tycho Brahe embarked on improving his instrument (see Figure 1) in 1570 when returning to Denmark (Knutstorp Manor, Scania) after visiting contemporary fellow astronomers near the city of Augsburg, Germany. In the work of Brahe and Kepler (1602, p. 339), Tycho Brahe recollected how he sat by the accurately tuned instrument, waiting to observe the new star. The instrument mentioned in the book is the one in Figure 2. This is, therefore, probably the instrument Tycho Brahe used to observe Stella Nova. The instrument is indeed an improved version of the older half-sextant, now mounted on a plinth by a window and with what seems to be more accurate measuring systems. This instrument is also depicted in the work of Brahe and Dreyer (1913), Tomus V, which Thoren (1973) used as reference. The motivation to improve his existing half-sextant might have its origin in Brahe's active involvement in the 1569 construction of the so-called "Augsburg-Quadrant" (see Figure 3), allowing him to improve his craftsmanship in designing a new instrument and to test ideas for improvements at an early age in his life. According to the work of Pedersen (1980), the Augsburg-Quadrant was also used to measure the celestial position of the new star of 1572.
We, therefore, point out that Stephenson and Green (2002) likely are in error when referring to Figure 1 as the instrument used for astrometric measurements of the new star. The actual instrument used is probably the one depicted in Figure 2.
From Tycho Brahe's own measurement, the resulting non-detection of a diurnal parallax allowed him to conclude that the new star must be beyond the distance of the Moon. This measurement and a non-detection of any motion of the new star relative to the fixed stars (the five known classic planets did change position in the sky) allowed him to conclude that the new star must also be beyond the realm of the known planets and, therefore, belong to the realm of the fixed stars. This chain of argument and conclusion is worth paying attention to and marks a turning point in the history of science. In Tycho's own words in his _De Novu Stella_ [Brahe (1573)] [from a translation taken from the work of Stephenson and Green (2002)],
Figure 2: An improved version of Brahe’s half-sextant, target, mounted on a plinth and with finer measurement scales. The instrument probably used by Tycho Brahe for his astrometric measurements of Stella Nova is shown. Source: Brahe and Kepler (1602).
_That it is neither in the orbit of Saturn, however, nor in that of Jupiter, nor in that of Mars, nor in that of any one of the other planets, is hence evident, since after the lapse of several months it has not advanced by its own motion a single minute from that place in which I first saw it... Hence this new star is located neither in the [elemental region2], below the Moon, nor in the orbits of the seven wandering stars, but in the eighth sphere, among the other fixed stars._
Footnote 2: This passage is taken from the new translation in the work of Brahe (2022).
For clarity of context, it is important to remember that at that time, the dominant belief system was a paradigm based on the Aristotelian/Ptolemaic system, in which the realm of the stars (beyond the Moon and the well-known five classic planets) was never changing in position, static in relation to the other stars, constant in appearance, and displayed no change in color or brightness in the past, present, or all eternity. At least this is what classic philosophers observed. Indisputably, the beginning of the end of this viewpoint, which was held clear in the minds of many for over two millennia, is marked by Tycho Brahe's detailed quantitative (astrometry) as well as qualitative (brightness and color) recordings of the new star, first noted by Tycho Brahe himself on the evening of 11 November 1572 (Julian calendar). Figure 4 shows the drawing of the new star by Tycho Brahe in relation to several other stars in the constellation of Cassiopeia.
In light of the enormous impact of Brahe's contribution to the development of modern astronomy3, it seems a daunting task to account for all the details of that development from a scientific history point of view, let alone all biographic details of T. Brahe's life. We will not dwell too much on various aspects of Tycho Brahe's scientific legacy. This would be beyond the scope of this review paper. We instead refer to both historic as well as modern and recently published monographs on the life and work of Tycho Brahe (Christianson, 2000; Thoren, 2002). However, with respect to one important point, we would like to emphasize the following, which sets the framework of this review: Tycho Brahe--even in modern times--is best (if not only) known for his long-term, accurate, and precise (1 arc minute or less precision) measurements and recordings of the heavenly positions of stars and planets. His recordings of the new star in 1572, however--even at his own time--lent him less importance and fame in his subsequent contemporary track-record of scientific achievements. Apparently, not many fellow astronomers paid attention to his 1573 monograph. Only in the context of the development of scientific history and, especially, in the subsequent development of a more modern theory of the motion of the planets, did the significance of Brahe's work gain momentum, importance, and the proper recognition that Brahe is bestowed with today in the 'hall of fame' of important astronomers.
Footnote 3: According to the work of Pedersen (1980), the modern design of a rotating dome enclosure is an invention by Brahe, introduced first for the construction of the "Stjerneborg" observatory located just south of Uraniborg Castle on the island of Hven.
This review paper aims to cover two tasks. The first task is an attempt to provide a review of literature (see Appendix) that aimed at presenting historical data/recordings by Tycho Brahe and contemporary scientists that allowed a modern-day quantitative reconstruction of the time variation in apparent brightness--or in modern language, the light curve--of the "new star" as described by Tycho Brahe in "_De Stella Nova_" and later with some more details in "_Progymnasmata_", published posthumously in 1602 by Johannes Kepler. Interestingly and likely overlooked, the field of making use of historical recordings is currently still very much active and proves surprisingly fruitful. A recent example is the use of data spanning two thousand years to study the temporal color variation of Betelgeuse and Antares (Neuhauser et al., 2022, and references therein).
The second task is the result of asking the question: _what is the origin of the terms "type I" and "type II" when classifying supernova events?_ This question arises naturally when one examines in detail the work presented by Baade (1945), where, for the first time, the
light curve of the supernova of 1572 (SN 1572 or Tycho's new star) was reconstructed very carefully and with great precision from detailed historical records compiled by Brahe. The title of Baade's (1945) paper is "B Cassiopeiae as a supernova of type I." One interesting aspect of this paper is the use of the classification term "type I" _based on only photometric data._ We quote from the work of Baade (1945) the following: _"The recent recognition of two types of supernovae makes it desirable to decide whether the star was a supernova of type I or type II. The light curve of the nova, derived in the present paper, clearly indicates a supernova of type I."_ No reference to any spectroscopic data was given.
This is peculiar in several ways: _i_) in a much earlier publication, Minkowski (1941) proposed two classes of supernova events--"type I" and "type II"--based on spectroscopic data only, and _ii_) Baade (1945) did not present any spectral evidence along with the reconstructed light curve, missing an opportunity to reference the work of Minkowski (1941). However, the latter seems at that time a daunting, if not insurmountable, task given the limited aperture of telescopes then available, since several efforts to identify the remnant star of SN 1572 failed as a (conjectured) result of it being very faint. Today, we know that the remnant of SN 1572 is not easily detectable in visible light. Obviously, and especially because spectroscopy is a "light-hungry" measurement, this prohibited the acquisition of any spectroscopic observations in the beginning of the 1900s. However, the curious reader will ultimately stumble upon this lack of accuracy in the classification of SN 1572.
As a result of a continued literature review, we encountered a similar inaccuracy already taking place historically in the work by Baade (1943), where the author reconstructed the light curve of the 1604 supernova (SN 1604 or Kepler's Supernova). Again, the supernova event was classified from the historic light curve as of type I and we quote from the introduction of the work of Baade (1943) the following: _"The light curve, derived in the present paper, shows that the star was a supernova of type I, which at maximum reached the apparent magnitude -2.2."_ Again, as was the case for SN 1572, also in this work, no spectroscopic data were presented, completely missing the opportunity to provide a spectroscopic classification according to the different chemical characteristics described in the work of Minkowski (1941).
This manuscript is structured as follows. In Section 2, we present a brief review of historical records of Tycho Brahe's supernova SN 1572 (B Cas) resulting in the construction of modern light curves. We attempt to provide some details of the original recordings; in this respect, the work by Stephenson and Green (2002) proves a great resource. Some discussion of recordings from the Far East is given, but most recordings of scientific value were produced in Europe. In Section 3, we review the early use of the "type I" and "type II" classification and trace out the coining of the term "supernova." We also discuss the historical importance of three papers--mainly by F. Zwicky and W. Baade--that we deem necessary for finding an answer to the question set out in the introduction. In Section 4, we present, in chronological order of appearance, three papers that adopt the newly introduced two-group classification (type I and type II). In Section 5, we take a more detailed look at Baade's (1945) paper with the aim of tracing out his train of thought in classifying Tycho's supernova SN 1572 as a type I supernova.
## 2 Part 1--historical records of SN 1572
The first to systematically compile historical records on the brightness change of SN 1572 was W. Baade. In his 1945 paper (Baade, 1945), he presented all recordings of sightings made by Tycho Brahe himself and contemporary observers in Europe and transformed the data to a modern magnitude system. In a footnote, a single reference was made to a sighting from Korea agreeing with what European observers saw in late 1572. Baade's motivation to dwell on historical records was a desire to classify Tycho's nova either as a type I or type II supernova. The motivation for this differentiation will be described later. The identification of SN 1572 as a supernova (versus a common nova) was already fairly established within the community.
The key to allow for this transformation is to be found in the knowledge of brightness of known celestial objects. Tycho and others used stellar brightness classification based on the classic magnitude system as depicted in Ptolemy's Almagest catalog, as well as bright planets such as Jupiter and Venus for their comparisons. However, historically, negative magnitudes for brighter objects like Jupiter and Venus were not yet introduced.
The seminal works by Tycho Brahe (1573), containing a preliminary report, and the more extensive presentation of the recordings found in _Astronomiae instauratae progymnasmata (secunda pars)_ (Brahe and Kepler, 1602) (written in Latin), which also includes records from other observers in Europe, serve as the main sources for Baade's goal to derive a light curve. As noted by Baade (1945), it seems that only Tycho Brahe deemed it useful to record the brightness change from the beginning of the flare-up in early November 1572 until the star's disappearance around March 1574. In order to obtain some sense of how Tycho described his observations, we reproduce some of the recordings as translated by Baade (1945).
When first seen [11 November 1572] the nova outshone all fixed stars, Vega and Sirius included. It was even a little brighter than Jupiter, then rising at sunset, so that it equalled Venus when this planet shines in its maximum brightness.
The nova stayed at nearly this same brightness through almost the whole of November. On clear days it was seen by many observers in full daylight, even at noontime, a distinction otherwise reserved to Venus only. At night it often shone through clouds which blotted out all other stars.
In December it was about equal to Jupiter. In January [1573] it was a little fainter than Jupiter and surpassed considerably the brighter stars of the first class. In February and March it was as bright as the last-named group of stars. In April and May it was equal to the stars of the second magnitude.
After a further decrease in June it reached the third magnitude in July and August, when it was closely equal to the brighter stars of Cassiopeia, which are assigned to _this magnitude._
Continuing its decrease in September, it became equal to the stars of the fourth magnitude in October and November. During the month of November, in particular, it was so similar in brightness to the near-by eleventh star of Cassiopeia that it was difficult to decide which of the two was the brighter. At the end of 1573 and in January 1574, the nova hardly exceeded the stars of the fifth magnitude. In February it reached the stars of the sixth and faintest class. Finally in March it became so faint that it could not be seen any more._
Baade's analysis is split into two parts. The first part focuses on the brightness change around the time of maximum and the second part on the slow brightness decline of the new star. During the maximum phase, brightness estimates of SN 1572 were naturally based on comparisons with either Jupiter or Venus. Known field stars were used as reference objects during the fading phase.
Baade carefully evaluates the sky position and brightness of both Jupiter and Venus around late 1572 and early 1573. This is important because Tycho's recordings, at times, leave room for ambiguous interpretation. Consider, for example, Tycho's phrasing "equalled Venus when this planet shines in its maximum brightness." From this statement, it is not clear whether the observation refers to an actual (nightly) observation or an estimate based on past experience or memory. It turns out that during November 1572, Venus was near or at maximum brightness approximately 130\({}^{\circ}\) from SN 1572 and could easily have served as a reference object.
With his attention to detail, Baade (1945) was careful about the accuracy of Brahe's brightness estimate near maximum. Statements from both the preliminary report of 1573 (Brahe, 1573) and from the more detailed reporting in the work of Brahe and Kepler (1602) were compared with recordings made by contemporary observers from around Europe. Baade (1945) found that most observers agree that SN 1572 was distinctly fainter than Venus, allowing one to conclude that Brahe's brightness estimate around maximum is somewhat too high. As a result, a final value near maximum brightness of \(-4.0\) was estimated. In Figure 5, we show a table of the translation of brightness reporting by Brahe to a modern magnitude system at various dates.
The appearance of SN 1572 around maximum brightness was also noted in other parts of the world. Following the work of Stephenson and Green (2002), the supernova SN 1572 was also observed by astronomers in the Far East. Sightings from China and Korea follow the then-typical style of reporting a single event with remarks on the size and appearance of the supernova, without any reference to objects of known brightness. Reports on the daytime observation of SN 1572 from China exist as well, confirming European sightings. The daytime visibility is plausible given the exceptional brightness and the circumpolar nature of SN 1572.
From China, a total of five sightings exist. For example, the astronomical records from the _Mingshi_ treatise indirectly mention a new star, and we quote from the work of Stephenson and Green (2002)
_There are also some (stars) which did not exist in ancient times but which exist now. Beside (pang) Cexing there is a guest star. During the first year of the Wanil reign period [ = AD 1573] this newly appeared (chu). At first (cian) it was large, now (jim) it is small._
Today, the star _Cexing_ is identified with a star in the constellation of Cassiopeia, allowing an inference of the relative position of the new star in relation to _Cexing_. Since the aforementioned quote was written in the present tense, even at the time of fading (when it was small), the new star remained fixed (beside _Cexing_).
From Korea, two recordings exist. No recordings exist from Japanese observers. For further details on historical recordings from Southeast Asia, we refer interested readers to the work of Stephenson and Green (2002).
In the second part, Baade (1945) provided details in assigning modern apparent magnitudes to the various by-eye brightness
estimates during the decline phase throughout 1573 and until the complete disappearance of SN 1572 in early 1574. For most parts, Tycho Brahe compares the brightness with stars of known magnitudes according to the Almagest catalog. In Figure 6, we show one of five tables by Baade (1945) with stars of class 3 in the Ptolemy magnitude system and the corresponding transformation to the modern Harvard system.
Baade emphasized that the reporting of Tycho Brahe strictly adheres to Ptolemy's magnitude system for field star comparisons. This approach by Brahe ensures inherent consistency or homogeneity for all reporting and, thus, accuracy in the transformation to modern magnitudes. However, some element of uncertainty, as pointed out by Stephenson and Green (2002), remains for the observation period around February-March 1573. Remarkably, and as a result of the strict adherence to the Ptolemy star class system, Tycho deemed it _not necessary_ to include Sirius (\(\alpha\) CMa; a Ptolemy star of magnitude 1) for his brightness comparison. From Figure 5, Tycho writes _equal to brighter stars of first magnitude_ and thus introduces some degree of imprecision. The exact meaning is unclear. The inclusion of Sirius would otherwise have been a natural choice of reference in the transition from using Jupiter to the brightest field stars (Ptolemy stars of magnitude 1), especially since the angular distance between SN 1572 and Sirius, although Sirius was low in the sky (atmospheric absorption would likely account for \(-0.5\) mag), was less than the distance to Venus in late 1572 and early 1573. From the preceding discussion, it appears that the part of the light curve covering the early months of 1573 might be in error by around 0.5 mag.
However, following the work of Stephenson and Green (2002), one attempt to interpret Tycho Brahe's observation as also including Sirius as a reference object was pointed out by Kilburn (2001), who revisited a then newly discovered star atlas--_Uranographia Britannica_ (published in 1750)--by the English astronomer John Bevis. Without specifying his sources, Bevis provided a summary of Tycho Brahe's observation and freely introduced an allusion to Sirius. From the work of Stephenson and Green (2002), we quote the wording by John Bevis.
However, from Tycho Brahe's own recordings in _De Stella Nova_ (Brahe, 1573) or _Progymnasmata_ (Brahe and Kepler, 1602), no reference to Sirius was ever made except the reference to stars of first magnitude.
Several other authors attempted to derive a light curve of SN 1572 from Tycho's historical recordings (Clark and Stephenson, 1977; Pskovskii, 1978; Schaefer, 1996; Stephenson and Green, 2002). The most recent derivation, and likely most accurate with attention to great detail, is given by Ruiz-Lapuente (2004) largely based on Tycho's recordings as presented in Baade (1945). The author carefully evaluated realistic uncertainties for each observation and
Figure 6: Magnitude of stars in the Cassiopia constellation in the classic Ptolemy (stars of magnitude 3) and the revised standard Harvard photometric system. Tycho Brahe compared the brightness of SN 1572 with these stars during the time period of July–August 1573. \({}^{\rm{PAJS}}\). The figure was reproduced from the work of Baade (1945) with permission.
Figure 7: Modern light curve of SN 1572 as derived by Ruiz-Lapuente (2004) using largely Tycho’s original observations. The total time window is around 510 days. The so-called stretch factor was found to be 0.9 indicative of a SN Ia event. © AAS. The figure was reproduced from the work of Ruiz-Lapuente (2004) with permission.
applied extinction corrections. A useful and hitherto overlooked observation, dated 11 November 1572, was quoted from a recording by Jerónimo Muñoz (d. 1592; compiled from his monograph _Libro del nuevo cometa_), who was a professor of Hebrew and Mathematics at the University of Valencia and later of Astrology at the University of Salamanca. Interestingly, Muñoz also measured the diurnal parallax of SN 1572 and confirmed the null-result of Tycho Brahe. In Figure 7, we show the light curve of SN 1572 as recovered by Ruiz-Lapuente (2004).
## 3 Part 2 - early use of type I and type II classification
### Coining the term supernova
During the 1920s, and substantially in the 1930s with the steady increase of observational evidence, the idea of the existence of a special class of novae gradually entered the stage of reality in the minds of Humason, K. Lundmark, F. Zwicky, and W. Baade, among others. Formally, according to ADS, the first-time reference to the term "super-novae" was given in Baade and Zwicky (1934b), entitled "_On Super-novae_" and published in January 1934. It appears, however, that preference is often given to the works of Baade and Zwicky (1934c) and Baade and Zwicky (1934a) (both4 published in May 1934) for the first-time use of the new terminology. In a short conference contribution, Osterbrock (2001) offered a historical trace-back of the origin of the term "super-novae." According to Osterbrock, the term was first mentioned orally by F. Zwicky at an American Physical Society meeting at Stanford in December 1933. The conference paper was later published in 1934 (Baade and Zwicky, 1934d) and, according to Osterbrock, is a condensation of the two 1934 publications. Furthermore, Osterbrock highlighted the use of alternative designations for the new class of novae and quoted several bibliographic sources without proper referencing. We attempted to find the proper references without any luck. Here, we repeat a few alternatives. In 1920, K. Lundmark (1920) described the new class as "_giant novae_," and later, Lundmark (1923) described them as "_much more luminous novae_." A further designation was "_exceptional novae_" (Hubble, 1929), and W. Baade referred to them as "_Hauptnovae_" (chief novae). In a review paper, Zwicky (1940) highlighted in a footnote the comment
Footnote 4: The two publications are important in another respect, and we refer to the work of Burrows (2019) for a good review.
_Baade and I first introduced the term "supernovae" in seminars and in a lecture course on astrophysics at the California Institute of Technology in 1931._
However, following the work of Osterbrock (2001), the word "_super-Novae_" was first published by K. Lundmark in a December 1932 publication. Its originality is nevertheless questioned. Lundmark visited California in 1932-1933, and it is therefore argued that he likely picked up the term "super-novae" while attending various seminars, given his extended duration of stay. In conclusion, according to
Figure 8: Final photometric values of various supernova events as presented in the work of Baade (1938). For the “Rosetta Stone” supernova IC 4182, both the distance modulus (\(m-M\)) and the (extrapolated) peak brightness (Obs. \(m_{\rm max}\)) were determined; it thus served as a standard supernova. The type column refers to the type of host galaxy. © AAS. The figure was reproduced from the work of Baade (1938) with permission.
the work of Osterbrock (2001), it was W. Baade and F. Zwicky who coined "super-novae." In later times, the spelling without a hyphen became predominant and is the one used in modern times (Roenig, 2005).
### The Baade 1938 paper
In light of the steady development in understanding the nature of novae and nova-like events (Lundmark, 1923), the focus shifted to the final state or remnants of novae and supernovae (Baade, 1938). In particular, the focus was on the end-stage result of past supernova events. Since the brightest novae were observed in extragalactic systems (see Figure 8), there was little hope of gaining a deeper understanding due to their faintness. At that time, and as a consequence of the limited ability of telescopes to gather light, attention was paid to past nova-like events in the local (Milky Way) galaxy. This point was raised by Baade (1938), who dedicated the last section to the novae B Cassiopeiae (SN 1572) and the Crab Nebula (SN 1054). Baade (1938) wrote
"_Nothing is known at present about the final state of supernovae. Indeed, it would require a supernova in our own galaxy to obtain this information. Fortunately, we know two objects in our galaxy which very probably have been supernovae and which may provide an answer to our question: B Cassiopeia and the Crab nebula._"
However, knowledge of whether Tycho's (SN 1572) and Kepler's (1604) events were to be classified as supernovae or not was still very much under debate during the mid-1930s. As for the nature of the Crab Nebula (SN 1054), Baade (1938) provided a footnote pointing out that K. Lundmark examined historical observations from Japan ("_as bright as Jupiter_"), strongly indicating SN 1054 to belong to the class of supernovae. We refer interested readers to the work of Lundmark (1939) for further details on historical recordings from observers in Japan. The last paragraph in the work of Baade (1938) gives an account of all the arguments in support of SN 1054 being a former supernova. The exact nature (common nova vs. supernova) of SN 1572 and SN 1604 was still unclear.
In a quest to derive a life-luminosity relation of nova and nova-like events, Zwicky (1936) wrote
"_Assuming the validity of the life-luminosity relation (1), some interesting applications can be made. For instance, the view has been advanced that Tychos Nova 1572 and perhaps Kepler's Nova 1604 were super-novae._"
From this, we learn that in a relative sense, more certainty was lent toward SN 1572 for being a supernova event. However, the amount of empirical evidence given was sparse at that time. According to the work of Baade (1938), the only indication for SN 1572 to be a supernova event is based on its "_unusual brightness at maximum_" (Baade, 1938), and in the following, we will outline the reasoning provided by Baade in his 1938 paper.
The author noted that SN 1572 is often cited as a _possible_ supernova event because of its unusual brightness at maximum. This observation poses an important constraint in the following discussion. The author described a chain of evidence-based reasoning arguing that SN 1572 was not a common nova. A key reference is made to an important study by Humason and Lundmark (1923) where spectral properties of all stars in a region around SN 1572 down to \(V=14\) mag were investigated in a quest
Figure 9: Light curve of the supernova IC 4182 as observed between August 1937 and June 1938 as presented in Baade and Zwicky (1938). The left scale designates photographic magnitudes and matches most closely observations through the modern Bessell/Johnson \(B\) filter pass band (Pierce and Jacoby, 1995). The right scale designates the absolute magnitude as a result of the known distance modulus to IC 4182. The single pre-maximum measurement was obtained by chance (see text for details). © AAS. The figure was reproduced from the work of Baade and Zwicky (1938) with permission.
to identify the remnant star in association with the SN 1572 nova event.
The result of this study was negative, reporting nothing unusual in their measurements. As a side note [footnote in the work of Baade (1938)], one star as measured by Humason and Lundmark (1923), located in the very vicinity of Tycho's originally measured position of SN 1572 and, thus, considered a good candidate for the remnant star of the SN 1572 event, exhibited an M-type spectrum. At that time, as a result of a different study, at least one other star (T Coronae Borealis), with an albeit questionable M-type spectrum, was associated with a nova event, allowing a _possible_ bridging between spectral properties and nova events. However, according to the work of Baade (1938), the bridging was excluded by Lundmark because of the unusual spectrum.
The importance of the null-result was first realized several years later when, in 1938, Humason was able to demonstrate that 16 (without exception) former nova events had spectral properties similar to very early B- or O-type spectra. Since in 1922 no stars in the vicinity of SN 1572 brighter than \(V=14\) mag had early B- or O-type spectra, Baade (1938) concluded that if SN 1572 was an ordinary nova event, then it must be fainter than 14th magnitude. Given that the observed peak brightness as reported by Tycho Brahe himself (or the translation thereof to a modern magnitude scale) was around \(-4\) to \(-5\) mag, the amplitude of the SN 1572 light variation must have been at least 18-19 mag.
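For clarity, the bound can be written out explicitly (a back-of-the-envelope restatement of Baade's argument using only the values quoted above):

\[\Delta m\;=\;m_{\rm remnant}-m_{\rm peak}\;\geq\;14\ {\rm mag}-(-4\ {\rm to}\ {-5}\ {\rm mag})\;=\;18\mbox{--}19\ {\rm mag}.\]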
This unusually large amplitude provides a strong argument that SN 1572 was not a common (or ordinary) nova event. At that time in 1938, the _mean amplitude_ of common or ordinary nova events was found to be around 9 mag. The large amplitude was further increased to more than 22 mag by observations carried out in 1937 by W. Baade himself (Baade, 1938) as a result of a magnitude-limited photographic search survey in the red and blue with a limiting magnitude of \(V=18\). It is important to note that this conclusion (SN 1572 being a supernova) is only possible with the help of Tycho's meticulous recording of the (peak) brightness change of the new star at the end of 1572. It is also important to note that this paper (Baade, 1938) is the first to pinpoint, based on observational evidence and proper reasoning, that SN 1572 likely was a supernova event. Finally, we should also note that Baade (1938) did not offer to classify SN 1572 as a type I or type II supernova. The only clear differentiation is made between common or ordinary novae and supernovae. An explicit classification of SN 1572 as a type I supernova event was first given in the work of Baade (1945) (B Cassiopeiae as a Supernova of Type I) where Baade reconstructed the light curve of SN 1572 from historical records.
### The "Rosetta Stone" supernova
In the work of Baade and Zwicky (1938), the authors described the discovery and light curves of two particular supernova events, one of which will later play an important part in the supernova classification of both the Kepler (SN 1604) and Tycho (SN 1572) events. Therefore, this paper deserves some attention in the present discussion. The underlying photometric data were acquired as part of a dedicated sky-monitoring program searching for bright novae using a Schmidt telescope at the Palomar Observatory in California. The respective host galaxies were included in the survey as a result of their known distances, readily established by other means at that time in the form of measured distance moduli.
In the period 1937-1938, the authors recorded the change in brightness of a supernova (see Figure 9) in the irregular spiral galaxy (then called nebula) IC (Index Catalog) 4182 under favorable observing conditions. The galaxy itself was discovered by W. Baade 2 years earlier, in 1936, and is relatively faint and free of interstellar extinction as a result of being located in a relatively void region of the sky outside of the galactic plane. The supernova itself is designated here simply as IC 4182. Subsequently, the supernova in IC 4182 was designated SN 1937C (Pierce and Jacoby, 1995). However, for the remainder of this review, we continue using the original designation. A total of six telescopes of various apertures were used for the subsequent photometric follow-up observations (see Table 2 in the work of Baade and Zwicky (1938)).
It appears that some initial hesitation existed toward whether or not to include IC 4182 in the final observing list. This is because IC 4182 as a galaxy is relatively sparse in the number of host stars, lowering the chance of observing an associated nova/supernova event. As often in science, chance encounters appear, and within a year or so the bright supernova IC 4182 was discovered. The apparent brightness at the time of discovery was unusually high, enabling the opportunity to obtain both photometric and spectroscopic observations. Historically, the latter dataset was obtained by R. Minkowski and forms an important and significant part of his seminal supernova classification paper (Minkowski, 1941) published in 1941. In a forthcoming section, we will learn that IC 4182 falls in a particular class of supernova events, thus forming a "_Rosetta Stone_" linking similar supernova events with typological classification based on photometric data only.
Unfortunately, the discovery of the outburst of IC 4182 was made past brightness maximum, rendering the practical inference of the true brightness maximum near-impossible. As discussed earlier, the maximum apparent brightness of a nova event is a first empirical clue toward identifying the event as a supernova. Again, also in this respect, W. Baade and F. Zwicky were blessed with a great deal of luck. A single and most crucial observation was provided by fellow astronomers, among them F. Leutenegger, who happened to observe comet Finsler in 1937. By chance, IC 4182 was in the field of view of one of Leutenegger's photographic plates obtained at a time predating the earliest observations obtained by W. Baade and F. Zwicky. This allowed a determination of the brightness on the ascending branch of the light curve from which an estimate of the maximum could be inferred. For interesting reading on additional photometric data from historic archives of IC 4182, we refer to the work of Pierce and Jacoby (1995).
### The Minkowski 1941 paper
As mentioned earlier, the first use of classifying supernovae as either type I or type II was given in a seminal publication by Minkowski (1941), published under the title "_Spectra of Supernovae_." Historically, this is likely the report by Rudolph Minkowski of results from the first dedicated and systematic spectroscopic sky-survey of supernova events observed in extragalactic systems.
We quote from the work of Minkowski (1941)
"_Spectroscopic observations indicate at least two types of supernovae. Nine objects (represented by the supernovae in IC 4182 and in NGC 4636) form an extremely homogeneous group provisionally called "type I". The remaining five objects (represented by the supernova in NGC 4725) are distinctly different; they are provisionally designated as "type II"._"
Important to emphasize is the following point: the classification is solely based on properties from spectroscopic observations. In the following, we shall assume that the reader is familiar with some details of spectroscopy and related terminology. Although the statistical sample is relatively meager (nine of "type I" and five of "type II"), Minkowski already noted a greater variation for the group of "type II" supernova events. In his own words, he pointed already at this early stage toward a rich set of various sub-classes and/or peculiar supernova events (Milisavljevic and Margutti, 2018). Minkowski even contemplated the introduction of a third group based on one single supernova event, and we quote
"_The individual differences in this group [type II] are large..."_
"_...at least one object, the supernova in NGC 4559 [subsequently named SN 1941A and classified as a type II-L supernova using the modern classification scheme], may represent a third type or, possibly, an unusually bright ordinary nova._"
The introduction of additional types of supernova events was later taken up again by Fritz Zwicky (refer here to his 1963 or 65 paper), although his classification did not survive to modern times. Minkowski then moved on to describe the temporal variation of spectra for the two types of supernovae. He first described properties of type I supernova spectra. With the exception of minor differences,
"_the spectrograms of all objects of type I are closely comparable at corresponding times after maxima._"
Furthermore, the spectra exhibited

"_very wide emission bands_,"

a property that was even present

"_...at the earliest premaximum stage hitherto observed... No significant transformation of the spectrum occurs near maximum._"
In a later paragraph, Minkowski (1941) reports on further properties of type I supernova spectra. He wrote
"_No satisfactory explanation for the spectra of type I has been proposed. Two (O I)_ [single-ionized oxygen] _bands of moderate width in the later spectra of the supernova in IC 4182 are the only features satisfactorily identified in any spectrum of type I. They are, at the same time, the only indication of the development of a nebular spectrum for any supernova._"
From the work of Minkowski (1941), we recall that the nine supernova events of "type I" were observed from 7 days before maximum, and, thus, we obtain the picture of a homogeneous class of spectra for supernovae of type I. We point out the spectrum of a supernova observed in the extragalactic (nebula) system IC 4182. Later, this type I supernova event will play a major role in the classification of Tycho's supernova of 1572 (as well as Kepler's supernova of 1604).
Minkowski (1941) then proceeded and reported his findings describing supernovae of type II. Spectra for this class have been recorded from maximum and until 115 days thereafter. Minkowski wrote
"_Up to about a week after maximum, the spectrum is continuous and extends far into the ultraviolet, indicating a very high color temperature. Faint emission is suspected near \(H\alpha\)_ [Strongest emission line in the Balmer series at 6563 A]. _Thereafter, the continuous spectrum fades and becomes redder. Simultaneously, absorptions and broad emission bands are developed. The spectrum as a whole resembles that of normal novae in the transition stage, although the hydrogen bands are relatively faint and forbidden lines are either extremely faint or missing._"
The reported faint detection of hydrogen is the modern defining hallmark of a type II supernova. For type I supernovae, all spectra were absent of hydrogen. Minkowski then proceeded to remark that, while the spectra of type II supernova events are fairly well understood from the construction of synthetic spectra and the resemblance to common or normal novae, the spectra of type I events are still lacking a satisfactory explanation (we recall the quote "_No satisfactory explanation for the spectra of type I has been proposed_"). It is interesting to pay special attention to this remark, as it is likely the first historical identification of the profound difference between type I and type II supernovae, explicitly pointing toward a difference manifested in the underlying detonation/explosion physics.
Finally, Minkowski (1941) established a relative temperature hierarchy between type I, type II supernovae, and ordinary/common novae. Minkowski wrote
"_As compared with normal novae, supernovae of type II show a considerably earlier type of spectrum at maximum, hence a higher surface temperature (order of 40,000)._"
Further at the very end, he reported
"_This suggests that the supernovae of type I have still higher surface temperature and higher level of excitation than either ordinary novae or supernovae of type II._"
In other words, the temperature of type I supernovae is found to be higher than for type II events, which again is higher than for common or ordinary (normal) novae. This last remark by Minkowski makes it clear that supernovae of type I are the more energetic events compared to type II supernova events. Since, according to Minkowski (1941), type II events appear spectroscopically closer to the group of common or ordinary novae, and since novae are intrinsically less luminous compared to supernovae (this was already established before 1941), one can conclude that type I supernova events must be the more luminous events.
A full presentation of data and analysis of all spectra for IC 4182 as observed in 1937-1938 was presented in the work of Minkowski (1939). Figure 10 shows a spectrum of IC 4182 observed some 9 days after maximum. In the quest to understand some spectral features, several Doppler-shifted lines were interpreted as being the result of an implosion of the progenitor object. This interpretation was in line with the neutron star hypothesis as suggested in Zwicky (1939), where Zwicky proposes the collapse of an ordinary star into a neutron star (Baade and Zwicky, 1934a; Baade and Zwicky, 1934d).
According to the work of Minkowski (1939), "_The absence of hydrogen and helium lines indicates a very high degree of ionization; thus all the bands may be of unknown origin._" Thus, here, we read the first-time report of the lack of hydrogen as a fundamental characteristic of a type I supernova.
### Modern classification
The history of classifying supernova events starts in 1941 when Minkowski (1941) proposed two classes based on spectral observations. The first class, provisionally coined "type I," showed no presence of hydrogen, while the second class, "type II," contained hydrogen (H\(\alpha\) line detected). Details of subsequent spectral observations during the 1980s (both the number of supernova detections and the data quality increased) made it necessary to introduce sub-classes according to additional chemical fingerprints. A modern classification system was reviewed by Turatto (2003), and Figure 11 shows a schematic of supernova classification; a minimal sketch of the corresponding decision flow is given below. Observations revealed that for most type I events an additional silicon (Si II at 6150 Å) line was present, resulting in the introduction of a type Ia supernova class. Historically, the original type I class introduced by Minkowski (1941) was later renamed type Ia. Events with no evidence for the presence of silicon were further sub-classified according to the presence of helium. For supernovae with helium the class of type Ib, and for helium-less objects the class of type Ic, were introduced.
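As an illustration of the decision flow in Figure 11, the spectroscopic part of the scheme can be written as a few nested tests. This is a minimal sketch: the function name and boolean inputs are ours, purely for illustration, and the further sub-class refinements of Figure 11 (IIP, IIL, IIb, IIn, peculiar events, hypernovae) are omitted.

```python
def classify_supernova(has_hydrogen, has_silicon=False, has_helium=False):
    """Sketch of the spectroscopic decision tree reviewed by Turatto (2003).

    Each flag states whether the corresponding feature is detected in the
    spectrum near maximum light: H lines, Si II (6150 A), He lines.
    """
    if has_hydrogen:
        return "Type II"   # hydrogen present -> type II (refined later into IIP/IIL/IIb/IIn)
    if has_silicon:
        return "Type Ia"   # no H, Si II present -> thermonuclear white-dwarf explosion
    if has_helium:
        return "Type Ib"   # no H, no Si, He present -> stripped-envelope core collapse
    return "Type Ic"       # no H, no Si, no He -> stripped-envelope core collapse


# Example: a hydrogen-free spectrum showing Si II is classified as type Ia.
print(classify_supernova(has_hydrogen=False, has_silicon=True))  # -> Type Ia
```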
The astrophysical sites or causes of supernova events have also been identified; two main mechanisms are distinguished. For type Ia, the progenitor is a white dwarf that accreted additional material from a nearby binary companion star, eventually reaching a critical mass (the Chandrasekhar mass limit) and initiating a runaway carbon-burning process that drives a thermonuclear explosion. For the type Ib, Ic, and type II events, the progenitor is a massive star, and the explosion is the result of a core collapse triggered at the very moment when the supportive radiation pressure ceases, allowing a gravity-driven collapse into a neutron star or a black hole. The onset of either mechanism marks the beginning of the end of the progenitor star's lifetime.
The type Ia and II are further sub-classified according to kinematical properties or peculiarities or spectral-kinematic features. For type Ia SNe, we have the branch-normal sub-class, and for type II, we have the classes type IIP (plateau), IIL (linear), IIb, and IIn (narrow lines). A detailed discussion of these sub-classes is beyond the scope of this review and, thus, omitted, and we refer interested readers to Turatto (2003) for further details.
## 4 Three 1942 papers
In the year 1942, three papers adopted the newly introduced two-group classification of supernovae by Minkowski (1941). They deserve some in-depth attention with regards to the early use of
Figure 11: Decision-tree flow chart of supernova classification as proposed by Turatto (2003). The main classes of supernovae are of type Ia, Ib, Ic and type II. Supernova events with some minimum explosion energy are called hypernovae. See text for further details. The figure was reproduced with permission from the work of Turatto (2003).
Figure 10: Spectrum of the “Rosetta Stone” supernova IC 4182 as observed on 31 August 1937, nine days after maximum. Numerous other spectra were recorded as the supernova faded. © AAS. The figure was reproduced from the work of Minkowski (1939) (Figure 8) with permission.
the terms "type I" and "type II" since it appears that they are the only sources in the literature making use of the new classification terminology prior to W. Baades classification of SN 1604 and SN 1572 as "type I" supernovae (Baade, 1943; 1945). It is remarkable to see how quickly the new finding of Minkowski (1941) is adopted by the main protagonist in novae/supernovae science and how old belief systems change as a reaction of the presentation of new evidence based on state-of-the-art observations.
### Zwicky 1942 (April/July)
Zwicky (1942) published his second paper5 concerning the occurrence frequency of supernovae entitled _On the frequency of supernovae. II._ He made use of the "type I" and "type II" terminology in the final paragraph where he wrote
Footnote 5: April is the month stated at the end of the paper, and July is the ADS record.
"_Another peculiar circumstance is the fact that those six supernovae among the twelve found during our initial patrol of the sky which were investigated spectroscopically were supernovae of what now is called "type I". This led us to a preliminary and incorrect conclusion that all supernovae might be of the same type. Some of the supernovae found later, such as the objects in NGC 5907 and 4725, proved to be of what Minkowski proposed to call "type II". These supernovae appear to be giant analogues of the common novae, and their spectra can be interpreted accordingly, while the interpretation of the spectra of supernovae of the type I has not yet been given. Supernovae of the type II, according to Baade, are, on the average, intrinsically fainter than supernovae of the type I; and they are therefore more difficult to discover, although they are probably more frequent than supernovae of the type I._"
Around that time, Zwicky (1938) estimated one supernova per extragalactic nebula per 600 years. From this final remark, we learn from the work of Zwicky (1942) that, prior to the identification (discovery) of two types of supernova events, the main protagonists of supernova research wrongly thought that only one type of supernova existed, though we have to remember that Zwicky published the paper as a single author, only allowing us to indirectly infer that this view was shared by many in this field. Furthermore, the reference to the work of Baade when discussing "type II" supernovae is likely a reference to the work of Baade (1942), which is the second paper discussed in this section. Finally, as mentioned in Minkowski (1941), Zwicky reiterates that supernovae of "type II" have spectra that can be readily interpreted, as they share similarities with common or ordinary nova events. However, a proper interpretation of spectra of "type I" is still missing. Historically, because of the lack of a proper interpretation of "type I" supernova spectra, an increased interest was sparked to study galactic (within the Milky Way) supernova events. However, since these are rare events in the local Milky Way galaxy, the focus is on linking supernovae in known extragalactic nebulosities with historic supernova events. Tycho Brahe's observations of SN 1572 will play an important role in this linking.
### Minkowski 1942 (May/September)
The paper by Minkowski (1942) carries the same title as the next publication by W. Baade. It seems almost as if the two authors (both were colleagues with offices within walking distance, with W. Baade at the California Institute of Technology and R. Minkowski at the Carnegie/Mount Wilson Observatory in Pasadena, California) fairly split the analysis of the Crab Nebula according to their respective scientific/observational expertise. Here, Minkowski (1942) presented details of spectroscopic observations of select regions in or near the Crab Nebula, and Baade (discussed in the next section) presented photometric observations. We chose to list the Minkowski (1942) paper first because of the May date-stamp, presumably valid at the time of submission to the Astrophysical Journal, thus formally predating Baade's (1942) publication. However, the two papers were published almost simultaneously in the Astrophysical Journal and are, thus, from a publication-time point of view indistinguishable.
We here again pay attention to the use of the terms "type I" and "type II." The first use by Minkowski is interesting, and we quote (omitting any reference to footnotes as present in the original print as they contain no relevant information)
"_Little doubt remains that the Crab nebula is the remnant of the Chinese nova of A.D. 1054. This object was certainly a supernova; the records of its brightness indicate that it was a supernova of type I._"
Surprisingly, Minkowski adopted Baade's differentiation between the two types of supernovae based on a measurement of brightness. We here obtain the impression that, although two types of supernovae had formally been empirically established based on spectral properties, Minkowski himself supported the association of "type I" supernovae with the brighter family of supernovae. Further on in the text, Minkowski made use of the "type I" classification as follows:
"_For only one supernova of type I are there reliable data on the brightness before the outbreak. No star brighter than photographic magnitude 20.5 was present on earlier exposures of the nebula IC 4182 in the position of the supernova of 1937._"
The classification of a supernova in the extragalactic system IC 4182 was already given by Minkowski in his seminal 1941 paper (Minkowski, 1941), where it was assigned "type I." The mention of IC 4182 here is used in order to provide observational evidence supporting the plausibility of an absolute bolometric magnitude of \(-5\), derived from the mass-luminosity relation, for the supernova of 1054 at maximum. In the final paragraph, Minkowski then wrote
"_If supernovae of type I are stars of mass greater than the critical mass \(M_{\rm y}\) then it is highly suggestive to assume that supernovae of type II are stars of mass smaller than \(M_{\rm y}\). Such an assumption does not meet any contradictory observational evidence._
In its favor could be cited the fact that the frequency of supernovae of type II appears to be six times as great as that of supernovae of type I.In the absence of excess mass, a supernova of type II would not necessarily have to eject a considerable fraction of its mass. The nebula surrounding a supernova of type II should thus be fainter than that around a supernova of type I. This expectation is in general agreement with the fact that any nebula surrounding Tycho's nova of 1572, which was probably a supernova of type II, is certainly much fainter than either of two nebulae connected with supernovae of type I, namely, the Crab nebula and the nebula of Kepler's nova of 1604 recently found by Baade."_
In this text passage, besides learning about interesting thoughts with regard to the mass of stars before the outbreak (detonation), we obtain Minkowski's opinion on whether SN 1572 was of "type I" or "type II" based on the missing nebula "signature" of this supernova event. We have to remember that in 1942, no measurement of the maximum brightness of SN 1572 was yet known. The year 1942--with the pioneering studies of Duyvendak (1942) and Mayall and Oort (1942)--marks the beginning of compiling and analyzing historic systematic records of past heavenly "guest stars" suddenly appearing on the sky. It appears very likely that the work by Duyvendak, Mayall, and Oort inspired W. Baade to pursue similar work on SN 1604 (Baade, 1943) and SN 1572 (Baade, 1945), as we shall discuss soon. This surge of interest in information on the time evolution of the apparent brightness of historic supernova events is explained by the quest to understand supernovae of "type I" in the nearby (Milky Way) galactic system. Up until 1942, "type I" supernovae had only been observed in extragalactic systems.
### Baade 1942 (June/September)
In the work of Baade (1942) (_The Crab Nebula_), he presented photographs of the famous Crab Nebula and mainly carried out astrometric-kinematic measurements and calculations of nearby field stars and of the nebula itself. This contribution is a response to the recent identification (initiated by J. J. L. Duyvendak) of the nova of 1054 as the parent star resulting in the Crab Nebula. From historic data presented in the work of Mayall and Oort (1942) and Duyvendak (1942), Baade added the comment that their data implicitly allow the classification of the supernova of 1054 as a "type I" event. However, of more interest is to read a footnote, and we quote
"_In the following discussion the term "supernova" always refers to a supernova of type I. Supernovae of type II, with luminosities intermediate between those of ordinary novae and supernovae of type I, appear to be closely related to the ordinary novae. In any case during an outburst they present essentially the same phenomena as common novae._"
Here, we are offered an alternative definition (strictly applicable only in the paper) of "type I" and "type II" supernovae based on the total power (luminosity) output.
## 5 Historic classification of Tycho's and Kepler's supernovae
In order to understand W. Baade's classification of Tycho's supernova of 1572 (SN 1572), it is useful to first outline his work on Kepler's supernova of 1604 (SN 1604), which forms a necessary stepping stone in his train of thought. As mentioned earlier, it is highly likely that Baade was inspired by the works of Duyvendak, Mayall, and Oort, who supplied historic data on the ancient supernova of 1054, mainly observed from Asia, the remnant of which we today observe as the Crab Nebula. In order not to forget the astrophysical context, the general scientific aim at that time was to obtain more information about "type I" supernova events as observed in the Milky Way. These events were poorly understood in contrast to ordinary/common novae or "type II" supernovae. Since "type I" supernova events were found to be rare, the importance of historic recordings of sightings of "new stars" in the Milky Way was realized in retrospect and their scientific value appreciated.
W. Baade likely first started out with the acquisition of historic data of Kepler's supernova because of the accuracy of reporting and, most likely, because the pre-maximum brightness phase was well covered and adequately described, as we shall discuss elsewhere in some more detail. However, an interesting thought is the following: the meticulous recordings of SN 1604 by Kepler and contemporary observers, just 32 years after Tycho's observations, are likely a direct consequence of Tycho Brahe's efforts (or lack of them) to take handwritten records of the steady change in brightness of the "new star" in 1572. Formulated slightly differently, the event in 1572 served as a "warning" to future observers to do a better job (as was done in 1604) and serves as a textbook example of how the scientific method works in practice and how it was applied early in the development of astronomy as an independent scientific research field.
The methodology developed by Baade was then applied in a similar analysis of Tycho Brahe's supernova, which was lacking important data before the brightness maximum. In the following, we shall keep the chronological order and briefly report the main aspects of the "type I" vs. "type II" classification.
### Baade's 1943 paper--Kepler's nova of 1604
A few decades after the discovery and observation of SN 1572, a second nova, known as SN 1604 or Kepler's nova, appeared in the sky. Its position, appearance, and temporal change in brightness were well documented by Kepler and contemporary astronomers. Historically, much of the experience gained from observing the SN 1572 supernova directly benefited the characterization of SN 1604 (Baade, 1943).
Around the 1940s, increased attention was paid to the identification and further study of supernova remnants within the local galaxy. Baade (1943) started this endeavor by compiling historical data of SN 1604 allowing him to reconstruct the brightness variation of SN 1604.
According to the work of Baade (1943), the derived photometry was based on rough estimates. He further stated
"It is a typical light-curve of a supernova of type I. If any proof is needed, it is provided by the curve in Figure 1 [Figure 12] representing the decline of the nova from maximum to the end of the observations. This curve is actually the visual light-curve of the recent supernova in IC 4182, properly adjusted. The remarkable agreement in the light-changes of the two stars is characteristic of supernovae of this type, which all follow closely the same pattern. Minor variations in the widths and heights of the maxima occur; but, when supernovae of this type have reached the final decline, which sets in 80-100 days after the maximum, the further decrease in brightness is the same for all, with a linear gradient of \(+0.0137\pm 0.0012\) mag. per day. Since the nova of 1604 conforms to this pattern, we conclude that it was a supernova of type I."_
This is the first time that Baade (1943) classified a historic supernova as of type I. We highlight the point that no spectra were presented, even though the classification as either type I or II is based on spectral observations as proposed by Minkowski (1941). No explicit reference to the work of Minkowski (1941) was given, which seems odd given that Minkowski (1941) introduced the two classes type I and type II. Rather, the statement of the type I classification is made based on the "Rosetta Stone" supernova IC 4182, as mentioned earlier. In the next section, we will delve into this in some more detail. However, Baade (1943) provided two references to works
published by Minkowski: the first instance of citation _i_) is concerned with the faintness and interstellar absorption of the Nova Ophiuchi nebulosity (the remnant of SN 1604), where Baade wrote "_This faintness of the nebulosity in the photographic region is without doubt due to selective absorption_" and provided a reference to the work of Minkowski (1943). At the end of this paper, Minkowski wrote "_The similarity of the spectrum to that of the Crab nebula suggests that the nebulosity is the remnant of a supernova rather than that of an ordinary nova. The results thus give supplementary evidence that the nebulosity is really a remnant of Kepler's nova of 1604 and that this star was a supernova of type I_." As a result, it is important to note that Minkowski offers a classification based on spectroscopic data of Nova Ophiuchi. The second instance _ii_) is concerned with a comparison of spectrophotometric data of Nova Ophiuchi with the Crab Nebula, where he notes similarities between the objects in their spectral characteristics, the main part of the emission of Nova Ophiuchi between 3500 Å (blue) and 5000 Å (red) being due to a continuous spectrum.
To complete this discussion of the work presented by Baade (1943), the author managed to identify the remnant of SN 1604 from a deep long-term exposure. To mitigate the effect of heavy galactic absorption, his observation made use of a red filter and indeed successfully revealed a small diffuse patch near the location measured by Kepler and contemporary astronomers in 1604. Modern X-ray observations confirm this result.
Figure 12: Light curve of SN 1604 (SN Ophiuchi) with data points derived from historical data. The solid line is the light curve of the supernova in IC 4182. © AAS. The figure was reproduced with permission from the work of Baade (1943).
### Baade's 1945 paper--Tycho's nova of 1572
After satisfactorily classifying SN 1604, Baade embarked on compiling historical data for Tycho's supernova SN 1572. The main objective was to investigate and determine whether SN 1572 was of type I or type II. The methodology used is similar to the SN 1604 classification and is mainly based on the Rosetta Stone light curve as observed for IC 4182. Following the work of Baade (1945), he wrote
_"It has been pointed out in a previous paper1 that B Cassiopeiae [SN 1572], the bright nova of 1572, was undoubtedly a supernova because of its amplitude, which exceeded 22 mag. The recent recognition of two types of Supernovae makes it desirable to decide whether the star was a supernova of type I or type II. The light-curve of the nova, derived in the present paper, clearly indicates a supernova of type I. Because it throws new light on the final state of a supernova, B Cas is of particular interest."_
Baade made some interesting remarks on the precision of the magnitude estimates for each of Tycho's observations. To achieve a mean error of 0.25 mag, Tycho must have--_"consciously or unconsciously"_ (Baade, 1945)--made use of the knowledge of the observed brightness (using Ptolemy's classes or magnitude system) of the nova during a certain time period. This only makes sense given that each class in Ptolemy's system spans a significant range in magnitudes, and hence, it must have been difficult to judge the correct brightness at a given time. Making use of a time span enables the inference of a mean estimate, which explains the low uncertainty of 0.25 mag; the statistical principle is sketched below. Whether this choice was made consciously or unconsciously, it is likely one of the first applications of minimizing uncertainties for random measurements.
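The principle at work is the familiar reduction of the random error of an average (written here for clarity; neither the number of estimates \(N\) nor the single-estimate scatter is specified by Baade, so the numbers are purely illustrative):

\[\sigma_{\bar{m}}=\frac{\sigma_{\rm single}}{\sqrt{N}},\]

so that averaging, say, \(N\approx 4\) by-eye estimates with a single-estimate scatter of 0.5 mag over a period of roughly constant brightness already brings the uncertainty down to the reported 0.25 mag level.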
In his discussion on the derivation of the type of supernova, Baade (1945) provided the following peculiar statement:
_The light-curve itself is typical of a supernova of type I as shown by comparison with the visual light-curves of two other supernovae of this class, SN Oph (1604) [SN 1604] and SN in I.C. 4182_ (see Figure 1) [Figure 13 in the present work]. _As will be pointed out in a later paper, a very characteristic feature of supernovae of type I is the linear decrease in brightness which sets in at about 120 day after maximum and is characterized by a gradient of \(+0.0137\pm 0.0012\) mag. per day. With a gradient of \(+0.014\) mag. per day for the phase interval 120 d--460 d B Cas conforms to this pattern perfectly. The descent from the maximum is less steep in B Cas than in the recent SN in I.C. 4182, but it is quite evident from the data now available that there is some variation in the form of maxima of supernovae of type I, especially in their heights and widths._
Again, and as was the case for SN 1604, Baade (1945) compared Tycho's observations with the "Rosetta Stone" light curve of IC 4182, for which spectra were obtained and classified as a type I supernova. Parallel shifting of a best-fit curve to IC 4182 reveals similarities in the shape of the light curve, especially in the later part of the nova's brightness decline.
Here, we have found the Holy Grail that allowed Baade to conclude that SN 1572 was a supernova of type I. The methodology applied is the same as applied for SN 1604. Baade wrote
_"The light-curve itself is typical of a supernova of type I as shown by comparison with the visual light-curves of two other supernovae of this class, SN Oph (1604) [SN 1604] and SN I.C. 4182 [IC 4182]."_
The peculiar wording by Baade might suggest that the data derived from Tycho's observations were compared with two _distinctly different_ supernovae. However, this is not the case. Practically, Baade compared his SN 1572 data with the light curve found for IC 4182.
To follow the logic of reasoning and as discussed earlier, we retrace the findings obtained by Baade (1943). In this paper, Baade published historic data of SN 1604. He shifted the light curve of IC 4182 vertically (in brightness) and was able to match the historical data of Kepler's (SN 1604) supernova. In particular, Baade paid special attention to the linear trend approximately 100-500 days after the maximum which is similar for the three events. From this insight, and given that IC 4182 was classified as a type I supernova event (Minkowski, 1939; 1941), Baade (1943) drew the conclusion that SN 1604 must have been a type I supernova. Here, Baade applied the logic of _reasoning by analogy_ to be discussed in some more detail shortly.
Finally, the same type of reasoning was also applied to SN 1572 in the work of Baade (1945) where he compares the reconstructed historic light curve of SN 1572 with SN 1604 and IC 4182. They are all similar to each other, and since IC 4182 has a type I spectrum, therefore, following the logic of reasoning by analogy, we must conclude that SN 1572 was also a type I supernova. More modern and recent studies of SN 1572 identified this historic event as a type Ia supernova (the Minkowski type I class was later renamed to type Ia) based on the observation of light echoes.
The "reasoning-by-analogy-trick" that Baade (1943) applied to SN 1604 by shifting the light curve for IC 4182 in the vertical direction might at first give rise to point at a possible source of a "flaw in logic" or the application of a false analogy. However, in this particular case, a shift in brightness (vertical shift) would physically mean a shift in distance. This means one object, in this case IC 4182, would be imaginatively moved to the distance of the other object such as SN 1604 or SN 1572. Therefore, we can now compare the two objects fairly allowing the application of the logic of reasoning by analogy. However, for this logic to work, one would have to assume that the two events IC 4182 and SN 1572 (SN 1604) must be identical prior to the supernova event in order to trace out similar evolutionary characteristics.
The aforementioned discussion was more or less pointed out by Anne Decourchelle (Decourchelle, 2017). She wrote _"The similar light curve profile of B Cas, compared to those of two type Ia SNe (the galactic SN Oph [Kepler's Supernova of 1604] and the extragalactic SN in IC 4182), argues for it being for a type I supernova, a characteristic feature being the linear decrease in brightness after about 120 days after maximum."_
## 6 Conclusion
In the present paper, we reviewed the early chronological developments of the field of supernova science with a focus on the historical identification of Tycho Brahe's supernova SN 1572 as a supernova of type I.
We have described how Tycho Brahe's own historical recordings from 1572 and onwards were essential in the identification and how the process coincided with the very early development of the field, just prior to World War II, by a handful of early supernova researchers.
However, we argue that the instrument often ascribed as the one used by Tycho Brahe in relation to his work on the supernova was in fact not used for his observation of SN 1572.
While we conclude here that the scientific importance of Tycho Brahe's recordings can hardly be overestimated, we also note that other historical supernovae have played an important role in the initial development of supernova science, such as Kepler's supernova SN 1604 and the Chinese supernova of 1054.
Additionally, we argue that the experience and knowledge following Tycho Brahe's 1572 and subsequent observations probably led to the detailed recording of the following supernova, SN 1604, attributed to Johannes Kepler.
Furthermore, as it turns out, the supernova associated with IC 4182 plays an absolutely fundamental role in identifying the type of SN 1572, acting as a kind of Rosetta Stone through reasoning by analogy: as we have shown here, the early supernova researchers equated supernova peak brightness with supernova type, which allowed the identification of SN 1572 on the basis of its light curve as derived from Tycho Brahe's observations, rather than on the basis of its unobtainable spectrum. In our review, we have demonstrated how this reasoning was part of early supernova science in the decade that followed the spectral identification of the two original types of supernovae.
## Author contributions
TH: writing-original draft. BD: conceptualization, investigation, methodology, project administration, resources, supervision, and writing-review and editing. LO: conceptualization, investigation, methodology, project administration, supervision, validation, and
writing-review and editing. JH: conceptualization, investigation, methodology, supervision, validation, and writing-review and editing.
## Acknowledgments
This research has made use of NASA's Astrophysics Data System Bibliographic Services [https://ui.adsabs.harvard.edu/](https://ui.adsabs.harvard.edu/). The idea behind this work was originally raised and proposed by Bertil Dorch. TCH acknowledges good leadership support by, and encouraging discussions with, Jens Dam during the (at times) challenging period of writing the manuscript. Also the authors would like to acknowledge Prof. M. Turatto for permission of reproducing Figure 11. The authors would like to acknowledge Prof. M. Pilar Ruiz Lapuente for fruitful discussion within the field of the early developments of research within novae and supernovae in the 1930s. Also the authors would like to express special thanks to Prof. Virginia Trimble (University of California, Irvine) for in-depth clarification on the historical development of supernova science. The first author dedicates this paper to his mom, Melke Sasse, who passed away in December 2022 - forever missed, never forgotten.
## Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
## Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
|
2309.03612 | A high sensitivity Cherenkov detector for Prompt Gamma Timing and Time
Imaging | We recently proposed a new approach for the real-time monitoring of particle
therapy treatments with the goal of achieving high sensitivities on the
particle range measurement already at limited counting statistics. This method
extends the Prompt Gamma (PG) timing technique to obtain the PG vertex
distribution from the exclusive measurement of particle Time-Of-Flight (TOF).
It was previously shown, through Monte Carlo simulation, that an original data
reconstruction algorithm (Prompt Gamma Time Imaging) allows to combine the
response of multiple detectors placed around the target. In this work we focus
on the experimental feasibility of PGTI in Single Proton Regime (SPR) through
the development of a multi-channel, Cherenkov-based PG detector with a targeted
time resolution of 235 ps (FWHM): the TOF Imaging ARrAy (TIARA). The PG module
that we developed is composed of a small PbF$_{2}$ crystal coupled to a silicon
photoMultiplier to provide the time stamp of the PG. This prototype was tested
with 63 MeV protons delivered from a cyclotron: a time resolution of 276 ps
(FWHM) was obtained, resulting in a proton range sensitivity of 4 mm at
2$\sigma$ with the acquisition of only 600 PGs. A second prototype was also
evaluated with 148 MeV protons delivered from a synchro-cyclotron obtaining a
time resolution below 167 ps (FWHM) for the gamma detector. Moreover, using two
identical PG modules, it was shown that a uniform sensitivity on the PG
profiles would be achievable by combining the response of gamma detectors
uniformly distributed around the target. This work provides the experimental
proof-of-concept for the development of a high sensitivity detector that can be
used to monitor particle therapy treatments and potentially act in real-time if
the irradiation does not comply to treatment plan. | Maxime Jacquet, Saba Ansari, Marie-Laure Gallin-Martel, Adélie André, Yannick Boursier, Mathieu Dupont, Jilali Es-smimih, Laurent Gallin-Martel, Joël Hérault, Christophe Hoarau, Johan-Petter Hofverberg, Daniel Maneval, Christian Morel, Jean-François Muraz, Fabrice Salicis, Sara Marcatili | 2023-09-07T10:11:21Z | http://arxiv.org/abs/2309.03612v1 | # A high sensitivity Cherenkov detector for Prompt Gamma Timing and Time Imaging
###### Abstract
We recently proposed a new approach for the real-time monitoring of particle therapy treatments with the goal of achieving high sensitivities on the particle range measurement already at limited counting statistics. This method extends the Prompt Gamma (PG) timing technique to obtain the PG vertex distribution from the exclusive measurement of particle Time-Of-Flight (TOF). It was previously shown, through Monte Carlo simulation, that an original data reconstruction algorithm (Prompt Gamma Time Imaging) allows to combine the response of multiple detectors placed around the target. The sensitivity of this technique depends on both the system time resolution and the beam intensity. At reduced intensities (Single Proton Regime - SPR), a millimetric proton range sensitivity can be achieved, provided the overall PG plus proton TOF can be measured with a 235 ps (FWHM) time resolution. At nominal beam intensities, a sensitivity of a few mm can still be obtained by increasing the number of incident protons included in the monitoring procedure.
In this work we focus on the experimental feasibility of PGTI in SPR through the development of a multi-channel, Cherenkov-based PG detector with a targeted time resolution of 235 ps (FWHM): the TOF Imaging ARrAy (TIARA). Since PG emission is a rare phenomenon, TIARA design is led by the concomitant optimisation of its detection efficiency and Signal to Noise Ratio (SNR). The PG module that we developed is composed of a small PbF\({}_{2}\) crystal coupled to a silicon photoMultiplier to provide the time stamp of the PG. This module is currently read in time coincidence with a diamond-based beam monitor placed upstream the target/patient to measure the proton time of arrival. TIARA will be eventually composed of 30 identical modules uniformly arranged around the target. The absence of a collimation system and the use of Cherenkov radiators are both crucial to increase the detection efficiency and the SNR, respectively.
A first prototype of the TIARA block detector was tested with 63 MeV protons delivered from a cyclotron: a time resolution of 276 ps (FWHM) was obtained, resulting in a proton range sensitivity of 4 mm at 2\(\sigma\) with the acquisition of only 600 PGs. A second prototype was also evaluated with 148 MeV protons delivered from a synchro-cyclotron obtaining a time resolution below 167 ps (FWHM) for the gamma detector. Moreover, using two identical PG modules, it was shown that a uniform sensitivity on the PG profiles would be achievable by combining the response of gamma detectors uniformly distributed around the target.
This work provides the experimental proof-of-concept for the development of a high sensitivity detector that can be used to monitor particle therapy treatments and potentially act in real-time if the irradiation does not comply with the treatment plan.
## Introduction
Protons have a very peculiar dose deposition profile compared to X-rays, with a sharp maximum at the end of their range (called the Bragg peak), a limited entrance dose and nearly zero dose deposition after the Bragg peak [1]. These features seem ideal for delivering highly conformal irradiations with a reduced number of fields and the highest selectivity. However, while the physical advantage of Proton Therapy (PT) is widely acknowledged, PT is still far from reaching its full potential. One of the most compelling open issues in PT is the reduction of irradiation uncertainties [2, 3]. The main reason for this holdup comes from the objective technical complexity of predicting and verifying the proton path in the patient.
Several imaging devices, derived from classical nuclear medicine imaging modalities, have been proposed in the past to monitor the proton range _in vivo_ [4, 5, 6, 7, 8, 9, 10], while other, more specific approaches do not provide the full PG profile (e.g. prompt gamma spectroscopy [11], PG integral counts [12, 13]). The existing methods have been extensively described in multiple reviews [14, 5, 3]. All these approaches exploit the spatial and/or temporal and/or energetic correlation of secondary particles emitted as a result of proton interactions with the biological tissue [3, 16]. Among them, those based on the detection of Prompt Gamma (PG) rays resulting from non-elastic nuclear collisions in the patient are of particular interest to achieve a real-time measurement of the proton range and to allow stopping/correcting the treatment procedure from its very beginning in case a deviation from the treatment plan is detected. Thanks to the prompt nature of these emissions, it is in principle possible to accomplish a statistically significant measurement of the proton range from the first irradiation spot. However, since PG emission is a rare phenomenon (the PG yield is \(\lesssim\)10\({}^{-2}\) PGs/proton/cm, with significant variations among models [17]), the development of high sensitivity PG detectors is crucial to obtain real-time information. The required sensitivity can only be achieved by concurrently improving the intrinsic resolution of the measurement at the single event scale, and increasing the system detection efficiency to boost the measurement statistics.
With these goals in mind we have recently proposed a new modality to measure the PG vertex distribution _in vivo_: Prompt Gamma Time Imaging (PGTI) [18]. With PGTI, the overall Time-Of-Flight (TOF) of the proton (\(T_{p}\)) followed by the PG (\(T_{PG}\)) is first measured, as in the conventional PG Timing (PGT) approach [19, 20]. Then, as both \(T_{p}\) and \(T_{PG}\) depend on the PG vertex coordinates \(\mathbf{r}_{v}\), the PGTI reconstruction algorithm retrieves the latter from the following equation:
\[TOF=T_{p}(\mathbf{r}_{v})+T_{PG}(\mathbf{r}_{v},\mathbf{r}_{d}) \tag{1}\]
where \(TOF\) is experimentally measured and \(\mathbf{r}_{d}\) are the PG hit coordinates at the detector level, or the detector position coordinates if the detector is small enough. Briefly, PGTI makes it possible to convert the measured TOF distribution into a spatial distribution, by performing an event-by-event deconvolution of the PG TOF \(T_{PG}(\mathbf{r}_{v},\mathbf{r}_{d})\). The direct consequence of this approach is to enable the readout of multiple detectors evenly distributed around the target: a capability that can be exploited to build a monitoring system that is sensitive to proton beam deviations both along the beam axis and in the transverse plane. The expected sensitivity of this technique has already been evaluated through Monte Carlo (MC) simulation [18] for different beam intensities.
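As an illustration of this event-by-event deconvolution, the following minimal sketch (not the code used in this work) inverts equation 1 for the vertex depth along the beam axis, under the simplifying assumptions of a one-dimensional beam, a point-like detection module and a tabulated proton arrival-time curve \(T_{p}(z)\); the constant proton velocity and all numerical values are placeholders.

```python
import numpy as np

C_MM_PER_PS = 0.2998                      # speed of light (mm/ps)

def reconstruct_vertex(tof_ps, det_pos_mm, z_grid_mm, tp_ps):
    """Solve equation 1, TOF = T_p(z) + T_PG(z, r_d), for the vertex depth z."""
    dx, dy, dz = det_pos_mm               # PG module centre (mm)
    t_pg = np.sqrt(dx**2 + dy**2 + (dz - z_grid_mm)**2) / C_MM_PER_PS
    total = tp_ps + t_pg                  # model TOF(z); monotonic since v_p << c
    if not (total[0] <= tof_ps <= total[-1]):
        return None                       # event outside the reconstructable range
    return float(np.interp(tof_ps, total, z_grid_mm))

# Illustrative usage with a constant-velocity proton model (a rough placeholder)
z = np.linspace(0.0, 40.0, 401)           # depths along the beam axis (mm)
t_p = z / (0.33 * C_MM_PER_PS)            # proton arrival times at ~0.33 c (ps)
detector = (140.0, 0.0, 35.0)             # module ~14 cm away from the beam axis (mm)
print(reconstruct_vertex(700.0, detector, z, t_p))
```

In practice, \(T_{p}(z)\) would be taken from simulation or from a range-energy relation, and the same inversion is applied to every module using its own coordinates \(\mathbf{r}_{d}\), which is what makes the responses of detectors at different angular positions directly summable in the space domain.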
A PGT-based detector naturally offers a high detection efficiency (of the order of 10\({}^{-3}\)) as no collimation system is needed, and it is therefore a good candidate for real-time monitoring. In order to push the system TOF resolution, and therefore its spatial resolution, we have proposed to use a fast, diamond-based beam monitor operated in single proton regime [21] to tag in time each incident proton separately. Such a system, read-out in time coincidence with a conventional gamma detector, has already allowed us to reach TOF resolutions of 237 ps (FWHM) in a previous experiment with 68 MeV protons [22].
Our current focus is to develop a fast gamma detector dedicated to PGTI [23]. The TIARA (TOF Imaging ARrAy) detector will be composed of approximately 30 independent modules evenly distributed around the target to achieve 4\(\pi\) coverage. Each module will be composed of a \(\sim\)1 cm\({}^{3}\) monolithic Cherenkov radiator (PbF\({}_{2}\)) read by Silicon PhotoMultipliers (SiPM). This system is designed to allow the measurement of the PG time of arrival with excellent time resolution, and its hit position with a spatial resolution limited to the Cherenkov crystal size. The use of a pure Cherenkov radiator offers several advantages compared to the use of more conventional scintillation detectors. The Cherenkov emission process is inherently faster than scintillation, which favours temporal resolution. At the same time, Cherenkov radiators generally have higher effective Z than scintillators, which improves photon interaction probability. Nevertheless, we will show that their greatest advantage lies in their relative insensitivity to neutrons, which sets a natural cut-off on one of the largest sources of background in proton-therapy monitoring, therefore increasing the Signal to Noise Ratio (SNR) of the measurement and reducing signal pile-up.
We have shown by MC simulation how this detector can be operated in three different regimes depending on the beam intensity [18]. In Single Proton Regime (SPR), the beam intensity is reduced during the first irradiation spot to approximately 1 proton per bunch. Under this regime, the statistics available for the proton range measurement is limited, but the excellent time resolution achievable (of the order of 235 ps FWHM) ensures a measurement sensitivity of the order of 1 mm at 2\(\sigma\) for 10\({}^{8}\) protons of 100 MeV and a simulated detection efficiency of 0.6%. Alternatively, a sensitivity of 3 mm at 2\(\sigma\) was also predicted for 10\({}^{7}\) incident protons.
At nominal beam intensity (from \(\sim\)2000 to \(\sim\)2 million protons every 16 ns at isocenter [37] for the clinical accelerator used in this work), the time resolution is ultimately limited by the bunch time-width of the proton beam as it is impossible, with the current beam monitor, to establish which proton in the bunch has generated the detected PG. Nevertheless, the loss of time resolution is substantially compensated by the increased measurement statistics, and sensitivities of a few mm can still be reached, depending on the number of PGs included in the monitoring procedure: in our previous work, we estimated a proton range sensitivity of 2 mm (at 2\(\sigma\)) for 10\({}^{9}\) incident protons [18].
Finally, at very high intensities, a one-value measurement of the proton beam displacement can still be obtained by computing the centre of gravity of the TIARA detection modules. With this approach, no assumption needs to be made on the detector time resolution, as only the counts registered in each module (and their absolute positions) are needed in the reconstruction formula. A sensitivity of 2 mm (at 2\(\sigma\)) on the proton beam lateral displacement was estimated for 10\({}^{8}\) incident protons, whereas the method is less sensitive to distal proton range shifts.
The present work completes our previous simulation studies by confirming the hypotheses made and demonstrating the experimental feasibility of PGTI in SPR. In this case, the reduced intensity regime makes it possible to characterise the inherent performance of the detection module, which would otherwise be affected by the time properties of the accelerator. Through the results obtained in two experiments carried out with 63 and 148 MeV protons, the performance of the detector in terms of time resolution will be presented in the first section. We will also show that the gamma-ray energy measurement can be disregarded when using extremely fast detectors, and how a millimetric sensitivity on the proton range measurement can be achieved with an unprecedentedly low PG statistics. In the second section, we will show the advantages of performing PGTI with detectors placed at multiple angular positions with the aim of reaching a uniform sensitivity in the whole field of view. Finally, the different sources of background affecting our Cherenkov-based detector will be discussed in the third section.
## Results
### Detector characterisation with 63 MeV protons
A detector module based on a 1\(\times\)1\(\times\)1 cm\({}^{3}\) PbF\({}_{2}\) Cherenkov radiator coupled to a 3\(\times\)3 mm\({}^{2}\) MPPC from Hamamatsu (S13360-3050CS) read by a commercial preamplifier from Advansid [[https://advansid.com/products/product-detail/asd-ep-eb-pz](https://advansid.com/products/product-detail/asd-ep-eb-pz)] was conceived at LPSC (Laboratoire de Physique Subatomique et Cosmologie). The module was tested at the CAL (Centre Antoine Lacassagne) MEDICYC facility [24] with 63 MeV protons delivered in bunches of 4 ns duration with a 40 ns period.
A first cylindrical PMMA target (density = 1.19 g/cm\({}^{3}\)) of 0.5 cm (or 1 cm) thickness and 10 cm radius (cf. figure 1) was employed to characterise the module energy and time response, followed by a cylindrical PMMA block of 23 cm thickness (enough to stop the beam) and 10 cm radius, placed at 3 cm distance from the thinner PMMA target. A single crystal (sc) diamond detector of \(4.5\times 4.5\times 0.55\) mm\({}^{3}\) volume from Element6 [[https://e6cvd.com/](https://e6cvd.com/)] was used as a beam monitor to tag in time the incident protons. The diamond was read out on both faces using a commercial C2 preamplifier from Cividec [[https://cividec.at/electronics-C2.html](https://cividec.at/electronics-C2.html)]; the two signals were summed up for analysis in order to increase the SNR and therefore improve the time resolution, as proposed by Curtoni et al. [25]. An 8 mm thick, 2 mm diameter brass collimator was used to match the beam size to the limited diamond detection surface. The effective (after collimation) beam intensity reaching the beam monitor was estimated _a posteriori_ to amount to \(\sim\)0.025 p/bunch from Poisson statistics considerations (see the Methods section). A time resolution of 156 ps (FWHM) was previously measured for sc-diamonds in the same energy range [22] and similar experimental conditions. The gamma detector module was positioned on the side of the large PMMA target, at 14 cm from the beam axis and facing the Bragg-peak region in the thick target (orthogonal to the beam). All signals were acquired in time coincidence with the beam monitor and then digitally sampled using a HDO6104A-MS LeCroy oscilloscope with 1 GHz bandwidth, 10 GS/s and a 12 bit ADC. The analysis was performed offline. In order to obtain a perfect SPR, the small fraction of 2-proton events acquired was cut off during the analysis.
**Energy response.** Signals from the TIARA module were integrated to record the detector energy response, as shown in figure 2, left. On the right, the simulated energy distribution of PGs from 63 MeV protons impinging on a PMMA target is shown for comparison. The two spectra are clearly unrelated, as the TIARA module does not allow the PG incident energy to be established. As a result, none of the intense, characteristic PG emission lines visible in the right plot can be distinguished in the left plot. Indeed, because of the limited detection volume, the gamma-ray energy is not fully deposited in the detector. As a consequence, multiple interactions of mono-energetic gamma rays may result in a wide range of deposited energies. This phenomenon, combined with the relatively low light yield of Cherenkov radiators in the 2\(-\)10 MeV range, results in the typical SiPM single photo-electron (p.e.) spectrum shown in figure 2 (left), in which single Cherenkov photons are literally counted by the device.
An acquisition threshold of 6 p.e. was applied to the gamma detector signal in order to reduce the probability of triggering on SiPM dark counts. For the same purpose, the coincidence with the beam monitor was also imposed for data acquisition. A median number (\(N\)) of 7 p.e. was detected, corresponding to the number of Cherenkov photons detected by the SiPM per PG. The value of \(N\) indirectly affects the module time resolution as the SiPM contribution roughly scales as \(SPTR/\sqrt{N}\)[26], where \(SPTR\) is the intrinsic Single Photon Time Resolution of the SiPM.
The poor proportionality between incident and deposited PG energy mainly affects the determination of the acquisition threshold as, at low energies, the module detection efficiency dramatically depends on the PG incident energy. This can be estimated through MC simulation (see the Methods section). Neglecting the geometrical contribution, the intrinsic detection efficiency of a PG module as the one described in figure 1 depends on two contributions: the probability for the PG to interact in the crystal and the probability that this interaction produces more than \(N_{th}\) photoelectrons, with \(N_{th}\) being the threshold in p.e. applied for data acquisition. Figure 3 shows the detection efficiencies as a function of PG energy obtained by MC simulation for three different thresholds of 3, 6 and 9 p.e. It can be observed that the threshold does not provide a sharp energy cut-off. For example, using a threshold of 6 p.e. gives a 5% probability of detecting PGs with an energy of 2 MeV.
We will show in the next paragraphs that the lack of energy resolution does not compromise the proton range measurement sensitivity when a very high time resolution is achieved as long as the detection efficiency is properly taken into account.
**Coincidence Time Resolution.** The time stamps of both the diamond detector and the TIARA detection module have been recorded to obtain the proton plus gamma \(TOF\) variable in equation 1. The two diamond signals (one from each face) are first summed-up to increase their SNR, then a digital Constant Fraction Discrimination (CFD) with a 50% CF value is applied to both the PG and the diamond detectors. The resulting PG TOF distribution is presented in figure 4 for events corresponding to single protons in the beam monitor. This distribution is a measure of the system Coincidence Time Resolution (CTR): more precisely, its width results from the convolution of the system intrinsic time resolution and the proton transit time in the 5
Figure 1: Set-up of the experiment carried out at the MEDICYC facility with 63 MeV protons. A first version of the TIARA module, composed of a 1 cm\({}^{3}\) PbF\({}_{2}\) crystal coupled to a 3\(\times\)3 mm\({}^{2}\) SiPM and facing the Bragg peak region at 90\({}^{\circ}\), was tested in time coincidence with a sc diamond detector of \(4.5\times 4.5\times 0.55\) mm\({}^{3}\) volume. For the time and energy response characterisation, only PG signals from the thin PMMA target (cylinder of 10 cm radius) were considered: repeated measurements were carried out with either a 5 mm or a 1 cm thick target and the results were then averaged. The second target (cylinder of 10 cm radius and 23 cm thickness) was employed in the measurement of the proton range sensitivity; initially placed at 3 cm distance from the thin target, it was progressively moved by 2, 4, 6 and 10 mm in order to induce an artificial shift in the proton range. All the targets have a density of 1.19 g/cm\({}^{3}\).
Figure 2: The left plot shows the histogram of the energy deposited in the TIARA module (1 cm\({}^{3}\) PbF\({}_{2}\) crystal coupled to a 3\(\times\)3 mm\({}^{2}\) SiPM and placed at 90\({}^{\circ}\) with respect to the beam) expressed as the integral of the SiPM signal. PGs are generated by 63 MeV protons impinging on the two PMMA targets described in figure 1. The first peak in the histogram corresponds to 6 p.e.; the 7, 8, 9 p.e. peaks are also visible: a median number of 7 p.e. per PG was detected. The right plot shows the expected energy spectrum of PGs obtained by MC simulation for comparison. The simulation is performed with the Geant4.10.4.p02 toolkit implementing the QGSP-BIC-EMY physics list and reproducing the PMMA target and beam parameters used in the experiment.
mm PMMA target. The latter can be approximated by a uniform distribution of \(\sim\)51 ps width (from Geant4 simulations). After deconvolution of the proton transit time, and the suppression of the flat background, the best system time resolution is estimated to be 268 \(\pm\) 32 ps (FWHM) for the data in figure 4. The same analysis was carried out independently on three TOF distributions acquired with either the 5 mm or the 1 cm thick targets; the averaged time resolution obtained from these repeated measurements is 315 \(\pm\) 40 ps (FWHM). The CTR can also be interpreted as the quadratic sum of the beam monitor's and the PG detector's time resolutions, under the hypothesis that the two contributions are independent and Gaussian. Since the diamond time resolution was measured to be 156 ps (FWHM) in similar conditions [22], the TIARA detection module time resolution can be estimated to be approximately 276 ps (FWHM). Therefore, for this detection module design and for 63 MeV protons, the system CTR is dominated by the PG time measurement.
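The quadrature bookkeeping behind these estimates can be reproduced with the short sketch below, which assumes independent Gaussian contributions plus a uniform transit-time term; small differences with respect to the quoted \(\approx\)276 ps come from rounding and from the exact deconvolution procedure applied to the data.

```python
import math

def fwhm_to_sigma(fwhm):
    return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def sigma_to_fwhm(sigma):
    return sigma * 2.0 * math.sqrt(2.0 * math.log(2.0))

ctr_fwhm      = 315.0   # averaged proton + PG TOF width (ps, FWHM)
transit_width = 51.0    # proton transit time in the thin target (ps, uniform)
diamond_fwhm  = 156.0   # beam monitor resolution from a previous experiment (ps, FWHM)

module_var = (fwhm_to_sigma(ctr_fwhm) ** 2
              - transit_width ** 2 / 12.0            # variance of a uniform term
              - fwhm_to_sigma(diamond_fwhm) ** 2)
print(f"PG module resolution ~ {sigma_to_fwhm(math.sqrt(module_var)):.0f} ps FWHM")
```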
**Study of PGT sensitivity.** The 1 cm thick PMMA target was placed downstream of the diamond detector (cf. figure 1) with the thick target positioned at 3 cm distance. This configuration (considered as the reference geometry) was employed to simulate an air cavity heterogeneity in a uniform anatomy. The distance between the two targets was then progressively increased by 2, 4, 6 and 10 mm to reproduce an unpredicted variation in the anatomy and consequently a shift in the proton range. For each shift (0, 2, 4, 6 and 10 mm), the TOF distribution between the proton beam monitor and the PG detector was recorded. Figure 5a shows, as an example, the TOF histograms measured at 3 cm (0 mm shift) and 4 cm (10 mm shift) target-to-target distance. The two distributions are clearly separated: the distal fall-off of the 4 cm shift distribution is displaced
Figure 4: TOF distribution obtained with the 5 mm thick, 10 cm radius PMMA target irradiated by 63 MeV protons. The PG detector consisted of a 1 cm\({}^{3}\) PbF\({}_{2}\) crystal coupled to a 3\(\times\)3 mm\({}^{2}\) SiPM and placed at 90\({}^{\circ}\) with respect to the beam. Data are fitted with a gaussian distribution convolved with a uniform distribution of 51 ps width. The resulting FWHM of 268 ps (114 ps rms) corresponds to the gaussian distribution FWHM and it can be interpreted as the system CTR.
Figure 3: Intrinsic detection efficiency (neglecting geometrical efficiency) of a PG detection module composed of a 1 cm\({}^{3}\) PbF\({}_{2}\) crystal coupled to a 3\(\times\)3 mm\({}^{2}\) SiPM. The three functions were obtained from MC simulation using the Geant4.10.4.p02 toolkit with the QGSP-BIC-EMY physics list to establish the PG interaction probability, while the UNIFIED model was applied to model the interactions of Cherenkov photons inside the crystal. Three different thresholds of 3, 6 and 9 p.e. were considered.
towards higher TOFs according to the air cavity thickness introduced between the targets. The flat background contribution, due to the random coincidences between SiPM dark counts and protons interacting in the diamond, is subtracted before analysis, and only 1-proton signals in the beam monitor are considered. After background rejection, each of the 5 TOF distributions includes around 600 PG events. For this experiment, it is not possible to establish the number of protons delivered as the scope dead time when writing the waveforms' values on disk considerably affects the total acquisition time. Nevertheless, previous MC simulations [18] allow us to estimate that 600 PG events would correspond approximately to 2 \(\times 10^{6}\) incident protons for the full 30-channels TIARA prototype.
The dashed curve in figure 5b represents the simulated PG profile in the reference geometry (0 mm shift), which was obtained taking into account the energy dependency of the PG module efficiency presented in figure 3 (see the Methods section). This curve is used as a reference to perform relative measurements of the proton range shift according to the approach previously described [18, 22] and summarised in the methods section. This methodology measures the proton range shift by exploiting only the distal region of the PG profiles, without making any assumption on the profiles' shape. The double Gaussian fits (continuous lines in figure 5, left) are only presented as an eye-guide and were not used for the analysis. In the right plot, an excellent agreement between the experimental and the simulated reference profiles can be observed, serving as a validation of the detector model developed in Geant4.
The measured shifts are reported in figure 6, left, as blue experimental data points, as a function of the implemented target shift.
The dashed red line represents the correlation between the measured time delay and the shift implemented in the geometry as obtained by MC simulation. Its slope amounts to 107 ps/cm, roughly corresponding to the proton speed at the exit of the
Figure 5: PGT profiles (top) and PGTI reconstructed profiles (bottom) obtained from the PMMA targets described in figure 1 irradiated with 63 MeV protons. In a) the experimental TOF profile obtained for the reference geometry (in blue) is compared to the one obtained after introducing a proton range shift of 1 cm (in red). The two histograms are fitted with a double Gaussian function to improve readability; the fit was not used for analysis. In b), the simulated reference profile (dashed line) is compared to the corresponding experimental data. In c) the two profiles shown in a) are reconstructed with the PGTI algorithm to convert them into the space domain. In d) the reconstructed simulated (dashed line) and experimental (continuous line) reference profiles, corresponding to data in figure b) are presented. The experimental profiles are obtained with a PG module composed of a 1 cm\({}^{3}\) PbF\({}_{2}\) crystal coupled to a 3\(\times\)3 mm\({}^{2}\) SiPM and placed at 90\({}^{\circ}\) with respect to the beam. For the simulation of the reference profile, we used the Geant4.10.4.p02 toolkit with the QGSP-BIC-EMY physics list and the UNIFIED model.
thin target (the simulated value amounts to 109 ps/cm). The orange and blue error bars respectively summarise the 1\(\sigma\) and 2\(\sigma\) experimental errors obtained by bootstrapping methods (see methods section). The experimental errors on these data indicate that the TIARA detection module is able to measure a 4 mm proton range shift at 2\(\sigma\) confidence level, from the exclusive measurement of TOF with a very low statistics of acquired PGs. The same data also show that a proton range sensitivity of 2 mm would be achievable at 1\(\sigma\) confidence level.
**Study of PGTI sensitivity.** The five TOF datasets acquired for the different target shifts are reconstructed on an event-by-event basis according to equation 1 and following the methodology described in [18]. With this procedure, the TOF distributions in figure 5a are converted into PG profiles in the space domain, providing straightforward means to directly measure the range shift in mm instead of ps (see figure 5c). In analogy to figure 5a, data are shown for the 3 and 4 cm air cavities. The time-to-space conversion is necessary to combine the response of multiple modules placed at different angular coordinates. Indeed, the PG TOF depends on the relative position between the PG vertex and the PG detector and it must be deconvolved before summing up TOF distributions obtained with different modules. In this work, only one TIARA detection module was available: our goal therefore was to demonstrate that PGTI can provide the same sensitivity as PGT.
The proton range shift was evaluated with respect to the simulated PGTI profile in reference conditions (dashed line in figure 5d) applying the same approach used for PGT distributions after background subtraction (see methods section). In analogy with the PGT analysis, the measured proton range shift is reported in figure 6, right, against the applied cavity shift. The 1\(\sigma\) and 2\(\sigma\) statistical errors are also shown as orange and blue error bars, respectively. The same proton range sensitivity is observed for PGTI and PGT: a 4 mm range shift could be distinguished at the 2\(\sigma\) confidence level. Nevertheless, an offset of 0.66 mm is present on the PGTI dataset, as highlighted by the dashed red line. This curve represents the expected correlation between the measured and the implemented range shift as obtained by MC simulations; here its intercept has been adjusted to the data points to properly estimate the range offset. This offset results from the propagation, during reconstruction, of the systematic errors made when measuring the coordinates of the TIARA detection module and those of the beam monitor. In the future, this error can be easily minimised by imaging the target (or the patient) and the experimental set-up with a CT scanner in order to establish the detectors' positions with sub-millimetric precision during the treatment planning phase. Despite this offset, the behaviour of the experimental points in figure 6, right, is still linear with a slope of 0.81, a value lower than one because the current reconstruction is biased by the \(T_{p}\) term (cf. equation 1) determined in reference conditions. This effect could be avoided with an iterative reconstruction approach, such as the one proposed by Pennazio et al. [36], which could establish offline the actual range shift. Nevertheless, in this work we focussed on an event-by-event reconstruction that can be implemented online during data acquisition, hence making it possible to promptly stop the treatment if a significant shift is detected. To achieve this goal, we do not need the actual value of the proton range shift, but only to establish whether a statistically significant discrepancy with respect to the treatment plan exists or not.
### Detector characterisation with 148 MeV protons
A second version of the detector module was realised to improve the detection efficiency without compromising the time resolution. It is composed of a 1\(\times\)1\(\times\)2 cm\({}^{3}\) PbF\({}_{2}\) crystal coupled to a 6\(\times\)6 mm\({}^{2}\) MPPC from Hamamatsu (S13360-6075CS) and read by a custom preamplifier based on the design of Cates et al. [27]. The module was tested with 148 MeV protons at the ProteusOne S2C2 synchrocyclotron at CAL. S2C2 delivers protons in micro-bunches of 16 ns period with a 50% duty cycle [28]: this micro-structure is embedded in a macrostructure characterised by 8 \(\mu\)s pulses every ms. In this paper, the 8 \(\mu\)s beam structure will be referred to as the proton pulse, whereas the 16 ns micro-structure will be referred to as the proton bunch. The beam profile is Gaussian with a measured standard deviation of 4.3 mm at 148 MeV [29]. The same sc-diamond detector used in the 63 MeV experiment, read on both sides using Cividec C2 preamplifiers, provided the time stamps for the incident protons. The effective intensity at the beam monitor level was arbitrarily set to a low value that was estimated _a posteriori_ to amount to \(\sim\)0.78 p/bunch on average.
In a first experiment (cf. figure 7, left), a 5 mm thick, 10 cm radius cylindrical PMMA target (density = 1.19 g/cm\({}^{3}\)) was employed in a configuration similar to the one used at the MEDICYC accelerator in order to measure the system CTR. The TIARA module was placed close to the target axis in order to increase the geometrical efficiency, at approximately 73\({}^{\circ}\). Signals from all detectors were digitally sampled using a Wavecatcher module [30] with 500 MHz bandwidth and a sampling rate of 3.2 GS/s. The acquisition was triggered by the coincidence of the two detectors within a 15 ns time window. The analysis was performed offline. The SPR was ensured by selecting only 1-proton events in the diamond detector. Scattered protons directly detected by the SiPM in the PG module were rejected by pulse-shape analysis as they produce longer signals than those associated with Cherenkov events in the crystal.
The energy response of the module is shown in figure 7, right: a median number of 21 p.e. is detected per PG event when applying an acquisition threshold of 100 mV (\(\sim\)10 p.e.). This improvement, with respect to the median value of 7 p.e. obtained
in the previous experiment (figure 2, left), is mainly due to the increased size of the SiPM that allows collecting a larger number of Cherenkov photons per PG event. The difference between the PG detector time stamp (obtained with a 5% CF threshold) and the diamond time stamp (obtained with a 50% CF threshold) was calculated to build the TOF distribution for the 5 mm target irradiation shown in figure 8. This distribution presents two clear components: i) a well resolved Gaussian peak (highlighted by the orange fit) that corresponds to PG events generated by protons crossing the diamond detector, and ii) a broad background (highlighted by the green fit) associated with PG events from protons passing by the beam monitor.
The first peak directly provides a measurement of the system CTR with a value of 397 ps (FWHM). This CTR can also be
Figure 6: Range shift sensitivities obtained with a PMMA target including an air cavity ranging from 3 to 4 cm, and irradiated with 63 MeV protons. On the left, the proton range shift measured with the PGT technique in units of time is compared to the actual shift implemented in the phantom. On the right, the PGTI reconstruction allows measuring the proton range shift directly in the space domain. The dashed red line corresponds to the theoretical correlation between the implemented shift and the measured parameter as obtained from MC simulation using the Geant4.10.4.p02 toolkit with the QGSP-BIC-EMY physics list and the UNIFIED model. Error bars represent the 1\(\sigma\) (orange) and 2\(\sigma\) (blue) statistical errors obtained by the bootstrap technique (see the Methods section). With both techniques, the TIARA detection module (1 cm\({}^{3}\) PbF\({}_{2}\) crystal coupled to a 3\(\times\)3 mm\({}^{2}\) SiPM and placed at 90\({}^{\circ}\) from the beam direction) makes it possible to achieve a proton range shift sensitivity of 4 mm at 2\(\sigma\) with a statistics of approximately 600 PGs.
Figure 7: Left: experimental set-up for the CTR measurement carried out at the S2C2. A 5 mm thick, 10 cm radius PMMA target was used as a source of PGs to measure the CTR. Given the limited transit time of protons within the target, the PG source can be considered point-like. The beam size (4.3 mm \(\sigma\)) was larger than the diamond surface (4.5\(\times\)4.5 mm\({}^{2}\)) as no collimator was employed. The TIARA detection module was composed of a 2 cm\({}^{3}\) PbF\({}_{2}\) crystal coupled to a 6\(\times\)6 mm\({}^{2}\) SiPM and placed at approximately 73\({}^{\circ}\) from the beam direction. The effective beam intensity at the beam monitor level was 0.78 p/bunch. Right: energy deposited in the TIARA detection module expressed as the integral of the SiPM signal. A threshold of 10 p.e. was applied for data acquisition, resulting in a median number of 21 p.e. detected.
interpreted as the quadratic sum of the diamond and the PG detector time resolutions. In a previous experiment, the intrinsic time resolution of the diamond was measured to be 360 ps (FWHM) when reading out a single face of the detector. Under these conditions, the diamond time resolution is expected to be overestimated by 50\(-\)70 ps (FWHM) with respect to experiments in which signals from both sides are summed up for analysis [25]. Although this time resolution cannot be directly compared to the CTR measured in this work, it is clear that, for 148 MeV, the system CTR is now dominated by the beam monitor, with the TIARA detection module contributing less than 167 ps (FWHM). This effect is expected, as an increase in the proton energy corresponds to a decrease in the energy deposited in the diamond: \(\sim\)0.9 MeV is deposited in a 0.55 mm thick diamond by 148 MeV protons instead of \(\sim\)1.7 MeV for 63 MeV protons.
The background component in the TOF distribution of figure 8 is a direct effect of the limited size of the beam monitor. Since no collimator was used in this experiment, only a fraction of the beam (approximately 20%) traverses the diamond detector. This means that, while the effective beam intensity was estimated to 0.78 p/bunch at the beam monitor level, the actual beam intensity is of the order of 4.7 p/bunch. Therefore the acquisition of part of the events may be triggered by the coincidence of a PG (or another secondary particle) with a random proton that has not produced the gamma ray (or the secondary particle). In other words, the background distribution represents the random coincidences between the PG detector and the beam monitor: its shape is not flat, because the beam time-structure is periodic with a nominal bunch width of 8 ns and a nominal standard deviation of \(8/\sqrt{12}\) ns. The background could be fitted with a Gaussian distribution of 1.27 \(\pm\) 0.13 ns sigma, a value that is consistent with the S2C2 micro-bunch standard deviation. This hypothesis is confirmed by a separate, more direct measurement of the bunch width. A random coincidences TOF distribution was built between events from the PG module and the beam monitor triggering on the bunch arriving immediately before the 15 ns coincidence window. The distribution obtained, shown in figure 8, right, is compatible with the background in figure 8, left, with its mean value shifted by the micro-bunch period (16 ns). The standard deviation of this distribution is 1.17 \(\pm\) 0.04 ns, in agreement with the measurement in figure 8 left.
In a second experiment, the thin target was replaced by a 23 cm thick, 10 cm radius PMMA target to stop 148 MeV protons after a 13.4 cm range. Two identical gamma detectors were placed close to the beam entrance (position A, at 157\({}^{\circ}\)) and close to the Bragg peak (position B, at 90\({}^{\circ}\)), both aiming at the Bragg peak region, as shown in figure 9. During analysis, only 1-proton events were considered and scattered protons directly detected by the SiPM in the PG module were rejected by pulse-shape analysis. The TOF distributions measured for detectors A and B are presented in figure 10. The same effect described in figure 8 is visible here: because of the limited size of the beam monitor, a background associated with random coincidences is recognisable in both distributions. A second background component is present in the TOF ranges (\(-1.0\div 0.5\)) ns and (\(2.9\div 4.1\)) ns for the detector A and B distributions, respectively. These events are associated with scattered protons generating PGs in the gamma detector (most probably in the packaging) and will be discussed in detail in the next section. Nonetheless, the PG signal is still clearly detected between 0.5 and 3 ns for both detectors. Detector A is directed at the beam entrance while detector B is closer to the Bragg peak region. Thus, detector A has a larger solid angle for the measurement of PGs at the target entrance with a reduced efficiency for the Bragg peak region, whereas the opposite is true for detector B, explaining the different positions of the PG profiles maxima and their shapes. Traditionally, PGT detectors are placed at the beam entrance to take advantage of the overall longer TOF, thus increasing the TOF measurement sensitivity [19, 31], and to avoid scattered protons that
Figure 8: On the left, TOF distribution obtained from the irradiation of the 5 mm target. The gaussian peak with 397 ps (FWHM) corresponds to PGs generated by protons traversing the beam monitor; the large background signal corresponds to PGs generated by protons passing by the diamond detector. On the right, the TOF distribution obtained from random coincidences between the diamond and the PG detectors confirms the origin of the background on the left plot: it is a measure of the bunch-induced time resolution.
are mostly forward-directed. However, this approach limits the PG statistics acquired in the fall-off region where the proton range measurement is performed [20] and it ultimately reduces the technique sensitivity; while the profile fall-off is very sharp for detector B, the one obtained with detector A is not well defined. Combining the readings of multiple detectors placed at different angular positions around the target is therefore the only way to achieve a uniform statistical efficiency and a uniform sensitivity throughout the whole proton range. This could allow the measured PG profile to be further exploited in order to assess anatomy variations in patients [32], but it requires a dedicated reconstruction such as the one proposed by PGTI in order to appropriately sum up the response of multiple detectors. As a tangible example, TOF distributions acquired with detectors in positions A and B are non-uniformly shifted and stretched in the time domain, according to the variation of the PG TOF all over the proton range. While the proton transit time in the target is, by definition, the same for the two detectors, the PG TOF varies from 165 to 507 ps for detector A (342 ps of relative PG delay), and from 329 to 407 ps for detector B (78 ps of relative PG delay) for PGs generated at the target entrance and for those generated at the Bragg peak respectively: the two TOF distributions cannot be summed up in the time domain without causing blurring and a loss of resolution/sensitivity. Unfortunately, in this experiment, the presence of the broad background from random coincidences in the beam monitor prevented PGTI data reconstruction. In fact, the current PGTI algorithm needs the system time resolution as an input parameter for the reconstruction [18], and in the presence of a double component it is impossible, for a single event, to establish the associated time resolution. Our current effort is in the development of a larger area beam monitor in order to cover the whole beam surface [33].
### Background events in Cherenkov detectors
One of the hypotheses that has motivated the use of Cherenkov radiators for PG imaging is their insensitivity to neutrons. A PG detector is subject to three main sources of background noise: neutrons and neutron-induced gamma-rays that are time-correlated to the proton beam (i.e. originating in the nozzle or in the patient); neutrons and neutron-induced gamma-rays from the environment; and protons scattered in the patient/target and, in our case, the beam monitor.
From MC simulation [18], it has already been demonstrated that, for most of the particles falling in the first category, the contribution is not flat, but it increases at the fall-off of the PG profile (see the neutron contribution in figure 11, data from [18]). Thus, even if this component can be rejected by TOF selection [22], the fall-off of the PG profile will still be biased, with a direct consequence on the accuracy of the proton range measurement.
Conversely, non time-correlated background particles result by definition in a constant baseline in the TOF distribution. The absence of a collimator is pivotal in keeping the level of this environmental noise negligible. The exploitation of a threshold process such as Cherenkov emission offers additional means for neutron rejection. Secondary particles produced by neutron scattering are too massive to reach the critical speed for Cherenkov emission; the same is true for scattered protons. Neutron capture, instead, requires neutron thermalisation; even if a neutron were detected through this process, its slow speed would guarantee an effective TOF selection. Basically, the only possible source of background in the presence of a TOF-Cherenkov radiator comes from neutron-induced gamma rays that are not time-correlated, or only moderately correlated, to the proton beam.
The locally constant background measured in the test with 63 MeV protons (figure 5a) provides experimental evidence
Figure 9: Experimental set-up for the PGT profile measurement at the S2C2 facility. The 23 cm thick, 10 cm radius PMMA target fully stopped the 148 MeV protons after a range of 13.4 cm. Two gamma detector modules were placed upstream of the target (det A, at 157\({}^{\circ}\)) and at the Bragg peak (det B, at 90\({}^{\circ}\)). Each module was composed of a 2 cm\({}^{3}\) PbF\({}_{2}\) crystal coupled to a 6\(\times\)6 mm\({}^{2}\) SiPM. The effective beam intensity at the beam monitor level was 0.78 p/bunch.
that the noise is not time-correlated in the region of interest for PG monitoring. Its frequency (\(f_{noise}\)) was estimated as the integral counts in the constant background divided by the acquisition duration, resulting in \(f_{noise}\) = 1.46 \(\pm\) 0.02 Hz. Under the assumption that the noise is mostly due to the SiPM dark count rate (\(DCR_{SiPM}\)), \(f_{noise}\) should be equal to the rate of random coincidences \(DCR_{coinc}\) between proton triggers in the diamond (\(f_{dia}\)) and \(DCR_{SiPM}\). \(DCR_{coinc}\) can be estimated as:
\[DCR_{coinc}=f_{dia}\times DCR_{SiPM}\times 2\tau \tag{2}\]
where 2\(\tau\) is the coincidence window width of 20 ns. From the proton beam intensity of 0.025 \(\pm\) 0.002 p/bunch and the beam frequency of 25 MHz, \(f_{dia}\) = 625 \(\pm\) 50 kHz. The intrinsic dark count rate of the SiPM, instead, was measured on the bench for the same threshold level of 6 p.e., and then corrected for the temperature difference between the laboratory and the MEDICYC experimental room; a value \(DCR_{SiPM}\) = 302 \(\pm\) 219 Hz was estimated. Thus, according to equation 2, \(DCR_{coinc}\) = 3.8 \(\pm\) 3.1 Hz. Despite the large experimental errors involved in the evaluation of \(DCR_{coinc}\), its compatibility with \(f_{noise}\) = 1.46 \(\pm\) 0.02 Hz suggests that the detection of particles other than PGs, if it occurs at all, is certainly negligible in this time window.
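A minimal sketch of the estimate in equation 2, with the values quoted above:

```python
def random_coincidence_rate(f_dia, dcr_sipm, two_tau):
    """Accidental coincidence rate between two uncorrelated triggers (equation 2)."""
    return f_dia * dcr_sipm * two_tau

f_dia   = 625e3    # proton trigger rate in the diamond (Hz)
dcr     = 302.0    # SiPM dark count rate above the 6 p.e. threshold (Hz)
two_tau = 20e-9    # coincidence window 2*tau (s)

# ~3.8 Hz, to be compared with the measured f_noise = 1.46 Hz
print(random_coincidence_rate(f_dia, dcr, two_tau))
```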
A further proof of this observation is given by data presented in figure 8, left. In the experiment with 148 MeV protons, the PG module included a 6\(\times\)6 mm\({}^{2}\) SiPM that ensured a better optical photon collection efficiency with respect to the 3\(\times\)3 mm\({}^{2}\) SiPM
Figure 11: Vertex distributions of different secondary particles generated by a 100 MeV proton beam impinging on the spherical phantom head described in Jacquet et al. [18]. Data are obtained by MC simulation (Geant4.10.4.p02 toolkit with the QGSP-BIC-EMY physics list); the detector is not simulated. The contribution of PG scattered in the phantom is reported separately. It can be observed that their profile has the same shape as the one from unscattered PGs and therefore they constitute a valuable signal. The contribution of secondary neutrons (in red) is not constant and cannot be easily rejected by TOF without compromising the measurement of the PG profile fall-off. Data are taken from Jacquet et al. [18].
Figure 10: TOF distributions obtained with the thick PMMA target and with detector A (in blue) placed at 157\({}^{\circ}\) and detector B (in red) placed at 90\({}^{\circ}\) from the beam direction. The relevant signal is in the region between 0.5 and 3 ns. Outside this region, the background is mainly due to random coincidences caused by the limited size of the beam monitor. The two bumps located at (\(-1.0\div\) 0.5) ns and (2.9 \(\div\) 4.1) ns for the detector A and B distributions, respectively, are associated with protons scattered in the beam monitor.
used in the 63 MeV experiment. This allowed a higher threshold of 10 p.e. to be set, making the level of SiPM dark count rate negligible with respect to the data acquisition rate. The histogram in figure 8, left was built with a 15 ns coincidence window, but no data was acquired outside the 8 ns window defined by the bunch width. In summary, when the SiPM threshold is set high enough to cut-off the dark count contribution, the Cherenkov-based gamma detector is rather insensitive to neutron-associated background, and therefore the optimum candidate for the construction of a high sensitivity, fast PG detector.
Still, one last source of background remains in the collected data. Figure 10 showed two bumps in the TOF distributions, respectively in the ranges (\(-1.0\div 0.5\)) ns and (\(2.9\div 4.1\)) ns for detectors A and B. The origin of these events can be understood by TOF considerations: they cannot originate from the target since the upstream detector (det A) measures for them a TOF smaller than the one measured for PGs originating from the target. Since these particles cannot be faster than PGs, they must come from upstream of the target. The TOF of these background events is rather well defined (suggesting a localized spatial origin) and it is compatible (for both the upstream and the downstream detectors) with protons directly travelling from the diamond detector to the PG module. Moreover, they appear at the same time-position as protons that are directly detected by the SiPM; the latter have a very particular shape and they can be easily identified (and rejected) by pulse-shape analysis. However, since protons of such energies cannot trigger Cherenkov production in the PbF\({}_{2}\), they must be converted locally into PGs in order to maintain their TOF coherence. In summary, our hypothesis is that these contributions are due to protons scattering in the diamond PCB and interacting in the Cherenkov radiator holder made of Polyoxymethylene (\((CH_{2}O)_{n}\)). The chemical composition of Polyoxymethylene is very close to that of PMMA, and results in the local production of PGs that are indistinguishable from the actual signal. Currently this effect cannot be rejected in the data, but it can be avoided in the future by improving the crystal holder design (or removing it altogether). We do not expect the PG conversion to take place in the crystal itself as we have not observed this effect in the 63 MeV experiment, for which the crystal was simply wrapped in black tape, with no holders. Still, this hypothesis requires further experimental verification. In light of this observation, it should also be noted that the scattered proton background must also affect the data in figure 8.
## Discussion
Prompt gamma emission is a rare physical phenomenon; therefore, increasing detector sensitivity is of utmost importance to achieve treatment monitoring in real time with PG-based systems. Being able to measure the proton range _in vivo_ with a limited PG statistics essentially means that the treatment can be verified very quickly at its very beginning, avoiding unwanted over-irradiation of the healthy tissue.
Our work focuses on two main aspects: proposing a new PG imaging technique (PGTI) to improve the proton range measurement sensitivity, and conceiving an innovative PG imaging detector with high detection efficiency and high time resolution. The potential performance of PGTI has been discussed in a previous paper for different operating conditions. Here we focussed on the experimental feasibility of the PGTI technique in SPR in order to characterise the inherent performance of the proposed detection module. The SPR was realised in manual delivery mode since the clinical settings do not currently allow for such low intensities as they have no clinical applications yet. For the same reason, the dose delivery algorithm does not currently offer the possibility of switching from SPR to clinical intensity. While no technical barriers were identified in the delivery of the SPR intensity, either in terms of feasibility or of operation speed, it was observed that the SPR intensity is too low compared to the S2C2 ionisation chambers' sensitivity. This would most probably require, in the future, the development of a dose monitoring system (e.g. a diamond beam monitor) dedicated to SPR as well as new delivery software in addition to the current IBA "blind golfer" algorithm.
For 63 MeV protons we achieved, in SPR, a proton range sensitivity of 4 mm (at 2\(\sigma\)) with an unprecedentedly low statistics of only 600 PGs. This value confirms the 3 mm (at 2\(\sigma\)) predicted by MC simulation [18] with 3000 PG events acquired. From these simulations, it can also be estimated that 600 PGs would correspond to 2 \(\times\) 10\({}^{6}\) incident protons for the full 30-channels TIARA prototype (0.6% detection efficiency). This result paves the way to the use of the TIARA detector at the very beginning of the session to position the patient and/or verify the most critical spot(s) while reducing the beam intensity to \(\sim\)1 p/bunch. The duration of this monitoring procedure varies according to the time characteristics of the accelerator employed. For an accelerator such as the MEDICYC cyclotron, delivering 10\({}^{7}\) protons in SPR would require about 0.63 seconds according to theoretical calculations, whereas this time would be longer (31.6 seconds) for the S2C2 synchrocyclotron. The SPR approach should therefore be considered part of the patient set-up procedure (of the order of 15 minutes in the clinical practice) rather than part of the treatment itself, for which PGTI should rather be implemented at nominal intensities (e.g. from \(\sim\)2000 to \(\sim\)2 million p/bunch for the S2C2).
In the experiment with 148 MeV protons, we could compare the TOF distributions obtained with the detector in two different positions. We qualitatively showed that using multiple detector configurations is pivotal to obtaining a uniform and increased detection efficiency (and eventually sensitivity) throughout the proton range. This requires the use of a dedicated reconstruction algorithm such as PGTI in order to correct for the non-linearities introduced by the PG TOF term. This correction may not seem necessary when using conventional gamma detectors but, when using a detection system with 235 ps (FWHM) time resolution,
it is essential in order to fully exploit its potential precision.
PGTI therefore goes hand in hand with the development of new detectors with optimised time resolution and detection efficiency. The most recent gamma detection module, composed of a 2 cm\({}^{3}\) PbF\({}_{2}\) and a 6\(\times\)6 mm\({}^{2}\) MPPC, has shown a time resolution below 167 ps (FWHM) for 148 MeV proton irradiations. With a larger photodetector surface compared to the previous module, it was possible to set a higher detection threshold, thus ensuring that no dark count events were acquired. A high detection efficiency is guaranteed by the lack of a collimation system and by the optimisation of the SNR as Cherenkov radiators are rather insensitive to background particles (mainly neutrons). We are currently working on the detector packaging optimisation to avoid the detection of scattered protons and conceiving a mechanical system to hold multiple modules all around the patient. The latter should be able to cope with the patient table, by either building a sort of helmet for the patient, or placing some of the detectors behind the table. This second approach is possible as PGT/PGTI is not very sensitive to Compton scattering: scattered PGs are only very slightly delayed (a few ps at worst) and they maintain their temporal coherence (cf. figure 11).
This work was carried out under the hypothesis that every single proton could be tagged in time by selecting only 1-proton signals during the analysis. Experimentally, and with the current design of our beam monitor, this would require lowering the beam intensity to less than one proton per bunch [21] in order to minimise the probability of 2- and 3-proton events; an approach that would further increase the duration of the monitoring procedure. In order to overcome this limitation, different solutions (software and hardware) are under investigation to tag in time each proton in the bunch. We are conceiving a dedicated algorithm that exploits the increased rise time and the different shapes of 2-, 3-, 4-proton signals to extract separate time stamps. The precision of these time stamps would be worse than those obtained for 1-proton signals (397 ps FWHM at S2C2), but they would still be more precise than the 2.7 ns FWHM CTR expected at nominal intensity for both PGT and PGTI. At the same time, we are developing a large area, multi-channel diamond-based beam monitor [21, 33] that would not only make it possible to overcome the size limitation of our current prototype, but also to tag in time multiple protons' signals with the same precision as single protons [33, 34]. The combination of a multi-channel monitor and a dedicated time tagging algorithm could result in a further extension of the beam intensity for the "single proton" regime. Still, once the protons' time stamps are available, a dedicated algorithm should be developed to iteratively (or through ML) identify the specific proton that has produced the detected PG. As a result of this procedure, some events will have a degraded time, and therefore space, resolution. A new assessment of the technique sensitivity will therefore be necessary in this scenario, but we expect, by design, to perform better than either PGT or PGTI at nominal intensities.
Finally, it should be kept in mind that the SPR is a possibility, not a requirement for PGTI and for our detector. Our approach would make it possible to perform an _in-vivo_ control of the patient set-up at the beginning of the treatment by verifying his/her anatomy, and then it could be used during the whole treatment at nominal intensities with performance that is not worse than PGT but with the advantage of employing Cherenkov detectors. At clinical intensity (e.g. from \(\sim\)2000 p/bunch for the S2C2), the loss in time resolution could be compensated by the increased acquisition statistics (see Jacquet et al. [18] for details) to achieve similar sensitivities in the proton range measurement. At even higher intensities (the maximum intensity achievable with S2C2 is \(\sim\)36\(\times\)10\({}^{6}\) p/bunch), Cherenkov radiators offer very good perspectives to sustain high count rates. The time-scale of the Cherenkov process is of the order of a ps (to be compared to tenths of ns at best for conventional scintillators), resulting in a negligible dead-time, with the signal duration essentially given by the recharge time of the SiPM microcell. The latter can be cut down to a few ns with the appropriate electronics. In fact, the low light output of the Cherenkov process ensures that only a few microcells per PG are activated, with the others available for the next event. It is therefore realistic to design a Cherenkov module that can sustain count rates up to \(\sim\)100 MHz per channel. At these extreme count rates, however, the design of an electronic board capable of tagging in time each PG is challenging and, at some point, different approaches, such as the calculation of the centre of gravity of the PG distribution [18], should be used. For our final 30-channel prototype, we are designing dedicated electronics, based on digital TDCs, in order to handle the different regimes.
## Methods
### Single proton regime
For the 63 MeV experiment at the MEDICYC facility, the beam intensity was arbitrarily set to obtain a negligible ratio of 2-protons signals at the diamond level. The MEDICYC cyclotron is already calibrated to work down to a nominal intensity of 0.1 p/bunch.
At the S2C2 synchrocyclotron, the beam intensity depends on two parameters: the voltage of the Dees (V\({}_{Dee}\)) and the S2C2 collimation slit opening. V\({}_{Dee}\) is given as a percentage of the maximum value. In clinical practice, the system calibration is performed for V\({}_{Dee}>\)66.49%. In this work, the "effective" SPR at the beam monitor level required setting a V\({}_{Dee}\) of 65%. The slit opening, instead, was set to the minimal value of 1 mm. The spot integrity was verified in these conditions and no modifications were detected with respect to the clinical mode.
The SPR was performed in "manual delivery mode" for feasibility and safety reasons, so as to not corrupt the "clinical site configuration" of S2C2 which is extensively certified and validated for clinical purposes. This configuration, in fact, does not
enable the SPR, as these intensities have no clinical application at this time and any modification of the settings would require the complete recalibration of the facility and a double validation (from both IBA and the customer).
### Effective beam intensity
The effective beam intensity at the beam monitor level was calculated _a posteriori_ taking into account Poisson statistics. The diamond energy distribution was integrated in the regions corresponding to 0 and 1 proton signals to obtain the probability of having zero (P(0)) or one (P(1)) protons in the bunch. The ratio P(1)/P(0) provides the \(\lambda\) parameter of the Poisson distribution describing proton delivery, which corresponds to the average number of protons per bunch.
Nevertheless, the intensity values calculated in this work do not correspond to the actual intensity set at the accelerator level. For the MEDICYC cyclotron, the calculation is biased by the presence of a collimator, whereas, for the S2C2 synchrocyclotron, the beam monitor covered only \(\sim\)20% of the beam surface, meaning that the actual beam intensity was of the order of 4.7 p/bunch.
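A minimal sketch of this a posteriori estimate is given below; the synthetic charge spectrum and the window used to isolate the 1-proton population are placeholders for the measured diamond spectrum and its integration limits.

```python
import numpy as np

def effective_intensity(charges, one_proton_window):
    """Average protons per bunch from P(1)/P(0) = lambda (Poisson statistics)."""
    lo, hi = one_proton_window
    n1 = np.count_nonzero((charges >= lo) & (charges < hi))   # 1-proton bunches
    n0 = np.count_nonzero(charges < lo)                       # empty bunches
    return n1 / n0

# Illustrative usage: synthetic charge spectrum for lambda = 0.025 p/bunch
rng = np.random.default_rng(0)
n_protons = rng.poisson(0.025, size=1_000_000)
charges = n_protons + rng.normal(0.0, 0.05, size=n_protons.size)   # a.u.
print(effective_intensity(charges, one_proton_window=(0.5, 1.5)))  # ~0.025
```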
### Intrinsic detection efficiency
A Geant4 [35] simulation (version 10.4.p02) of the optical properties of the TIARA module, based on the UNIFIED model and the QGSP-BIC-EMY physics list [17], was performed to establish the module intrinsic detection efficiency as a function of the incident PG energy. The efficiency was computed as the fraction of PGs depositing more than 100 keV in the crystal and resulting in more than N\({}_{th}\) p.e. reaching the SiPM (with N\({}_{th}\) being the threshold expressed in p.e). The SiPM photodetection efficiency was also taken into account. Simulated data were then fitted with an analytical function (given by the sum of a sigmoid and a first degree polynomial) that was exploited to take into account the detector response in other MC simulations carried out for this work. The functions obtained for different values of N\({}_{th}\) are shown in figure 3.
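As an illustration, the sigmoid-plus-linear parametrisation can be fitted to the simulated efficiency points as sketched below; the data values and starting parameters are placeholders and do not reproduce the curves of figure 3.

```python
import numpy as np
from scipy.optimize import curve_fit

def efficiency_model(e_mev, amp, e0, slope, a, b):
    """Sigmoid rise near threshold plus a first-degree polynomial term."""
    return amp / (1.0 + np.exp(-(e_mev - e0) / slope)) + a * e_mev + b

# Placeholder points standing in for the simulated efficiencies
e   = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)   # PG energy (MeV)
eff = np.array([0.01, 0.05, 0.12, 0.20, 0.26, 0.30, 0.33, 0.35, 0.37, 0.38])

popt, _ = curve_fit(efficiency_model, e, eff, p0=[0.3, 3.0, 1.0, 0.005, 0.0])
print(dict(zip(["amp", "e0", "slope", "a", "b"], popt.round(4))))
```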
### Generation of reference profiles
Reference profiles are built from MC simulations of the reference geometry. The Geant4.10.4.p02 version with the QGSP-BIC-EMY physics list was used to generate the PG time stamps on a detection surface of the same size as the gamma-detector module in order to take into account its geometrical detection efficiency.
The detector response was subsequently taken into account by considering the detection probability of each simulated event as defined by the analytical function presented in figure 3 for a threshold of 6 p.e (orange curve). The system TOF resolution was then included by convolving the data with a gaussian distribution of 315 ps FWHM (i.e. the experimentally measured value). This multi-step approach was chosen to reduce the computing time, as PG generation is a rare phenomenon and the simulation of optical photon propagation and interactions in Geant4 are time-consuming. The excellent agreement between the experimental and simulated reference profiles in figure 5 (b and d) is an indirect validation of the simulation procedure.
### Measurement of the distance between two PG profiles
After background rejection, the procedure used to measure the distance between two PG profiles is the same for PGT and PGTI (reconstructed) profiles. First, the reference X value \(x_{ref}\) (in units of time or space for PGT or PGTI, respectively) is defined as the distal maximum in the simulated reference profile. Then, each experimental PG profile and the simulated PG profile are integrated to exploit the noise-filtering properties of the integral function. The difference \(d_{i}\) between each experimental integrated PG profile (\(f_{i}(x)\)) and the integrated reference profile (\(f_{ref}(x)\)) is calculated as \(d_{i}=f_{i}^{-1}(y_{ref})-x_{ref}\), where \(y_{ref}=f_{ref}(x_{ref})\). This method is described in detail in Marcatili et al. [22].
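A compact sketch of this procedure is given below, assuming the profiles are available as histograms on a common binning; the helper name profile_shift and the use of the global maximum as a stand-in for the distal maximum are ours.

```python
import numpy as np

def profile_shift(x, ref_profile, exp_profile):
    """Shift d_i between an experimental and a reference PG profile,
    measured on the integrated (cumulative) profiles."""
    # Normalised cumulative profiles
    f_ref = np.cumsum(ref_profile) / np.sum(ref_profile)
    f_exp = np.cumsum(exp_profile) / np.sum(exp_profile)

    # Reference point: here simply the global maximum of the reference profile
    x_ref = x[np.argmax(ref_profile)]
    y_ref = np.interp(x_ref, x, f_ref)

    # Invert the monotonic integrated experimental profile at y_ref
    return np.interp(y_ref, f_exp, x) - x_ref
```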
### Errors for the sensitivity plot
The error on the profile fall-off position (cf. figure 6) is determined from toy experiments, using the bootstrap technique. For each experimental TOF profile, 5000 sub-samples (toy experiments) including from 30 to 135 PGs (in steps of 15 PGs) were extracted, for a total of 8 sets of 5000 data samples per profile. The size of the sub-samples was kept small to limit their statistical dependency. The 5000 sub-samples were then used to estimate the 1\(\sigma\) and 2\(\sigma\) statistical errors on the difference \(d_{i}\) between the toy experiment profile and the reference profile, and to obtain their dependency on the number \(N\) of PG events in the profile. This dependency varies as \(k/\sqrt{N}\) (where \(k\) is a constant) and makes it possible to extrapolate the experimental statistical errors at 1\(\sigma\) and 2\(\sigma\) to the PG statistics available in the current experiment.
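The toy-experiment procedure can be sketched as follows, reusing the profile_shift helper above; the array names are hypothetical and the extrapolation step is only indicated in the comments.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def bootstrap_sigma(pg_times, bin_edges, ref_profile, n_pg, n_toys=5000):
    """Spread of the profile shift over n_toys sub-samples of n_pg PG events."""
    x = 0.5 * (bin_edges[:-1] + bin_edges[1:])      # bin centres
    shifts = []
    for _ in range(n_toys):
        sample = rng.choice(pg_times, size=n_pg, replace=True)
        toy_profile, _ = np.histogram(sample, bins=bin_edges)
        shifts.append(profile_shift(x, ref_profile, toy_profile))
    return np.std(shifts)

# The sigmas obtained for n_pg = 30, 45, ..., 135 are then fitted with k/sqrt(N):
# n_values = np.arange(30, 136, 15)
# sigmas = np.array([bootstrap_sigma(pg_times, edges, ref, n) for n in n_values])
# k = np.mean(sigmas * np.sqrt(n_values))
```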
### Background subtraction in PGTI distributions
The background in PGT distributions is flat as it is generated from events that are not time-correlated to the PG signal. When the PGT distributions are reconstructed, the flat background is non-linearly transformed, acquiring a complex, non-constant shape. A model of the PGTI background is built by reconstructing a constant signal according to equation 1. The model is then fitted to the reconstructed TOF histogram and subtracted to remove the background.
It should be noted that a more straightforward method would have been to subtract the flat background from the PGT profile
before reconstruction. However, with the aim of implementing here an event-by-event reconstruction that could be performed as data are acquired, we performed the analysis under the assumption that the background level is not known at reconstruction.
## Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
|
2309.09459 | Multipartite entanglement measures: a review | Quantum entanglement, a fundamental aspect of quantum mechanics, has captured
significant attention in the era of quantum information science. In
multipartite quantum systems, entanglement plays a crucial role in facilitating
various quantum information processing tasks, such as quantum teleportation and
dense coding. In this article, we review the theory of multipartite
entanglement measures, with a particular focus on the genuine as well as the
operational meaning of multipartite entanglement measures. By providing a
thorough and valuable insight on this field, we hope that this review would
inspire and guide researchers in their endeavors to further develop novel
approaches for characterizing multipartite entanglement. | Mengru Ma, Yinfei Li, Jiangwei Shang | 2023-09-18T03:42:20Z | http://arxiv.org/abs/2309.09459v1 | # Multipartite entanglement measures: a review
###### Abstract
Quantum entanglement, a fundamental aspect of quantum mechanics, has captured significant attention in the era of quantum information science. In multipartite quantum systems, entanglement plays a crucial role in facilitating various quantum information processing tasks, such as quantum teleportation and dense coding. In this article, we review the theory of multipartite entanglement measures, with a particular focus on the genuine as well as the operational meaning of multipartite entanglement measures. By providing a thorough and valuable insight on this field, we hope that this review would inspire and guide researchers in their endeavors to further develop novel approaches for characterizing multipartite entanglement.
## I Introduction
The concept of entanglement has played an exceptional role in quantum physics ever since its discovery at the beginning of last century [1; 2]. Entanglement has been recognized as a key resource in various quantum information processing tasks, such as quantum teleportation [3], superdense coding [4], and quantum key distribution [5]. Moreover, as a dividing feature between the quantum and classical worlds, how to detect the presence of entanglement has received extensive research interest [6]. However, describing the structure of quantum entanglement for multipartite systems is still challenging, as there are already an infinite number of entanglement classes for the simple four-qubit system [7]. Recognizing the significance of entanglement as a valuable resource, it becomes natural to explore its quantification through theoretical descriptions. In the seminal work by Vedral _et al._[8], the fundamental condition that the amount of entanglement cannot increase under local operations and classical communication (LOCC) is established, laying the foundation for entanglement quantification. Furthermore, Vidal [9] advanced the field by introducing a family of functions, called entanglement monotones, which are magnitudes that do not increase, on average, under LOCCs. Examples of entanglement monotones include the entanglement of formation and distillable entanglement [10], relative entropy of entanglement [11], and the robustness of entanglement [12]. In addition, entanglement can be quantified using the eigenvalue spectra of the density matrices, which provides a practical means for quantitative entanglement measures [10; 11; 13].
For the bipartite system, various entanglement measures are designed, such as concurrence [13; 14], negativity [15], entanglement of formation [13], etc. Since a single Schmidt coefficient governs the two-qubit entanglement, all the entanglement monotones for two-qubit system are equivalent. Quantifying entanglement for multipartite systems is, however, a great challenge due to the much richer mathematical structures involved as compared to the bipartite case. The generalization of concurrence, namely the generalized multipartite concurrence, can be found in Ref. [16]. In Ref. [17], an entanglement monotone for genuine multipartite entanglement is introduced, which extends the negativity and equals to the negativity in the bipartite case. Besides, other bipartite entanglement measures such as the relative entropy of entanglement and the robustness of entanglement can be easily generalized to the multipartite scenario.
Though multipartite systems have deeper underlying structures, the definition of multipartite entanglement (ME) can be borrowed straightforwardly from the bipartite case. Consider an \(N\)-partite mixed state, the definition of \(k\)-separability is defined as [18]
\[\rho_{k\text{-SEP}}=\sum_{i}p_{i}|\psi^{i}_{k\text{-SEP}}\rangle\langle\psi^{ i}_{k\text{-SEP}}|\,,\quad\sum_{i}p_{i}=1\,, \tag{1}\]
where the \(p_{i}\)s can be interpreted as a probability distribution, and the \(|\psi^{i}_{k\text{-SEP}}\rangle\langle\psi^{i}_{k\text{-SEP}}|\)s are \(k\)-separable pure states. The state is fully separable if \(k=N\), and it is called genuinely multipartite entangled (GME) iff it is not biseparable (i.e., only \(k=1\) holds). Being a significant form of entanglement, GME provides notable advantages in various quantum tasks as compared to other types of entanglement. Biseparable states may be less suitable for many applications due to their inherent limitations. Hence, the measures of GME are valuable within the standard LOCC paradigm. For instance, it has been shown that GME is in general essential to establish a multipartite secret key [19].
In recent years, there has been a continuous endeavor to qualitatively and quantitatively characterize the entanglement properties of multipartite systems. This pursuit involves the formulation of entanglement measures through two main approaches: the axiomatic ansatz and the operational approach [20]. Under the axiomatic approach, researchers aim to define measures by establishing a set of desired properties that the measures should possess. An essential requirement in this context is entanglement monotonicity [9; 21]. On
the other hand, the operational approach quantifies the utility of a state for specific protocols that rely on entanglement. This perspective evaluates the usefulness of entanglement in achieving certain tasks or performing specific operations. However, computing these measures such as the distillable entanglement tends to be challenging [10; 22; 23].
This review is organized as follows. In Sec. II, we first present an overview of entanglement measures, starting from the establishment of the LOCC paradigm to the entanglement monotonicity. We also introduce a few basic axioms that a good entanglement measure should possess. In Sec. III, we explore various common multipartite entanglement measures, and Sec. IV focuses on the GME measures. Subsequently, in Sec. V, we delve into the applications of the GME measures in various quantum information tasks and explore the multipartite entanglement measures with an operational meaning. Finally, we provide a summary and outlook in Sec. VI.
## II Overview of entanglement measures
In this section, we commence by establishing the foundational tenets of quantum entanglement within the scope of the LOCC paradigm. Then, essential guiding postulates that define entanglement measures as well as the convex-roof extension technique will be introduced. We spotlight the measure of concurrence in the end.
### The LOCC paradigm
Entanglement is a distinctive quantum property of nonlocality that cannot be found in classical systems. Before presenting the precise mathematical definition of entanglement, we first examine a typical scenario in the field of quantum information science. Imagine that a source prepares a quantum system consisting of two particles. One of the particles is sent to Alice, while the other is sent to Bob, who are located in distant laboratories. Alice and Bob can both apply local operations on their particles, including arbitrary unitary evolution and measurements. We do not pose any restriction on local dimensions in each lab, meaning that Alice, for example, can correlate her particle with an infinite-dimensional system through a unitary evolution. Next, the source may establish a classical correlation between the particles by selecting and sending one from a set of prepared quantum systems based on a classical probability distribution. Finally, Alice and Bob are equipped with quantum memories and classical information channels, enabling them to conduct their local operations with the aid of classical communication.
At this point we have established the paradigm of local operations and classical communication (LOCC) between Alice and Bob. LOCC is a subset of all the physically realizable operations on quantum states, the significance of which is best celebrated by the quantum teleportation protocol [3]. In quantum teleportation, the source distributes a two-qubit entangled state between Alice and Bob, then Alice performs a joint measurement on the information qubit she wishes to teleport and the shared entangled qubit. Subsequently, Alice sends her measurement outcome to Bob. Upon receiving the outcome, Bob can transform his entangled qubit into an exact copy of Alice's information qubit. More precisely, one entangled two-qubit state and two classical bits are consumed with LOCC operations to transport one qubit.
We notice that entanglement serves as a resource enabling quantum teleportation, and LOCC is the technique used to harness its power. Moreover, it is natural to consider LOCC when discussing entanglement as it does not involve nonlocal quantum operations that may convey quantum information. Indeed, the set of separable or non-entangled states can be considered as the states that result from applying LOCC operations on a pure product state [24]. The set of bipartite separable states is therefore defined by
\[\rho_{\text{bi-SEP}}=\sum_{i}p_{i}\rho_{i}^{\text{A}}\otimes\rho_{i}^{\text{B }}\,,\quad\sum_{i}p_{i}=1\,, \tag{2}\]
where \(p_{i}\)s refer to a probability distribution and \(\rho^{\text{X}}\) is a density matrix supported on the Hilbert space \(\mathcal{H}_{\text{X}}\), \(\text{X}=\{\text{A},\text{B}\}\). Entangled states refer to states that cannot be expressed in the form of Eq. (2).
LOCC is a versatile resource since quantum teleportation implies that an arbitrary physical operation can be performed with LOCC and shared entanglement. Moreover, LOCC is capable of concentrating entanglement from multiple partially entangled states to a reduced number of more entangled states [25], as well as purifying a noisy entangled state to a pure state [26]. However, it is worth noting that there are limitations to the descriptive power of LOCC in capturing nonlocality. Firstly, there are different classes of entangled states that cannot be transformed between each other with LOCC operations only. For bipartite cases, the necessary and sufficient condition that determines the classes is identified by majorization [27]. Next, there are nonlocal states that have no entanglement at all [28]. Even though the preparation of separable states does not require transformation of quantum information, it is possible that a set of mutually orthogonal separable states cannot be faithfully distinguished by LOCC, no matter which one of the states is presented to the local parties.
Finally, we briefly introduce the mathematical definition of LOCC [10; 29]. A general operation on a quantum state, i.e., a quantum instrument, is a completely positive trace-preserving (CPTP) map. For instance, \(\mathcal{A}=\{\mathcal{A}_{1},\,\mathcal{A}_{2},\,\cdots,\,\mathcal{A}_{n}\}\) is a CPTP map if each element \(\mathcal{A}_{j}\) is a completely positive map and \(\sum_{j}\mathcal{A}_{j}\) is trace preserving. Here the maps of interest are bounded linear maps on a set of bounded linear operators acting on a Hilbert space \(\mathcal{H}\). Considering a quantum state with \(K\) parties, i.e., \(P_{1},\,P_{2},\,\cdots,\,P_{K}\), the Hilbert space is the tensor product of the Hilbert spaces of the local parties, namely \(\mathcal{H}=\bigotimes_{j=1}^{K}\mathcal{H}_{P_{j}}\). Henceforth we will use a superscript on an operator or a map to denote the party that it acts on. The one-way local protocol with respect to party \(P_{k}\) is defined by the following operations. Firstly, the local operation \(\mathcal{B}\) on party \(P_{k}\) produces a readout \(j\), and \(P_{k}\) broadcasts the outcome to all the other parties. Then, the other parties perform local operations \(\mathcal{E}_{j}^{X}\), \(X\neq P_{k}\). Thus, a one-way local instrument \(\mathcal{A}^{(R)}=\{\mathcal{A}_{1},\,\mathcal{A}_{2},\,\cdots,\,\mathcal{A}_{n}\}\) with respect to party \(P_{k}\) is given by
\[\mathcal{A}_{j}=\mathcal{B}_{j}^{P_{k}}\otimes\bigotimes_{X\neq P_{k}}\mathcal{ E}_{j}^{X}\,. \tag{3}\]
Extending the discussion to one-round LOCC operations requires the concept of coarse-graining. For two quantum instruments \(\mathcal{A}=\{\mathcal{A}_{i}|i\in S_{\mathcal{A}}\}\) and \(\mathcal{B}=\{\mathcal{B}_{i}|i\in S_{\mathcal{B}}\}\), \(\mathcal{A}\) is a coarse-graining of \(\mathcal{B}\) if there exists a partition \(S_{\mathcal{B}}=\cup_{i}T_{i,\mathcal{B}}\) of the set of labels \(S_{\mathcal{B}}\), s.t. \(\forall i\in S_{\mathcal{A}},\mathcal{A}_{i}=\sum_{j\in T_{i,\mathcal{B}}} \mathcal{B}_{j}\). By definition, a quantum instrument \(\mathcal{A}\) can be implemented with a one-round LOCC _iff_ it is a one-way local operation followed by coarse-graining.
The definition of \(r\)-round LOCC requires the concept of operation composition "\(\circ\)". An instrument \(\mathcal{A}=\{\mathcal{A}_{i}|i\in S_{\mathcal{A}}\}\) is _LOCC linked_ to another instrument \(\mathcal{B}=\{\mathcal{B}_{i}|i\in S_{\mathcal{B}}\}\) if there exists a set of one-way local instruments \(\left\{\mathcal{E}^{(j)}\right\}=\left\{\{\mathcal{E}_{i,j}|i\in S_{\mathcal{E}}\},\,j=1,2,\cdots\right\}\), such that \(\mathcal{A}\) is a coarse-graining of the instrument \(\{\mathcal{E}_{i,j}\circ\mathcal{B}_{j}\}\). This can be interpreted as first applying the instrument \(\mathcal{B}\), and then applying a one-round LOCC which depends on the outcome of \(\mathcal{B}\). Thus, an instrument is \(r\)-round LOCC _iff_ it is LOCC linked to an \((r-1)\)-round LOCC. An immediate observation is that an \((r-1)\)-round LOCC is an \(r\)-round LOCC, while the inverse statement is not always true. For instance, in the task of quantum teleportation between Alice and Bob, one-round LOCC only allows the teleportation of quantum states in a single direction, while two-round LOCC enables teleportation in both directions.
### Entanglement monotonicity
As entanglement is an indispensable resource in quantum information science, a pressing task is to quantify it. If a quantum state \(\sigma\) can be converted to another state \(\rho\) with LOCC, then for local parties equipped with LOCC, any quantum information protocol that requires \(\rho\) can also be achieved with \(\sigma\). Thus, a fundamental requirement for the quantification of entanglement is that entanglement cannot increase under LOCC operations, i.e., LOCC monotonicity. An important fact is that for bipartite pure states \(\ket{\phi}\) and \(\ket{\psi}\), \(\ket{\phi}\) can be converted to \(\ket{\psi}\) via LOCC _iff_ \(\ket{\phi}\) is majorized by \(\ket{\psi}\), i.e., both states have the same Schmidt dimension \(d\), and the Schmidt coefficients \(\left\{\lambda_{i}^{(\phi)}\right\}\) and \(\left\{\lambda_{i}^{(\psi)}\right\}\) satisfy
\[\sum_{i=1}^{k}\lambda_{i}^{(\phi)\downarrow}\leq\sum_{i=1}^{k}\lambda_{i}^{( \psi)\downarrow}\,,\quad\forall k\in\{1,2,\cdots,d\}\,, \tag{4}\]
where the superscript \(\downarrow\) indicates that the Schmidt coefficients are organized in a descending order. This reveals that some quantum states cannot be related to each other via LOCC. Moreover, two pure states are mutually convertible via LOCC _iff_ they can be transformed via local unitary rotations, hence deterministic LOCC is not well suited for the classification of entangled states.
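For illustration, Eq. (4) can be checked numerically for pure states; the following sketch (ours, using NumPy) extracts the Schmidt coefficients from the singular values of the reshaped state vector and tests the majorization condition.

```python
import numpy as np

def schmidt_coefficients(psi, dim_a, dim_b):
    """Squared Schmidt coefficients of |psi> in C^dA x C^dB, in descending order."""
    singular_values = np.linalg.svd(psi.reshape(dim_a, dim_b), compute_uv=False)
    return np.sort(singular_values**2)[::-1]

def locc_convertible(phi, psi, dim_a, dim_b):
    """|phi> -> |psi> deterministically via LOCC iff Eq. (4) holds."""
    lam_phi = schmidt_coefficients(phi, dim_a, dim_b)
    lam_psi = schmidt_coefficients(psi, dim_a, dim_b)
    return bool(np.all(np.cumsum(lam_phi) <= np.cumsum(lam_psi) + 1e-12))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
partial = np.array([np.sqrt(0.8), 0, 0, np.sqrt(0.2)])
print(locc_convertible(bell, partial, 2, 2))   # True: a Bell state reaches any two-qubit pure state
print(locc_convertible(partial, bell, 2, 2))   # False
```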
To establish a stronger condition of entanglement ordering, one may consider _stochastic_ LOCC (SLOCC), where \(\sigma\) is converted to \(\rho\) via LOCC with a non-zero probability [7]. SLOCC provides a coarse-grained classification of multipartite entanglement, which means that quantum states incomparable via deterministic LOCC can now be related via SLOCC. For two \(N\)-partite pure states \(\ket{\phi}\) and \(\ket{\psi}\), they can be transformed to each other via SLOCC or they are SLOCC equivalent _iff_ there exists an invertible local operator (ILO) \(\mathcal{L}_{1}\otimes\mathcal{L}_{2}\otimes\cdots\otimes\mathcal{L}_{N}\), s.t.
\[\ket{\phi}=\mathcal{L}_{1}\otimes\mathcal{L}_{2}\otimes\cdots\otimes\mathcal{L} _{N}\ket{\psi}\,. \tag{5}\]
An SLOCC class can be built from a pure state \(\ket{\psi}\) by constructing a convex hull of all pure states that are SLOCC equivalent with \(\ket{\psi}\) and of all pure states that can be approximated arbitrarily close by those SLOCC equivalent pure states [30; 31; 32].
Strong monotonicity of a function \(E\) is defined as follows
\[\sum_{i}p_{i}E\big{(}\sigma_{i}\big{)}\leq E(\rho)\,, \tag{6}\]
where \(\{p_{i},\sigma_{i}\}\) is an ensemble produced by an arbitrary LOCC channel \(\Lambda_{\text{LOCC}}\) acting on the multipartite quantum state \(\rho\), such that
\[\Lambda_{\text{LOCC}}(\rho)=\sum_{i}p_{i}\sigma_{i}\,. \tag{7}\]
For states in the same SLOCC class, the monotonicity provides the entanglement ordering. However, there are already two types of genuinely entangled states for three-qubit systems [7; 33], as is shown in Fig. 1, resulting in the fact that different entanglement measures may lead to distinct entanglement ordering for multipartite quantum states.
### Postulates of entanglement measures
An entanglement measure (or entanglement monotone) \(E(\rho)\) should have some desirable properties, although not all of the following properties are fulfilled by every entanglement quantifier discussed here.
1. Entanglement vanishes for separable states. If \(\rho\) is separable, then \(E(\rho)=0\).
2. \(E(\rho)\) cannot be increased by LOCC. For any LOCC operation \(\Lambda_{\text{LOCC}}\), \[E\big{(}\Lambda_{\text{LOCC}}(\rho)\big{)}\leq E(\rho)\,.\] (8) And there is a stronger condition, if the ensemble \(\{p_{k},\rho_{k}\}\) is obtained from the state \(\rho\) by applying LOCC operations, then the measure should not increase on average \[\sum_{k}p_{k}E(\rho_{k})\leq E(\rho)\,.\] (9)
3. \(E(\rho)\) should be invariant under local unitary transformations \[E(\rho)=E\big{(}U_{1}\otimes\cdots\otimes U_{N}\rho U_{1}^{\dagger}\otimes \cdots\otimes U_{N}^{\dagger}\big{)}\,.\] (10) These operations are particular instances of LOCC operations, which are reversible.
4. Convexity. Most entanglement measures are convex, that is, the entanglement measure cannot increase under convex combination of two or more states, \[E\Big{(}\sum_{k}p_{k}\rho_{k}\Big{)}\leq\sum_{k}p_{k}E(\rho_{k})\,.\] (11)
5. Additivity. If Alice and Bob share \(n\) copies of the same state \(\rho\), then \[E\big{(}\rho^{\otimes n}\big{)}=nE(\rho)\,.\] (12) If there are two different states, then \[E\big{(}\rho_{1}\otimes\rho_{2}\big{)}=E\big{(}\rho_{1}\big{)}+E\big{(}\rho_{ 2}\big{)}\,.\] (13) And if the measure satisfies \[E\big{(}\rho^{\otimes n}\big{)}\leq nE(\rho)\,,\] (14) it is subadditive.
6. Asymptotic continuity. For states \(\rho_{n}\) and \(\sigma_{n}\) acting on the Hilbert space \(\mathcal{H}_{n}=\mathcal{H}_{n}^{A}\otimes\mathcal{H}_{n}^{B},n\in\mathbb{N}\), we have [9; 21; 34] \[\|\rho_{n}-\sigma_{n}\|_{1}\to 0\Rightarrow\frac{|E(\rho_{n})-E( \sigma_{n})|}{\log_{2}\text{dim}\mathcal{H}_{n}}\to 0\,,\] (15) where \(\|\cdot\|_{1}\) is the trace norm. The measures that satisfy the postulate of asymptotic continuity are valuable for estimating the distillable entanglement, which will be discussed in Sec. V.2.1.
Note that the first two postulates above constitute the essential criteria that any entanglement measure must satisfy.
### Convex-roof extension
The construction of convex roof is commonly used to quantify the entanglement of mixed states [9]. Given an entanglement measure of a pure state \(E(|\phi\rangle)\), it can be extended to a mixed state by means of convex roof, such that
\[E(\rho)=\inf_{\{p_{k},|\phi_{k}\rangle\}}\sum_{k}p_{k}E\big{(}|\phi_{k}\rangle \big{)}\,, \tag{16}\]
where the infimum is taken over all ensembles \(\big{\{}p_{k},|\phi_{k}\rangle\big{\}}\) of \(\rho=\sum_{k}p_{k}\,|\phi_{k}\rangle\langle\phi_{k}|\). The optimal ensemble that achieves this infimum is called the optimal ensemble for \(E\). Although concurrence of the two-qubit mixed states can be obtained analytically, typically the convex roof cannot be easily determined. However, there are a number of effective schemes to compute the convex roof, such as providing a lower bound of it as an approximation.
The convex-roof construction is a reliable method for quantifying entanglement. It produces an entanglement measure that is guaranteed to be convex. Moreover, if a measure is monotonic on pure states, then its convex-roof extension is also monotonic on mixed states [2]. Thus, to verify the monotonicity of a convex roof entanglement measure, one only needs to examine the monotonicity for pure states. It is worth noting that the properties of the convex roof can be translated to ones for the concave roof.
Figure 1: An illustration of the SLOCC classes for three-qubit states, including the fully-separable class (S), biseparable class (B), \(W\) class, and GHZ class [30].
### Concurrence
Concurrence is one of the most celebrated entanglement measures, and it is the first entanglement measure built with the convex-roof extension. In the two-qubit case, it has been proven that the entanglement of formation is a monotonically increasing function of the concurrence [13]. Moreover, the concurrence has been successfully extended to higher-dimensional bipartite systems as
\[\mathcal{C}(|\psi\rangle)=\sqrt{2\big{[}1-\text{Tr}\big{(}\rho^{2}\big{)}\big{]}}\,, \tag{17}\]
where \(\rho\) is the reduced density matrix of the subsystem [35]. The generalized bipartite concurrence Eq. (17) is also an entanglement monotone under SLOCC [36]. For a bipartition \(\big{\{}A|B\big{\}}\) of a pure state \(|\psi\rangle_{AB}\), we denote the corresponding concurrence as \(\mathcal{C}(|\psi\rangle_{A|B})\) or \(\mathcal{C}_{A|B}\).
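As a small numerical sketch (ours, not from Ref. [35]), the generalized concurrence of a bipartite pure state can be evaluated directly from the purity of a reduced state:

```python
import numpy as np

def bipartite_concurrence(psi, dim_a, dim_b):
    """C = sqrt(2 (1 - Tr rho_A^2)) for a pure state |psi> in C^dA x C^dB."""
    m = psi.reshape(dim_a, dim_b)
    rho_a = m @ m.conj().T                       # reduced state of subsystem A
    purity = np.real(np.trace(rho_a @ rho_a))
    return np.sqrt(max(2.0 * (1.0 - purity), 0.0))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(bipartite_concurrence(bell, 2, 2))         # -> 1.0 for a Bell state
```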
## III ME measures
In this section we present examples of multipartite entanglement measures and discuss how some of them can be defined from measures of the bipartite case.
### Squashed entanglement
Many of the axiomatic measures of bipartite states can be extended to the multipartite case. For a given state \(\rho_{AB}\), the squashed entanglement is given by
\[E_{sq}(\rho_{AB})=\frac{1}{2}\inf I\big{(}A:B|E\big{)}\,, \tag{18}\]
where the infimum is taken over all density matrices \(\rho_{ABE}\) such that \(\text{Tr}_{E}\,\rho_{ABE}=\rho_{AB}\), and the conditional mutual information is defined as
\[I\big{(}A:B|E\big{)}=S(AE)+S(BE)-S(ABE)-S(E)\,, \tag{19}\]
where \(S(X)=-\,\text{Tr}(X\log_{2}X)\) is the von Neumann entropy of the system \(X\). Multipartite \(c\)-squashed entanglement is defined as follows [37]. For the \(m\)-party state \(\rho_{A_{1},A_{2},\cdots,A_{m}}\),
\[E_{sq}^{c}(\rho_{A_{1},A_{2},\cdots,A_{m}})=\inf I\big{(}A_{1}:A_{2}:\cdots:A_ {m}|E\big{)}\,, \tag{20}\]
where the infimum is taken over the extension states \(\sigma_{A_{1},A_{2},\cdots,A_{m},E}\) of the form \(\sum p_{i}\rho_{A_{1},A_{2},\cdots,A_{m}}^{i}\otimes|i\rangle_{E}\langle i|\). The conditional mutual information is a sum of conditional bipartite mutual information
\[I\big{(}A_{1}:A_{2}:\cdots:A_{m}|E\big{)}=I\big{(}A_{1}:A_{2}|E \big{)}+ \tag{21}\] \[I\big{(}A_{3}:A_{1}A_{2}|E\big{)}+\cdots+I\big{(}A_{m}:A_{1} \cdots A_{m-1}|E\big{)}\,.\]
Thus \(E_{sq}^{c}\) is an entanglement measure which is obtained through the mixed convex roof of the multipartite quantum mutual information function, and it is indeed a good entanglement measure. In Ref. [37], the authors also define the multipartite \(q\)-squashed entanglement as a generalization of the bipartite squashed entanglement. For the \(m\)-party state \(\rho_{A_{1},A_{2},\cdots,A_{m}}\),
\[E_{sq}^{q}\big{(}\rho_{A_{1},A_{2},\cdots,A_{m}}\big{)}=\inf I\big{(}A_{1}:A_ {2}:\cdots:A_{m}|E\big{)}\,, \tag{22}\]
where the infimum is taken over the extension states \(\sigma_{A_{1},A_{2},\cdots,A_{m},E}\) of \(\rho_{A_{1},A_{2},\cdots,A_{m}}\). It is proved that the \(q\)-squashed entanglement satisfies several important properties, including the monotonicity under LOCC, convexity, additivity, and asymptotic continuity. Furthermore, it is shown that multipartite \(q\)-squashed entanglement is an upper bound on multipartite distillable key; see Sec. V.2.2.
### Three-tangle
The three-tangle (or residual entanglement) \(\tau\) is an entanglement measure for three-qubit states [38]. It is defined as follows
\[\tau\equiv\tau_{ABC}=\mathcal{C}_{A|BC}^{2}-\mathcal{C}_{AB}^{2}-\mathcal{C}_{ AC}^{2}\,, \tag{23}\]
where \(\mathcal{C}_{A|BC}^{2}\) is the squared concurrence between \(A\) and the pair \(BC\), \(\mathcal{C}_{AB}^{2}\) is the squared concurrence between \(A\) and \(B\), and \(\mathcal{C}_{AC}^{2}\) is the squared concurrence between \(A\) and \(C\). The three-tangle is invariant under permutations of the parties. Moreover, \(\tau\) is an entanglement monotone, i.e., nonincreasing on average under LOCC acting on any of the three parties [7]. It is noted that bipartite measures cannot determine the inequivalence of the GHZ and \(W\) classes, but the three-tangle can: \(\tau(|\psi_{\text{GHZ}}\rangle)\neq 0\) for any state in the GHZ class, while it vanishes for any state in the \(W\) class.
The generalization of the three-tangle for \(N\) qubits, namely, the \(N\)-tangle is discussed in Ref. [39].
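A numerical sketch of Eq. (23) is given below (the helper names are ours); the one-to-other term is obtained from the purity of a single-qubit reduced state, while the pairwise terms use the Wootters concurrence of the two-qubit reduced states. It reproduces the well-known values \(\tau=1\) for the GHZ state and \(\tau=0\) for the \(W\) state.

```python
import numpy as np

SY = np.array([[0, -1j], [1j, 0]])

def wootters_concurrence(rho):
    """Wootters concurrence of a (possibly mixed) two-qubit state."""
    rho_tilde = np.kron(SY, SY) @ rho.conj() @ np.kron(SY, SY)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def reduced_state(psi, keep):
    """Reduced density matrix of the qubits in `keep` for a 3-qubit pure state."""
    psi3 = psi.reshape(2, 2, 2)
    trace_out = [k for k in range(3) if k not in keep]
    rho = np.tensordot(psi3, psi3.conj(), axes=(trace_out, trace_out))
    d = 2 ** len(keep)
    return rho.reshape(d, d)

def three_tangle(psi):
    rho_a = reduced_state(psi, [0])
    c2_a_bc = 2.0 * (1.0 - np.real(np.trace(rho_a @ rho_a)))   # C^2_{A|BC}
    c_ab = wootters_concurrence(reduced_state(psi, [0, 1]))
    c_ac = wootters_concurrence(reduced_state(psi, [0, 2]))
    return c2_a_bc - c_ab**2 - c_ac**2

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
w = np.zeros(8); w[1] = w[2] = w[4] = 1 / np.sqrt(3)
print(three_tangle(ghz))   # -> 1.0
print(three_tangle(w))     # -> ~0.0
```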
### Global entanglement
There exist measures that can be calculated as straightforward combinations of bipartite entanglement measures. Global entanglement of tripartite system, for instance, is the sum of squared concurrences between a single qubit versus all other qubits [40; 41; 42],
\[Q=\mathcal{C}^{2}\big{(}|\phi\rangle_{A|BC}\big{)}+\mathcal{C}^{2}\big{(}| \phi\rangle_{B|AC}\big{)}+\mathcal{C}^{2}\big{(}|\phi\rangle_{C|AB}\big{)}\,. \tag{24}\]
Its monotonicity under LOCC is simply inherited from the bipartite measures.
Using the relation [38]
\[\mathcal{C}^{2}(|\phi\rangle_{i|jk})=\tau_{ABC}+\mathcal{C}^{2}(\rho_{ij})+ \mathcal{C}^{2}(\rho_{ik})\,, \tag{25}\]
where \(\rho_{ij}=\text{Tr}_{k}\big{(}|\phi\rangle_{ijk}\langle\phi|\big{)}\) and \(i,j,k\) are distinct systems in \(\{A,B,C\}\), the global entanglement can be written as
\[Q=3\tau_{ABC}+2\big{[}\mathcal{C}^{2}(\rho_{AB})+\mathcal{C}^{2}(\rho_{AC})+ \mathcal{C}^{2}(\rho_{BC})\big{]}\,. \tag{26}\]
### Schmidt measure
For a pure state \(|\phi\rangle\in\mathcal{H},\mathcal{H}=\mathbb{C}^{d_{1}}\otimes\mathbb{C}^{d_{2} }\otimes\cdots\otimes\mathbb{C}^{d_{N}}\) in the composite system with parties \(A_{1},A_{2},\cdots,A_{N}\), the Schmidt measure is defined as [43]
\[E_{P}(|\phi\rangle\langle\phi|)=\log_{2}r\,, \tag{27}\]
where
\[|\phi\rangle=\sum_{i=1}^{R}\alpha_{i}\left|\phi_{A_{1}}^{(i)}\right\rangle \otimes\left|\phi_{A_{2}}^{(i)}\right\rangle\otimes\cdots\otimes\left|\phi_{A _{N}}^{(i)}\right\rangle\,, \tag{28}\]
and \(\left|\phi_{A_{j}}^{(i)}\right\rangle\in\mathbb{C}^{d_{j}},j=1,2,\cdots,N, \alpha_{i}\in\mathbb{C}\), and \(r\) is the minimal number of product terms \(R\) in the decomposition of \(|\phi\rangle\). For a bipartite system with parties \(A_{1}\) and \(A_{2}\), the minimal number of product terms \(r\) is given by the Schmidt rank of the state. One can extend the definition to the full state space \(\mathcal{S}(\mathcal{H})\) by using a convex-roof construction. For \(\rho\in\mathcal{S}(\mathcal{H})\),
\[E_{P}(\rho)=\inf\sum_{i}\lambda_{i}E_{P}(|\phi_{i}\rangle\langle\phi_{i}|)\,, \tag{29}\]
where the infimum is taken over all possible convex combinations of the form \(\rho=\sum_{i}\lambda_{i}\left|\phi_{i}\right\rangle\langle\phi_{i}|\) with \(0\leq\lambda_{i}\leq 1\) for all \(i\).
The measure is zero iff the state is fully product. Therefore, it cannot distinguish genuine multipartite entanglement from bipartite entanglement.
## IV GME measures
Numerous entanglement measures, particularly for the bipartite case, have been proposed. Yet, the situation is much more complicated for multipartite systems, and how to quantify the genuine entanglement contained in multipartite quantum states remains an open problem.
If an \(N\)-partite state is not biseparable then it is called genuinely \(N\)-partite entangled. A measure of genuine multipartite entanglement (GME) \(E(\rho)\) should at least satisfy [44]
1. \(E(\rho)=0,\forall\rho\in\mathcal{S}_{\text{bi-SEP}}\), where \(\mathcal{S}_{\text{bi-SEP}}\) is the set of biseparable states;
2. \(E(\rho)>0,\forall\rho\in\mathcal{S}_{\text{GME}}\), where \(\mathcal{S}_{\text{GME}}\) is the set of all GME states;
3. (Strong monotonicity) \(E(\rho)\geq\sum_{i}p_{i}E(\rho_{i})\), where \(\{p_{i},\rho_{i}\}\) is an ensemble of states produced by applying LOCC operations on \(\rho\);
4. \(E\big{(}U_{\text{local}}\rho U_{\text{local}}^{\dagger}\big{)}=E(\rho)\), where \(U_{\text{local}}\) is an arbitrary local unitary operator.
### GME-concurrence
For an \(N\)-partite pure state \(|\phi\rangle\in\mathcal{H}_{1}\otimes\mathcal{H}_{2}\otimes\cdots\otimes \mathcal{H}_{N}\), where \(\dim(\mathcal{H}_{i})=d_{i},i=1,2,\cdots,N\), the GME-concurrence is defined as [45]
\[\mathcal{C}_{\text{GME}}(|\phi\rangle):=\min_{\gamma_{i}\in\gamma}\sqrt{2 \Big{[}1-\text{Tr}\Big{(}\rho_{A_{\gamma_{i}}}^{2}\Big{)}\Big{]}}\,, \tag{30}\]
where \(\gamma=\{\gamma_{i}\}\) represents the set of all possible bipartitions \(\{A_{i}|B_{i}\}\) of \(\{1,2,\cdots,N\}\). The GME-concurrence can be generalized to mixed states \(\rho\) via a convex-roof construction, i.e.,
\[\mathcal{C}_{\text{GME}}(\rho)=\inf_{\{p_{i}|\phi_{i}\rangle\}}\sum_{i}p_{i} \mathcal{C}_{\text{GME}}\big{(}|\phi_{i}\rangle\big{)}\,, \tag{31}\]
where the infimum is taken over all possible decompositions \(\rho=\sum_{i}p_{i}\left|\phi_{i}\right\rangle\langle\phi_{i}|\). It is a special case of the \(k\)-ME concurrence when \(k=2\). The \(k\)-ME concurrence can detect all \(k\)-nonseparable states, and it exhibits properties such as being an entanglement monotone, vanishing on \(k\)-separable states, convexity, subadditivity, and invariance under local unitary transformations [46].
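For pure states, Eq. (30) can be evaluated by enumerating the bipartitions explicitly; the following sketch (ours) does so for three-qubit examples and returns zero for a biseparable state.

```python
import numpy as np
from itertools import combinations

def gme_concurrence(psi, dims):
    """GME-concurrence of an N-partite pure state (cf. Eq. (30)): the minimum
    over all bipartitions A|B of sqrt(2 (1 - Tr rho_A^2))."""
    n = len(dims)
    psi_t = psi.reshape(dims)
    best = np.inf
    for r in range(1, n // 2 + 1):
        for part_a in combinations(range(n), r):
            part_b = [k for k in range(n) if k not in part_a]
            d_a = int(np.prod([dims[k] for k in part_a]))
            rho_a = np.tensordot(psi_t, psi_t.conj(),
                                 axes=(part_b, part_b)).reshape(d_a, d_a)
            purity = np.real(np.trace(rho_a @ rho_a))
            best = min(best, np.sqrt(max(2.0 * (1.0 - purity), 0.0)))
    return best

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
bisep = np.kron(np.array([1.0, 0.0]), np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2))
print(gme_concurrence(ghz, (2, 2, 2)))    # -> 1.0 (genuinely multipartite entangled)
print(gme_concurrence(bisep, (2, 2, 2)))  # -> 0.0 (biseparable)
```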
### Geometric measure of entanglement
Motivated by the geometric measure (GM) of entanglement introduced by Wei and Goldbart [47], Ref. [48] presents a genuine multipartite entanglement measure called the generalized geometric measure (GGM). Consider an \(N\)-partite pure quantum state \(|\phi_{N}\rangle\), and let
\[g_{max}(\phi_{N})=\max_{\psi}\left|\,\langle\psi|\phi_{N}\rangle\,\right|\,, \tag{32}\]
where the maximum is over all pure quantum states \(|\psi\rangle\) that are not genuinely \(N\)-party entangled. And \(g_{max}\) quantifies the closeness of the state \(|\phi_{N}\rangle\) to all pure quantum states that are not genuinely multipartite entangled. A pure quantum state of \(N\) parties is considered to be genuinely \(N\)-party entangled if it cannot be expressed as a product state of any bipartite partition. Then the GGM is defined as
\[E_{G}(|\phi_{N}\rangle)=1-g_{max}^{2}(|\phi_{N}\rangle)\,. \tag{33}\]
It is noted that the maximization in \(g_{max}\) is only over pure states that are a product across at least one bipartition. The GGM \(E_{G}\) vanishes for all pure states that are not genuinely multipartite entangled and is non-vanishing for the others. The measure is computable for a multipartite pure state of an arbitrary number of parties and of arbitrary dimensions, and it is nonincreasing under LOCC.
The generalized geometric measure can be written in a computable form for all multipartite pure quantum
states as [48; 49; 50]
\[E_{G}(\phi_{N})=1-\max\Bigl{\{}\lambda_{A|B}^{2}|A\cup B =\{1,2,\cdots,N\},\] \[A\cap B =\varnothing\Bigr{\}}\,, \tag{34}\]
where \(\lambda_{A|B}\) is the maximal Schmidt coefficient of \(|\phi_{N}\rangle\) in the bipartite split \(A|B\). For the three-qubit pure states \(|\phi\rangle\), the GGM reduces to
\[E_{G}=1-\max\bigl{\{}\lambda_{A}^{2},\lambda_{B}^{2},\lambda_{C}^{2}\bigr{\}}\,, \tag{35}\]
where \(\lambda_{A}^{2}\) is the maximal eigenvalue of \(\rho_{A}=\operatorname{Tr}_{BC}\bigl{(}|\phi\rangle\langle\phi|\bigr{)}\), similarly for \(\lambda_{B}^{2}\) and \(\lambda_{C}^{2}\).
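A short sketch (ours) of Eq. (35) for three-qubit pure states, using the largest eigenvalues of the single-qubit reduced states, reads as follows.

```python
import numpy as np

def ggm_three_qubit(psi):
    """GGM of a three-qubit pure state, Eq. (35)."""
    psi3 = psi.reshape(2, 2, 2)
    largest = []
    for i in range(3):
        rest = [k for k in range(3) if k != i]
        rho_i = np.tensordot(psi3, psi3.conj(), axes=(rest, rest))
        largest.append(np.max(np.linalg.eigvalsh(rho_i)))
    return 1.0 - max(largest)

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
w = np.zeros(8); w[1] = w[2] = w[4] = 1 / np.sqrt(3)
print(ggm_three_qubit(ghz))   # -> 0.5
print(ggm_three_qubit(w))     # -> 1/3
```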
### Tripartite negativity
An ideal measure of the full tripartite entanglement of three qubits should have at least the following characteristics: i) to be zero for any fully separable or biseparable state and non-zero for any fully entangled state; ii) to be invariant under local unitary (LU); iii) to be nonincreasing under LOCC, that is, to be an entanglement monotone. Tripartite negativity is defined according to these three conditions, which is expressed as [51]
\[\mathcal{N}_{ABC}=\left(\mathcal{N}_{A|BC}\mathcal{N}_{B|AC}\mathcal{N}_{C|AB }\right)^{\frac{1}{3}}, \tag{36}\]
where the bipartite negativities are defined as
\[\mathcal{N}_{I|JK}=-2\sum_{i}\varepsilon_{i}\bigl{(}(\rho^{I})^{T}\bigr{)}\,, \tag{37}\]
where \(\varepsilon_{i}\bigl{(}(\rho^{I})^{T}\bigr{)}\) denotes the negative eigenvalues of \((\rho^{I})^{T}\), and \((\rho^{I})^{T}\) is the partial transpose of \(\rho\) with respect to subsystem \(I\), namely, \(\langle i_{I},j_{JK}|(\rho^{I})^{T}|k_{I},l_{JK}\rangle=\langle k_{I},j_{JK}|\rho|i_{I},l_{JK}\rangle\), with \(I=A,B,C\) and \(JK=BC,AC,AB\), respectively. The tripartite negativity fulfills the above three conditions for pure states.
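The partial transpose and the resulting negativities are straightforward to evaluate numerically; the sketch below (our own) reproduces \(\mathcal{N}_{ABC}=1\) for the three-qubit GHZ state.

```python
import numpy as np

def negativity(rho, dims, sys):
    """N_{I|JK} = -2 * (sum of negative eigenvalues of the partial transpose
    of rho with respect to subsystem `sys`), cf. Eq. (37)."""
    n = len(dims)
    rho_t = rho.reshape(dims + dims)
    perm = list(range(2 * n))
    perm[sys], perm[sys + n] = perm[sys + n], perm[sys]   # swap ket/bra index of `sys`
    rho_pt = np.transpose(rho_t, perm).reshape(rho.shape)
    eigs = np.linalg.eigvalsh(rho_pt)
    return -2.0 * np.sum(eigs[eigs < 0])

def tripartite_negativity(rho, dims=(2, 2, 2)):
    """Geometric mean of the three one-to-other negativities, Eq. (36)."""
    return float(np.prod([negativity(rho, dims, i) for i in range(3)]) ** (1.0 / 3.0))

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
print(tripartite_negativity(np.outer(ghz, ghz.conj())))   # -> 1.0
```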
### Concurrence fill
The challenge of establishing a proper entanglement ordering for genuine three-qubit entanglement is first tackled by the proposal of concurrence fill [52; 44]. For an arbitrary pure three-qubit state \(|\phi\rangle_{ABC}\) shared by three parties \(A,B,C\), the concurrence between the bipartition \(i\) and \(jk\) is \(\mathcal{C}\bigl{(}|\phi\rangle_{i|jk}\bigr{)}=\sqrt{2\bigl{[}1-\operatorname {Tr}(\rho_{i}^{2})\bigr{]}}\), with \(\rho_{i}=\operatorname{Tr}_{jk}\bigl{(}|\phi\rangle_{ijk}\langle\phi|\bigr{)}\) and \(i,j,k\) are distinct systems in \(\{A,B,C\}\). The three one-to-other bipartite entanglements are not independent, and they follow the relation [53]
\[\mathcal{C}^{2}\bigl{(}|\phi\rangle_{i|jk}\bigr{)}\leq\mathcal{C}^{2}\bigl{(}|\phi\rangle_{j|ik}\bigr{)}+\mathcal{C}^{2}\bigl{(}|\phi\rangle_{k|ij}\bigr{)}\,. \tag{38}\]
These three squared bipartite concurrences can be geometrically interpreted as the lengths of three sides of a triangle, which is the so-called concurrence triangle. Then the concurrence fill is defined as the square-root of the area of the concurrence triangle [44]
\[\mathcal{F} (|\phi\rangle_{ABC})=\Bigl{[}\frac{16\mathcal{P}_{ABC}}{3}\bigl{(} \mathcal{P}_{ABC}-\mathcal{C}^{2}(|\phi\rangle_{A|BC})\bigr{)} \tag{39}\] \[\times\bigl{(}\mathcal{P}_{ABC}-\mathcal{C}^{2}(|\phi\rangle_{B| AC})\bigr{)}\bigl{(}\mathcal{P}_{ABC}-\mathcal{C}^{2}(|\phi\rangle_{C|AB}) \bigr{)}\Bigr{]}^{1/4}\,,\]
with half-perimeter
\[\mathcal{P}_{ABC}=\frac{1}{2}\Bigl{[}\mathcal{C}^{2}\bigl{(}|\phi\rangle_{A| BC}\bigr{)}+\mathcal{C}^{2}\bigl{(}|\phi\rangle_{B|AC}\bigr{)}+\mathcal{C}^{2} \bigl{(}|\phi\rangle_{C|AB}\bigr{)}\Bigr{]}\,. \tag{40}\]
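Given the three squared one-to-other concurrences, Eq. (39) is a direct application of Heron's formula; a minimal sketch (ours) reads:

```python
import numpy as np

def concurrence_fill(c2_a, c2_b, c2_c):
    """Concurrence fill, Eq. (39), from the squared bipartite concurrences."""
    p = 0.5 * (c2_a + c2_b + c2_c)                   # half-perimeter, Eq. (40)
    area_sq = (16.0 / 3.0) * p * (p - c2_a) * (p - c2_b) * (p - c2_c)
    return max(area_sq, 0.0) ** 0.25

print(concurrence_fill(1.0, 1.0, 1.0))     # GHZ state: F = 1
print(concurrence_fill(8/9, 8/9, 8/9))     # W state:   F ~ 0.89
```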
In addition to the proposed three-qubit concurrence triangle, the authors in Ref. [54] show that the triangle relation holds for any continuous entanglement measure and systems of any dimension. The extension of the concurrence fill to four-partite systems, inspired by a richer geometrical interpretation, is presented in [55], where it is interpreted as a combination of areas of cyclic quadrilateral and triangle structures, resulting from two types of bipartition.
The concurrence fill is proposed for GME quantification and satisfies the properties of faithfulness and smoothness, but it was later proven that the concurrence fill can be increased under some LOCC operations [42]. Nevertheless, numerical evidence supports that if the edges of the triangle are chosen as bipartite concurrences, then the area is nonincreasing under LOCC. Subsequently, various genuine multipartite entanglement measures have been proposed, which are based on bipartite entanglement monotones [56; 57; 42; 58].
### Geometric mean of bipartite concurrences
The geometric mean of bipartite concurrences (GBC) \(\mathcal{G}\) satisfies the figures of merit of the concurrence fill and qualifies as an entanglement monotone [56], while lacking a geometric interpretation. Denoting the set of all possible bipartitions as \(\alpha\) and its cardinality as \(c(\alpha)\), we have
\[\mathcal{G}(\rho)=\sqrt[c(\alpha)]{\prod_{\alpha_{ij}\in\alpha}\mathcal{C}_{A_{\alpha_{ij}}B_{\alpha_{ij}}}}\,. \tag{41}\]
The strong monotonicity of GBC is proved as follows. Bipartite concurrence satisfies the property of strong monotonicity [36], i.e.,
\[\sum_{i}p_{i}\mathcal{C}_{AB}\bigl{(}\sigma_{i}\bigr{)}\leq\mathcal{C}_{AB}( \rho)\,, \tag{42}\]
where \(\{p_{i},\sigma_{i}\}\) is an ensemble produced by an arbitrary LOCC channel \(\Lambda_{\text{LOCC}}\) acting on the multipartite quantum state \(\rho\), such that
\[\Lambda_{\text{LOCC}}(\rho)=\sum_{i}p_{i}\sigma_{i}\,. \tag{43}\]
Note that \(\Lambda_{\text{LOCC}}\) is also an LOCC channel with respect to the bipartition \(\{A|B\}\). Hereby we examine the monotonicity for pure states, i.e., when \(\rho\) and \(\sigma_{i}\) are all pure states. Since all the bipartite concurrences in the definition of GBC satisfy Eq. (42), we have
\[\begin{split}\mathcal{G}(\rho)&\geq\sqrt[c(\alpha)]{\prod_{\alpha_{ij}\in\alpha}\Big{(}\sum_{i}p_{i}\mathcal{C}_{A_{\alpha_{ij}}B_{\alpha_{ij}}}(\sigma_{i})\Big{)}}\\ &\geq\sum_{i}p_{i}\sqrt[c(\alpha)]{\prod_{\alpha_{ij}\in\alpha}\mathcal{C}_{A_{\alpha_{ij}}B_{\alpha_{ij}}}(\sigma_{i})}=\sum_{i}p_{i}\,\mathcal{G}(\sigma_{i})\,,\end{split} \tag{44}\]
where the second inequality follows from the superadditivity of the geometric mean. Hence \(\mathcal{G}\) is nonincreasing on average under LOCC for pure states, and the monotonicity carries over to mixed states via the convex-roof construction.
The state \(\ket{G_{1,1,\cdots,1}}_{\mathbf{A;B}}\) is a tensor product of \(N\) Bell states \(\ket{\Phi_{1}}_{A_{1}B_{1}}\otimes\ket{\Phi_{1}}_{A_{2}B_{2}}\otimes\cdots\otimes \ket{\Phi_{1}}_{A_{N},B_{N}}\), which is expressed as
\[\begin{split}\ket{G_{1,1,\cdots,1}}_{\mathbf{A;B}}&= \frac{1}{\sqrt{2^{N}}}\Big{[}\ket{\overline{0}}_{A}\ket{\overline{0}}_{B}\\ &+\cdots+\Big{|}\overline{2^{N}-1}\Big{\rangle}_{A}\Big{|} \overline{2^{N}-1}\Big{\rangle}_{B}\Big{]}\,.\end{split} \tag{52}\]
Then, consider the \(4^{N}\) generalized Bell states
\[\begin{split}\ket{G_{i1,i2,\cdots,iN}}_{\mathbf{A;B}}& =\ket{\Phi_{i1}}_{A_{1},B_{1}}\\ &\otimes\ket{\Phi_{i2}}_{A_{2},B_{2}}\otimes\cdots\otimes\ket{ \Phi_{iN}}_{A_{N},B_{N}}\,,\end{split} \tag{53}\]
where \(i1,i2,\cdots,iN=1,2,3,4\).
Now consider the teleportation by using the maximally entangled state (MES) channel
\[\ket{MES}_{\mathbf{A;B}}=\frac{1}{\sqrt{2^{N}}}\sum_{I=1}^{2^{N}}\ket{I}_{A}\ket{I^{\prime}}_{B}\,, \tag{54}\]
where the \(\ket{I}_{A}\)s and \(\ket{I^{\prime}}_{B}\)s are orthogonal bases of the Hilbert spaces of Alice's \(N\) qubits and Bob's \(N\) qubits, respectively. It is noted that \(\ket{MES}_{\mathbf{A;B}}\) may be a _genuinely_ multipartite entangled state, which cannot be reduced to a tensor product of \(N\) Bell states in terms of the \(2N\) parties. Using local unitary operators applied by Alice and Bob, \(\ket{MES}_{\mathbf{A;B}}\) connects with \(\ket{G_{1,1,\cdots,1}}_{\mathbf{A;B}}\) as
\[\ket{MES}_{\mathbf{A;B}}=U_{1}(AN)U_{2}(BN)\ket{G_{1,1,\cdots,1}}_{\mathbf{A;B }}\,, \tag{55}\]
where the unitary operators \(U_{1}(AN)\) and \(U_{2}(BN)\) act on the \(N\) qubits of Alice and Bob, respectively. Then, the joint state of the initial state of the \(N\) qubits to be teleported and the entanglement channel can be expressed as
\[\ket{\phi}_{\mathbf{a}}\otimes\ket{MES}_{\mathbf{A;B}}\] \[=\sum_{i1,i2,\cdots,iN=1}^{4}U_{1}(AN)U_{2}(aN)\ket{G_{i1,i2, \cdots,iN}}_{\mathbf{A;a}}\] \[\quad\otimes U(i1)^{-1}U(i2)^{-1}\cdots U(iN)^{-1}\ket{\phi}_{ \mathbf{B}}\,, \tag{56}\]
where \(\ket{\phi}_{\mathbf{B}}\) is the same as the initial state of qubits \(\mathbf{a}\) in Eq. (50). Here, \(U(1)=I\), \(U(2)=\sigma_{z}\), \(U(3)=\sigma_{x}\), \(U(4)=i\sigma_{y}\), with \(\sigma_{x,y,z}\) being the three Pauli operators. If Alice performs a complete projective measurement jointly on the \(2N\) qubits and obtains an outcome \(U_{1}(AN)U_{2}(aN)\ket{G_{i1,i2,\cdots,iN}}_{\mathbf{A;a}}\), then Bob will achieve an accurate replication of \(\ket{\phi}_{\mathbf{a}}\) after he performs the unitary operations \(U(i1)\otimes U(i2)\otimes\cdots\otimes U(iN)\) on his qubits in accordance with Alice's measurement result. Hence, teleportation is achieved by Alice and Bob.
It is shown that the multipartite entanglement channel in Eq. (55) is necessary and sufficient for the faithful teleportation of an \(N\)-qubit state together with \(2N\) bits of classical communication. The GHZ states and \(W\) states of \(2N\) qubits, however, cannot be transformed into a maximally entangled state through local operations performed by Alice and Bob, thus they cannot be effectively utilized for faithful teleportation of an arbitrary \(N\)-qubit state.
The dense coding scheme using the genuine four-partite entangled state \(\ket{\chi^{00}}\) communicates 4 bits of classical information perfectly by sending only two particles, which is impossible with a four-partite GHZ or \(W\) state [61]. The maximally entangled five-qubit state (which exhibits genuine multipartite entanglement) \(\ket{\psi_{5}}=\frac{1}{2}(\ket{001}\ket{\Phi_{4}}+\ket{010}\ket{\Phi_{2}}+\ket{100}\ket{\Phi_{3}}+\ket{111}\ket{\Phi_{1}})\) as a resource for superdense coding is discussed in Ref. [63]. In this case, it is assumed that Alice owns the first three qubits and Bob has the other two qubits of \(\ket{\psi_{5}}\). Then Alice applies one of the sets of unitary transformations on her particles, which is the process of encoding. After having performed one of the operations described above, Alice sends her qubits to Bob using a conventional physical medium. Bob can then perform a five-partite measurement to extract the classical bits, which corresponds to decoding. It is noted that Alice and Bob have agreed on the dense coding rules before encoding. The capacity of superdense coding for the state \(\ket{\psi_{5}}\) equals 5, reaching the Holevo bound on the maximal amount of classical information that can be encoded.
### Operational multipartite entanglement measures
In Secs. III and IV, we have reviewed various entanglement measures that are defined by axiomatic ansatz. Here we move on to the operationally meaningful measures which can quantify the value of a state for a specific protocol that requires entanglement.
#### V.2.1 Distillable entanglement and entanglement cost
Non-maximally entangled states are insufficient for faithful teleportation as entanglement plays a crucial role in the process. However, by having a sufficient number of copies of a non-maximally entangled state, one can achieve asymptotically faithful teleportation at a certain rate. Considering a scenario where a large number \(n\) of copies of a state \(\rho\) are given, the question arises whether it is possible to transform \(\rho^{\otimes n}\) to an output state \(\sigma^{\otimes n}\) using LOCC operations, where LOCC protocols are assumed to be trace-preserving. To address this question from a physical perspective, we introduce the asymptotic limit approach, and the distance between the state resulting from the distillation protocol and the desired state is measured using the trace norm. If \(n\rightarrow\infty\), the arbitrarily good approximation of \(\sigma^{\otimes m}\) by \(\sigma_{m}\) becomes possible. In other words, one can transform \(\rho^{\otimes n}\) to an output state \(\sigma_{m}\) that approximates \(\sigma^{\otimes m}\) very well for sufficiently large \(m\) with the achievable rate \(r=m/n\).
The distillable entanglement is the supremum of the rates at which maximally entangled states can be obtained from an input supply of states in the form of \(\rho\) over all possible distillation protocols. Denote a general trace-preserving LOCC operation by \(\Lambda\), and let \(\Phi(d)\) represent the density operator corresponding to the maximally entangled state vector in dimension \(d\), then the distillable entanglement is defined as
\[E_{D}(\rho)=\sup\biggl{\{}r:\lim_{n\to\infty}\Bigl{[}\inf_{\Lambda}\|\Lambda( \rho^{\otimes n})-\Phi(2^{rn})\|_{1}\Bigr{]}=0\biggr{\}}\,. \tag{57}\]
It is noted that the state \(\Phi(2^{rn})\) is local unitarily equivalent to \(rn\) copies of the two-qubit maximally entangled state \(\Phi(2)^{\otimes rn}\).
Now consider how many qubits we have to communicate in order to create a state \(\rho\). The entanglement cost quantifies the maximal possible rate \(r\) at which one can generate states that are close approximations of many copies of \(\rho\) using a supply of blocks of two-qubit maximally entangled states. It provides insights into the efficiency of converting entangled resources to states that approximate the target state \(\rho\), which is given by
\[E_{C}(\rho)=\inf\biggl{\{}r:\lim_{n\to\infty}\Bigl{[}\inf_{\Lambda}\|\rho^{ \otimes n}-\Lambda(\Phi(2^{rn}))\|_{1}\Bigr{]}=0\biggr{\}}\,. \tag{58}\]
Computing \(E_{D}(\rho)\) and \(E_{C}(\rho)\) is a challenging task, however, these measures have significant implications in the investigation of channel capacities, and also closely related to the entanglement of formation and relative entropy of entanglement [64, 65, 66].
For the multipartite scenario, the definition of distillable entanglement can be tailored to specific target states, such as GHZ states or cluster states, capturing different properties of the state being considered. Similar situations arise when defining the entanglement cost, where singlet states are commonly used as the resource, but other resources like GHZ or \(W\) states can also be considered. Different experimental settings and resource availability motivate various definitions of entanglement cost and distillable entanglement. The operational point of view is crucial in understanding and interpreting these measures.
Bound entangled states are closely related to the problem of entanglement distillation. Generally, states that are entangled yet not distillable are termed as _bound entangled_[2, 67], which include positive partial transpose (PPT) entangled states. Figure 2 illustrates the set of all states for _bipartite_ systems, including separable states (SEP), PPT entangled states, _hypothetical_ non-PPT (NPT) nondistillable states (or NPT bound entangled states), and distillable states [68, 69, 70]. It is clear that separable states cannot be distilled, and all PPT states are not distillable. Notably, for \(2\otimes 2\) and \(2\otimes 3\) systems, it is known that all PPT states are separable and all NPT states are distillable. However, whether the hypothetical NPT nondistillable states exist remains an important and open question.
#### V.2.2 Multipartite distillable key
The distillable (secret) key quantifies the asymptotic rate at which Alice and Bob may distill secret classical bits from many copies of a shared quantum state. Rather than focusing on distilling singlets, one can explore the distillation of private states, which encompasses a broader class of states. Let's start with the \(m\)-partite private state, or \(m\)-partite _pdit_[71]
\[\Gamma^{(d)}=\sum_{i,j=0}^{d-1}\frac{1}{d}\,|ii\cdots i\rangle\langle jj\cdots j |\otimes U_{i}\rho_{A^{\prime}_{1},A^{\prime}_{2},\cdots,A^{\prime}_{m}}U^{ \dagger}_{j}\,, \tag{59}\]
where \(\rho_{A^{\prime}_{1},A^{\prime}_{2},\cdots,A^{\prime}_{m}}\) denotes a state on the Hilbert space \(\mathcal{H}_{A^{\prime}_{1},A^{\prime}_{2},\cdots,A^{\prime}_{m}}\), and \(U_{i}(i=0,1,\cdots,d-1)\) are some unitary operations. Thus the state \(\Gamma^{(d)}\) is defined on the Hilbert space \((\mathcal{H}_{A_{1}}\otimes\mathcal{H}_{A_{2}}\otimes\cdots\otimes\mathcal{H}_{A_{m}})\otimes(\mathcal{H}_{A^{\prime}_{1}}\otimes\mathcal{H}_{A^{\prime}_{2}}\otimes\cdots\otimes\mathcal{H}_{A^{\prime}_{m}})\). The system \(A_{1},A_{2},\cdots,A_{m}\) giving secure bits of key is called the _key part_ of the private state, and the system \(A^{\prime}_{1},A^{\prime}_{2},\cdots,A^{\prime}_{m}\), which protects the key in system \(A_{1},A_{2},\cdots,A_{m}\) from the eavesdropper, is called the _shield part_.
For a given multipartite state \(\rho_{A_{1},A_{2},\cdots,A_{m}}\) acting on \(\mathcal{H}_{A_{1}}\otimes\mathcal{H}_{A_{2}}\otimes\cdots\otimes\mathcal{H }_{A_{m}}\), consider a sequence \(\Lambda_{n}\) of LOCC operations such that \(\Lambda_{n}\bigl{(}\rho_{A_{1},A_{2},\cdots,A_{m}}^{\otimes n}\bigr{)}=\sigma ^{(n)}\), where \(n\) is the number of copies of the state. A set of operations \(\mathcal{P}=\cup_{n=1}^{\infty}\{\Lambda_{n}\}\) is called a pdit distillation protocol of state \(\rho_{A_{1},A_{2},\cdots,A_{m}}\) if the condition
\[\lim_{n\to\infty}\|\sigma^{(n)}-\Gamma^{(d_{n})}\|_{1}=0 \tag{60}\]
holds. Here \(\Gamma^{(d_{n})}\) is a multipartite pdit whose key part is of dimension \(d_{n}\times d_{n}\). For a protocol \(\mathcal{P}\), its rate is given by taking the limit superior of \((\log_{2}d_{n})/n\), i.e.,
\[\mathcal{R}(\mathcal{P})=\limsup_{n\to\infty}\frac{\log_{2}d_{n}}{n}\,. \tag{61}\]
Then the distillable key of state \(\rho_{A_{1},A_{2},\cdots,A_{m}}\) is given by [72]
Figure 2: Schematic representation of the set of all states for bipartite systems, including the SEP states, PPT entangled states, hypothetical NPT nondistillable states, and distillable states.
\[K_{D}^{(m)}(\rho_{A_{1},A_{2},\cdots,A_{m}})=\sup_{\mathcal{P}}\mathcal{R}( \mathcal{P})\,. \tag{62}\]
For the \(m\)-partite quantum state \(\rho\), the normalized multipartite squashed entanglement is an upper bound on the distillable key [37]
\[K_{D}^{(m)}(\rho)\leq\frac{1}{m}E_{sq}^{q}(\rho)\,. \tag{63}\]
Another upper bound of the distillable key is the regularized version of the relative entropy [72].
When considering the distribution of secret keys, the multipartite private states, which are the output of a protocol that distills secret key among trusted parties, are necessarily genuine multipartite entangled [19]. Besides, the distillation of private states is the goal of quantum key repeater, a device that extends the distance of quantum key distribution (QKD) while being capable of tolerating higher noise levels and beyond the limitations of entanglement distillation [73, 74].
#### V.2.3 Accessible entanglement and source entanglement
If a quantum state \(\rho\) can be transformed to another quantum state \(\sigma\) by LOCC, then \(\rho\) is considered to be at least as useful as \(\sigma\). This is because any application or protocol that can be achieved using \(\sigma\) can also be achieved using \(\rho\), but the reverse may not be true. If the parties are provided with \(\rho\), they can transform it to \(\sigma\) without incurring any additional cost and proceed with the desired protocol. The study of LOCC transformations allows us to determine the relative usefulness of different quantum states and forms the basis for quantifying entanglement. Moreover, LOCC corresponds to those operations that can be implemented without consuming any shared entanglement resources; thus entanglement measures must be quantities that are nonincreasing under LOCC transformations.
In order to further understand multipartite entanglement, entanglement measures with an operational meaning for arbitrary multipartite states (pure or mixed) of any dimension are introduced [75]. These measures, known as _accessible_ and _source_ entanglement, can be computed once the possible LOCC transformations are characterized. If there exists a deterministic LOCC protocol that can transform state \(\ket{\psi}\) to state \(\ket{\phi}\), we say that state \(\ket{\psi}\) can _reach_ state \(\ket{\phi}\), and state \(\ket{\phi}\) is _accessible_ from state \(\ket{\psi}\). For a given state \(\ket{\psi}\), the accessible set \(M_{a}(\ket{\psi})\) is defined as the set of states that are reachable via LOCC from \(\ket{\psi}\), while the source set \(M_{s}(\ket{\psi})\) is the set of states that can reach \(\ket{\psi}\). Then, taking \(\mu\) to be an arbitrary measure on the set of local-unitary equivalence classes, the accessible volume, which measures the amount of states that can be accessed from \(\ket{\psi}\), and the source volume, which measures the amount of states that can be used to reach it, are \(V_{a}(\ket{\psi})=\mu[M_{a}(\psi)]\) and \(V_{s}(\ket{\psi})=\mu[M_{s}(\psi)]\), respectively. The intuition is that the larger the accessible volume is, the more valuable the state is, while the larger the source volume is, the less powerful the state is. Thus, the accessible entanglement and the source entanglement are given by
\[E_{a}(\ket{\psi})=\frac{V_{a}(\ket{\psi})}{V_{a}^{\rm sup}}\,,\;\;E_{s}( \ket{\psi})=1-\frac{V_{s}(\ket{\psi})}{V_{s}^{\rm sup}}\,, \tag{64}\]
where \(V_{a}^{\rm sup}\) and \(V_{s}^{\rm sup}\) denote the maximal accessible volume and the maximal source volume according to the measure \(\mu\), respectively. It is worth noting that the operational character of these measures makes it straightforward to prove that they are nonincreasing under LOCC. Moreover, \(M_{s}(\ket{\psi})=\emptyset\) iff the state belongs to the MES, because the maximally entangled states are the only states that cannot be reached from any other state [76]. For example, the \(W\) state attains the maximal values \(E_{a}(\ket{W})=1\) and \(E_{s}(\ket{W})=1\), which highlights its exceptional nature as the most valuable state within the \(W\) class.
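To make Eq. (64) concrete, the following sketch evaluates the accessible and source entanglement in the simplest nontrivial setting: bipartite pure qubit states \(\sqrt{p}\ket{00}+\sqrt{1-p}\ket{11}\) with \(p\in[1/2,1]\), for which Nielsen's majorization theorem characterizes the deterministic LOCC transformations. This is an illustration added here, not an example from Ref. [75], and the uniform measure on the Schmidt parameter \(p\) is an assumed choice of \(\mu\).

```python
def accessible_source_entanglement(p):
    """E_a and E_s of sqrt(p)|00> + sqrt(1-p)|11>, p in [1/2, 1],
    with mu assumed to be the uniform measure on p."""
    assert 0.5 <= p <= 1.0
    v_sup = 0.5          # length of the whole parameter interval [1/2, 1]
    v_a = 1.0 - p        # Nielsen's theorem: |psi(p)> reaches every q in [p, 1]
    v_s = p - 0.5        # and is reachable from every q in [1/2, p]
    return v_a / v_sup, 1.0 - v_s / v_sup

print(accessible_source_entanglement(0.5))   # maximally entangled state: (1.0, 1.0)
print(accessible_source_entanglement(1.0))   # product state: (0.0, 0.0)
print(accessible_source_entanglement(0.75))  # intermediate state: (0.5, 0.5)
```

Under these assumptions the maximally entangled state attains \(E_{a}=E_{s}=1\), mirroring the role played by the \(W\) state in the genuinely multipartite setting discussed above.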
#### iv.1.4 Concentratable entanglement
As a computable and operationally meaningful family of entanglement monotones, the concentratable entanglement was introduced in Ref. [77]. Consider an \(N\)-qubit pure quantum state \(\ket{\phi}\), let \(\mathcal{S}=\{1,2,\cdots,N\}\) be the set of qubit labels, and let \(\mathcal{P}(\mathcal{S})\) denote its power set. For any nonempty set of qubit labels \(s\in\mathcal{P}(\mathcal{S})\setminus\{\emptyset\}\), the concentratable entanglement is defined as
\[\mathcal{C}_{\rm C}(s)=1-\frac{1}{2^{c(s)}}\sum_{a\in\mathcal{P}(s)}\mathrm{Tr} \Big{(}\rho_{a}^{2}\Big{)}\,, \tag{65}\]
where \(c(s)\) is the cardinality of \(s\), and \(\rho_{a}\) represents the joint reduced state of the qubits labeled by \(a\). The purity \(\mathrm{Tr}\big{(}\rho_{a}^{2}\big{)}\) can be computed via an overlap test, for instance by employing the \(N\)-qubit parallelized swap test to evaluate the state overlap \(|\langle\phi_{1}|\phi_{2}\rangle|^{2}\). For the \(N\)-qubit \(\ket{W}\) and \(\ket{\mathrm{GHZ}}\) states, one finds that \(\mathcal{C}_{\rm C}^{\ket{\mathrm{GHZ}}}(s)>\mathcal{C}_{\rm C}^{\ket{W}}(s)\), so the concentratable entanglement detects more multipartite entanglement in \(\ket{\mathrm{GHZ}}\) than in \(\ket{W}\).
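As a numerical sanity check of this comparison, the following sketch evaluates Eq. (65) with \(s=\mathcal{S}\) for the three-qubit GHZ and \(W\) states by summing the purities of all reduced states. The helper functions and names are purely illustrative, not taken from Ref. [77].

```python
import itertools
import numpy as np

def reduced_purity(psi, keep, n):
    """Purity Tr(rho_a^2) of the reduced state on the qubits listed in `keep`."""
    if len(keep) == 0:
        return 1.0  # by convention the empty subset contributes purity 1
    psi = psi.reshape([2] * n)
    traced = [q for q in range(n) if q not in keep]
    # rho_a = Tr_traced |psi><psi|, obtained by contracting the traced-out indices
    rho = np.tensordot(psi, psi.conj(), axes=(traced, traced))
    d = 2 ** len(keep)
    rho = rho.reshape(d, d)
    return float(np.real(np.trace(rho @ rho)))

def concentratable_entanglement(psi, n):
    """Eq. (65) with s = S: 1 - 2^{-n} * sum of Tr(rho_a^2) over all subsets a."""
    total = sum(reduced_purity(psi, subset, n)
                for r in range(n + 1)
                for subset in itertools.combinations(range(n), r))
    return 1.0 - total / 2 ** n

n = 3
ghz = np.zeros(2 ** n); ghz[0] = ghz[-1] = 1 / np.sqrt(2)
w = np.zeros(2 ** n)
for q in range(n):
    w[2 ** q] = 1 / np.sqrt(n)

print(concentratable_entanglement(ghz, n))  # 0.375 for |GHZ>
print(concentratable_entanglement(w, n))    # ~0.3333 for |W>, so GHZ > W
```

For three qubits this yields \(\mathcal{C}_{\rm C}^{\ket{\mathrm{GHZ}}}=3/8\) and \(\mathcal{C}_{\rm C}^{\ket{W}}=1/3\), consistent with the ordering stated above.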
The \(N\)-tangle, concurrence, and linear entropy of entanglement can be seen as special cases of concentratable entanglement. For two-qubit systems, \(\mathcal{C}_{\rm C}(s)=\tau_{(2)}/4=\mathcal{C}^{2}/4\) for all \(s\in\mathcal{P}(\mathcal{S})\setminus\{\emptyset\}\), where \(\tau_{(N)}=|\langle\phi|\tilde{\phi}\rangle|^{2}\) with \(\ket{\tilde{\phi}}=\sigma_{y}^{\otimes N}\ket{\phi^{*}}\) and \(\mathcal{C}\) is the concurrence. Concentratable entanglement quantifies the probability that at least one swap test fails when \(N\) of them are applied in parallel across two copies of \(\ket{\phi}\); operationally, it is the probability of producing Bell pairs in the parallelized swap test. The controlled-swap (c-swap) operation is an experimentally accessible tool and a fundamental component of quantum communication protocols for measuring and witnessing entanglement.
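The two-qubit relation can be checked directly on the Schmidt-decomposed family \(\sqrt{p}\ket{00}+\sqrt{1-p}\ket{11}\), for which \(\mathcal{C}=2\sqrt{p(1-p)}\). The short sketch below is an illustration under this assumed parametrization, not code from Ref. [77].

```python
import numpy as np

for p in np.linspace(0.0, 1.0, 11):
    purity = p**2 + (1 - p)**2         # Tr(rho_1^2) = Tr(rho_2^2) in Schmidt form
    # subsets of S = {1, 2}: the empty set and the full (pure) state both have purity 1
    cc = 1 - (1 + 2 * purity + 1) / 4  # Eq. (65) with s = {1, 2}
    concurrence = 2 * np.sqrt(p * (1 - p))
    assert np.isclose(cc, concurrence**2 / 4)
print("C_C = C^2 / 4 holds across the Schmidt family")
```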
## VI Summary and outlook
In this review, we have surveyed various multipartite entanglement measures with an emphasis on the genuine and operational multipartite entanglement measures. The intention behind this review is to provide valuable insights and serve as a helpful resource for future research in the field of quantum information processing. In addition, we hope that this review would inspire and guide researchers in their endeavors to further develop novel approaches for characterizing multipartite entanglement.
In spite of continuous progress, the theory of entanglement measures is still marked by a number of outstanding open problems, some of which are the following:
1. The quest for an effective and universally applicable measure to quantify genuine multipartite entanglement in systems involving more than three parties remains a significant challenge.
2. While theoretical entanglement measures provide valuable insights, operational measures are designed to be practically applicable in experimental scenarios. Thus, the exploration and advancement of new operational measures for multipartite entanglement are essential.
3. Extending entanglement measures to mixed multipartite states is also a meaningful direction. Real-world quantum systems often involve mixed states due to unavoidable interactions with the environment; hence, understanding and quantifying multipartite entanglement in mixed states is critical for the successful implementation of various quantum information processing tasks.
In summary, we believe that the research on multipartite entanglement measures is still ongoing and is likely to have a crucial impact on other tasks in the era of quantum information science.
## VII Declaration of competing interest
The authors declare that they have no conflicts of interest in this work.
###### Acknowledgements.
This work was supported by the National Natural Science Foundation of China (Grants No. 92265115 and No. 12175014) and the National Key R&D Program of China (Grant No. 2022YFA1404900).